danenania's comments

You can do state management easily enough with variables and a rerender function. React et al give you granular DOM updates for performance, but for a simple app it doesn't really matter.
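A minimal sketch of that variables-plus-rerender approach in plain JavaScript (the render function returns a markup string here instead of writing to the DOM, just to keep the sketch self-contained):

```javascript
// All state lives in one object; every mutation goes through setState,
// which re-renders everything from scratch.
let state = { count: 0, items: [] };

function render(s) {
  // In a browser you'd assign this to container.innerHTML;
  // here it just returns the markup string.
  return `<p>Count: ${s.count}</p><ul>${s.items
    .map((item) => `<li>${item}</li>`)
    .join("")}</ul>`;
}

let output = render(state);

function setState(patch) {
  state = { ...state, ...patch };
  output = render(state); // naive full rerender, fine for a simple app
}

setState({ count: 1, items: ["a", "b"] });
console.log(output); // <p>Count: 1</p><ul><li>a</li><li>b</li></ul>
```

Rerendering everything on each change is O(whole page), which is exactly the cost React's granular updates avoid; for a small app it's imperceptible.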

> If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.

Why wouldn't you just 10x the productive output instead?


I don't think it would be trivial to increase demand by 10x (or even 2x) that quickly. Eventually, a publicly traded company will have a bad quarter, at which point it's much easier to just reduce the number of employees. In both scenarios, there's no need for any new hires.

I think there’s always demand for more software and more features. Have you ever seen a team without a huge backlog? The demand is effectively infinite.

Isn’t a lot of stuff in the backlog because it’s not important enough to the bottom line to prioritize?

Right, that’s kind of the whole point. If it’s in the backlog, someone thinks it’s valuable, but you might never get to it because of other priorities. If you’re 10x more productive, that line gets pushed a lot farther out, and your product addresses more people’s needs, has fewer edge case bugs, and so on.

If the competition instead uses their productivity boost to do layoffs and increase short term profits, you are likely to outcompete them over time.


Congrats guys—looking good!

For the managed service, how do you think about the N+1 request/query issue and latency with things like org membership checks and authz checks? This always pushes me to want this stuff in my db, or at least on my side of the network line. It seems Tesseral is self-hostable, which is awesome and could be a solution, but I'd probably rather just use the managed service if it weren't for this issue.


Since Tesseral's data model is that users belong-to organizations, anytime you have a user, an organization is also available to you (e.g. in the context of a JWT's claims, or an API call to `api.tesseral.com/v1/users/user_...`, etc.).

For authz checks, you get a similar denormalization when you use Tesseral's RBAC. When a user gets an access token, that access token carries a list of `actions` the user is allowed to carry out. All of our SDKs have a `hasPermission` function that basically just does `accessToken.actions.contains(...)`:

e.g. Go: https://pkg.go.dev/github.com/tesseral-labs/tesseral-sdk-go@...

Again in Go, here's the data type for access tokens:

https://pkg.go.dev/github.com/tesseral-labs/tesseral-sdk-go#... (organization lives in .Organization, list of permissions lives in .Actions)

So we do a little bit of denormalization whenever we mint an access token, but in exchange your code doesn't need to do any network hops to get an organization or do a permission check. (Access tokens are ES256-signed, and our SDKs handle caching the public keys, so that network hop is very infrequent.)
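A sketch of that local permission check in JavaScript. The claim shape (an `organization` plus a list of `actions`) follows the description above, but the field names here are illustrative, not the exact Tesseral SDK types:

```javascript
// Local, network-free permission check: the allowed actions were
// denormalized into the access token when it was minted, so checking
// a permission is just a list membership test on the decoded claims.
function hasPermission(accessTokenClaims, action) {
  return (accessTokenClaims.actions || []).includes(action);
}

// Hypothetical decoded claims for illustration.
const claims = {
  organization: { id: "org_123", displayName: "Acme" },
  actions: ["projects.read", "projects.write"],
};

console.log(hasPermission(claims, "projects.write")); // true
console.log(hasPermission(claims, "billing.manage")); // false
```

The only network-dependent step is verifying the token's ES256 signature, and since the public keys are cacheable, that hop is rare in practice.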


Oh nice, that seems ideal. Good stuff.

Management involves a lot of difficult tradeoffs, and this is one of them. Asking for feedback privately creates the feeling of "people talking behind your back" that can erode trust. But if you instead try to have full transparency and facilitate direct/open feedback, you either get open conflicts/retaliation or (more likely) people hold back and don't give honest feedback, so problems and resentments never get addressed.

Great post.

Another benefit of this approach is it’s simply much easier. If you’re trying to act like some smooth corporate salesperson or be overly formal or whatever and that’s not really you, interacting with customers and prospects and… everyone… will feel tiring and painful.

But if you drop the pretense and just act like yourself? Minimal extra energy required. As a bonus, it opens you up to make real connections with people who you click with as you run your business.

So it works, it's easier, and it's more fun, with basically no downsides. Yet it's still something most founders seem to learn the hard way for some reason.


I agree with your comment and couldn’t agree more with this article. It’s solid advice for anyone just starting out with a product or service.

Speaking authentically and admitting you don’t have all the answers is genuine, not weak. That kind of honesty has always worked best for me.

People respond better to real conversations, concrete examples, and the feeling that you’re building with them, not just selling at them.

In my experience, working with smaller businesses has opened more doors than chasing big corporate clients. Smaller companies tend to be more curious, open to new ideas, and quick to take action.

That said, “dress to impress” can work, but in my experience, it’s often a short-lived win. It grabs attention, but rarely builds lasting trust or real traction. Not a playbook I buy into.

For example, I recently sat through a 3-hour pitch from a so-called “AI consultant.” The presentation was packed with buzzwords, vague promises, and a sleek slide deck. Every time someone asked how AI would actually solve a specific problem, the answer was basically: “AI will handle that,” followed by name-dropping a popular AI company like it was the solution to everything. It was clear the consultant didn’t fully understand the tech, but the leadership team still ate it up.

This article was a great reminder that trying to sound big and impressive might get attention early on, but it often backfires later. Being honest and straightforward has always been my real strength, even if it keeps me small.


Taleb has written about dressing to impress and looking the part: https://medium.com/incerto/surgeons-should-notlook-like-surg...


Thanks for sharing; it's packed with solid insights.


Paywalled? Needs a signup apparently.

I don't find it paywalled, but here is an archived version for you to read: https://archive.is/e4O1W

For a limited time they're also throwing in a Sharper Image lava lamp.


It might not be as many API calls as you think. Taking OpenAI as an example, if you're using the most expensive models like o3, gpt-4.5, o1-pro, etc. heavily for coding in large codebases with lots of context, you can easily spend hundreds per month, or even thousands.

So for now, the pro plans are a good deal if you're using one provider heavily, in that you can theoretically get like a 90% discount on inference if you use it enough. They are essentially offering an uncapped amount of inference.

That said, these companies have every incentive to gradually reduce the relative value offered by these plans over time to make them profitable, and they have many levers they can use to accomplish that. So in the long run, API costs and 'pro plan' costs will likely start to converge.
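For a rough sense of the break-even math, here's the comparison sketched out (all numbers are hypothetical for illustration, not actual provider pricing):

```javascript
// Compare a flat monthly "pro" plan against metered API pricing.
const proPlanPerMonth = 200;      // e.g. a $200/mo unlimited plan
const apiCostPerMTokens = 60;     // blended $/1M tokens for a top-tier model
const monthlyTokensMillions = 40; // heavy coding usage with lots of context

const apiCost = apiCostPerMTokens * monthlyTokensMillions;
const effectiveDiscount = 1 - proPlanPerMonth / apiCost;

console.log(apiCost);                                   // 2400
console.log(Math.round(effectiveDiscount * 100) + "%"); // 92%
```

At that (hypothetical) usage level, the flat plan is roughly a 90% discount versus paying per token, which is why the plans only make sense for the provider if most subscribers use far less, or if the plan's relative value erodes over time.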


You can try my project Plandex[1] to use Gemini in a way that's comparable to Claude Code without copy-pasting. By default, it combines models from the major providers—Anthropic, OpenAI, and Google.

The default planning/coding models are still Sonnet 3.7 for context sizes under 200k tokens, but you can switch to Gemini with `\set-model gemini-preview`.

1 - https://github.com/plandex-ai/plandex


Right, you have the same issues to consider when shipping a breaking major version upgrade to a new library in any language/ecosystem.

That said, you do see a cultural difference in node-land vs. many other ecosystems where library maintainers are much quicker to go for the new major version vs. iterating on the existing version and maintaining backward compatibility. I think that's what people are mostly really referring to when they complain about node/npm.

Webpack is a good example: what's it on now, version 5? If it were a Go project, it would probably still be on v1 or maybe v2. While the API arguably does get 'better' with each major version, in practice a lot of the changes are cosmetic and unnecessary, and I think that's what frustrates people. Very few people really care how nice the API of their bundler is. They just want it to work and to receive security updates. Any time you spend on upgrading and converting APIs is basically a waste; it's not adding value for users of your software.


This very much. If I'm using your library, I've already committed to its architecture and API with all their flaws. And my users don't care about the technology I use. Even if those aren't that good, I can build a wrapper over the worst aspects and then just focus on building features. New features are nice, but I'm more interested in getting security updates for the current version that I have.

When it's finally time for a big refactor, the trade-off between cost and return is worth it, since you can go years between those huge refactors.


My tool Plandex[1] allows you to switch between automatic and manual context management. It can be useful to begin a task with automatic context while scoping it out and making the high level plan, then switch to the more 'aider-style' manual context management once the relevant files are clearly established.

1 - https://github.com/plandex-ai/plandex

Also, a bit more on auto vs. manual context management in the docs: https://docs.plandex.ai/core-concepts/context-management

