
The argument is that most AI systems fail socially, not technically, because they don’t design the loop that compounds trust over time: transparent boundaries, recoverable errors, and feedback that feels fair to users.

Curious what people here think: does this “trust design” framing resonate with your experience?

Have you seen teams that intentionally engineer trust into their rollout process? Or is this more often a side effect of limited resources at early-stage startups rather than a deliberate strategy of constraint?


Nope, I used to believe the hype that it was anti-intellectual to say it sucks and it's useless, or that "this is the worst it will ever be", but I think sentiment is changing.

But I think the best example is this: the phone was invented in 1876. A few years later, some crazy visionary could have imagined the iPhone, started iterating, and poured billions of dollars into it. Maybe that pulls the date in by a few years, so we get it in 2005 instead of 2008? Look at Babbage's original computers: if he'd committed to them, would we have gotten the first digital computer dramatically earlier? I assert no. Yes, we can pour billions or even trillions of dollars into this technology, but it's probably just too early to do so, and in fact it may make the trust problem worse.

Also, just for clarity's sake: I'm drawing a line in the sand and making broad statements; there's obviously more nuance. There are plenty of uses for the current iteration of AI, and slowly iterating is obviously a good strategy. What I would say is that we are iterating too quickly. There's a reason most computer usage was relegated to "nerds" for a long time. Same with the internet. But we're not rolling out AI to a few small groups of nerds, we're rolling it out to everyone, and that won't dramatically increase long-term adoption or utility. Also, I do think there may actually be a chance that it gets worse before it gets better.


AI startups face "reset risk": each model breakthrough can instantly obsolete existing players. Standard SaaS metrics (CAC/LTV, retention) don't capture this. The piece proposes a 4-layer framework to assess real defensibility (rough sketch below):

1. Tech foundation – real performance delta, improvement velocity, and data edge.

2. Product depth – how deeply it solves the user's job, embeds in workflow, and operates autonomously.

3. Market positioning – timing of capability thresholds, vertical vs horizontal strategy, and market structure.

4. Sustainability tests – what happens if GPT-N drops tomorrow, how costly it is to switch, and whether new users still come.
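
As a toy illustration only (the layer names, weights, and example ratings below are my own hypothetical choices, not anything from the piece), turning the framework into a crude scoring rubric might look roughly like this:

    # Toy sketch: score a startup's defensibility across the four layers above.
    # All weights and ratings are hypothetical illustrations.
    LAYERS = {
        "tech_foundation": 0.3,     # performance delta, improvement velocity, data edge
        "product_depth": 0.3,       # job coverage, workflow embedding, autonomy
        "market_positioning": 0.2,  # capability-threshold timing, vertical vs horizontal
        "sustainability": 0.2,      # "GPT-N drops tomorrow" test, switching cost, new-user pull
    }

    def defensibility_score(ratings: dict[str, float]) -> float:
        """Weighted average of 0-10 ratings per layer; missing layers count as 0."""
        return sum(weight * ratings.get(layer, 0.0) for layer, weight in LAYERS.items())

    # Example: strong product depth but a weak sustainability story.
    print(defensibility_score({
        "tech_foundation": 6,
        "product_depth": 8,
        "market_positioning": 5,
        "sustainability": 3,
    }))  # -> 5.8

The point of a rubric like this isn't the number; it's forcing an explicit answer to each layer's questions instead of leaning on CAC/LTV alone.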


insightful, thanks


amazing. congratz!

