Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> 70% of CEOs say that they hope to be able to lay people off and replace them with an LLM soon

Is the layoff-based business model really the best use case for AI systems?

> The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.

The flaws are baked into the training data.

"Trust but verify" applies, as do Murphy's law and the law of unintended consequences.




> Is the layoff-based business model really the best use case for AI systems?

No, but it plays well in quarterly earnings. So expect a good bit, I'd say.

> The flaws are baked into the training data.

Not in terms of attack area.

In terms of hallucinations... ish. It's more baked into the structure than the training data. There's increasing amounts of work to ensure well-curated training data. Still, the whole idea of probabilistic selection of tokens more or less guarantees it'll go wrong from time to time. (See e.g. https://arxiv.org/abs/2401.11817)

In terms of safety, there's some progress. CaMeL is rather interesting: https://arxiv.org/abs/2503.18813

> "Trust but verify" applies.

FWIW, that applies to humans too. Nobody is error-free. And so the question becomes mostly "does the value overshadow the cost of the error", as always.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: