> Is the layoff-based business model really the best use case for AI systems?
No, but it plays well in quarterly earnings, so expect to see a good bit of it, I'd say.
> The flaws are baked into the training data.
Not in terms of attack surface.
In terms of hallucinations... ish. It's baked into the structure more than into the training data. There's an increasing amount of work on ensuring well-curated training data; still, the whole idea of probabilistic token selection more or less guarantees it'll go wrong from time to time. (See e.g. https://arxiv.org/abs/2401.11817)
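To illustrate that structural point, here's a minimal sketch with made-up next-token logits (not any particular model's numbers): even when the correct continuation dominates, sampling leaves nonzero probability on everything else, so some error rate is built in.

    # Temperature-based sampling over hypothetical logits for the token after
    # "The capital of France is". The "wrong" tokens keep nonzero probability,
    # so over enough draws some of them get picked; that's structural, not a
    # data-curation problem.
    import math, random

    logits = {"Paris": 9.0, "Lyon": 4.0, "Berlin": 2.5}  # made-up scores
    temperature = 0.8

    def sample(logits, temperature):
        weights = {t: math.exp(v / temperature) for t, v in logits.items()}
        r = random.uniform(0, sum(weights.values()))
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token  # fallback for floating-point edge cases

    draws = [sample(logits, temperature) for _ in range(100_000)]
    wrong = sum(1 for t in draws if t != "Paris")
    print(f"'wrong' continuations: {wrong / len(draws):.4%}")  # small, never zero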
FWIW, that applies to humans too. Nobody is error-free. So the question mostly becomes "does the value outweigh the cost of the errors?", as always.
> The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.
"Trust but verify" applies, as do Murphy's law and the law of unintended consequences.