Someone described LLMs in the coding space as stone soup. So much stuff is being built around them to make them work better that at some point it feels like you'll be able to remove the LLM from the equation entirely.
We can't deny the LLM has utility. You can't eat the stone, but the LLM can implement design patterns, for example.
I think this insistence on near-autonomous agents is setting the bar too high, which wouldn't be an issue if these companies weren't then insisting that the bar is set just right.
These things understand language extremely well; they've effectively solved NLP, because that's what they model. But agentic behaviour is the domain of reinforcement learning, and until that's built into the foundation model itself (at the token-prediction level), these things have no real understanding of state spaces being a recursive function of action spaces and the like. And they can't autonomously code, drive, or manage a fund until they do.
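To make that recursion concrete, here's a toy sketch (the dynamics, policy, and names are made up purely for illustration, not anyone's actual training setup): in an RL formulation, the next state is a function of the current state and the action taken, so the reachable state space unfolds recursively from the action sequence, and errors compound over the rollout. Next-token prediction never has to plan through that loop.

```python
def transition(state, action):
    # Toy deterministic dynamics: s_{t+1} = f(s_t, a_t)
    return state + action

def rollout(policy, initial_state, horizon):
    # The state space a policy actually visits is generated recursively
    # by its own actions, step after step.
    state = initial_state
    trajectory = []
    for _ in range(horizon):
        action = policy(state)                   # action chosen from the current state
        next_state = transition(state, action)   # state updated by that action
        trajectory.append((state, action, next_state))
        state = next_state
    return trajectory

# A trivial hypothetical policy: push the state up until it hits 5, then back down.
print(rollout(lambda s: 1 if s < 5 else -1, 0, 6))
```

Next-token prediction only has to model p(token | previous tokens) over a fixed corpus; it's the reinforcement-learning objective that has to reason through consequences of its own choices like this.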