Human language is far from perfect as a cognitive tool, but it still serves us well because it is not foundational. We use it as a high-level layer, both for communication and for some reasoning and planning.
I strongly believe that human language is too weak (vague, inconsistent, insufficiently expressive, etc.) to replace interaction with the world as a basis for building strong cognition.
We're easily fooled by the results of LLMs/LRMs because we typically use language fluency and knowledge retrieval as a proxy benchmark for intelligence among our peers.
Human language is more powerful than its surface syntax or semantics: it carries meaning beyond formal correctness. We often communicate effectively even with grammatically broken sentences, using jokes, metaphors, or emotionally charged expressions. This richness makes language a uniquely human cognitive layer, shaped by context, culture, and shared experience. While it's not foundational in the same way as sensorimotor interaction, it is far more than just a high-level communication tool.
I agree that language is even more useful as a cognitive tool than as a communication medium.
But that is not my point. The map is not the territory, and this map (language) is too impoverished to build something that gives back more than what it was fed.
Agree with this. Human language is also not very information-dense; there is a lot of redundancy and uninformative repetition of words.
I also wonder about the compounding effects of luck and survivorship bias when using these systems. If you model a series of interactions with these systems probabilistically, as a series of failure/success modes, then you are bound to get a sub-population of users (of LLMs/LRMs) that will undoubtedly have "fantastic" results. This sub-population will then espouse and promote the merits of the system. There is clearly something positive these models do, but how much of the "success" is just luck?
Language mediates those interactions with the world. There is no unmediated interaction with the world. Those moments when one feels most directly in contact with reality, that is when one is so deep down inside language that one cannot see daylight at all.
I don't know about you, but as far as I can tell I mediate and manipulate the world with my body and senses without necessarily using language. In fact, I can often do both at once: for example, thinking about something entirely unrelated while jogging, and still making physical decisions and taking actions without invoking language at all. Plus, animals (and even simpler organisms like amoebas) also interact with the world without needing language.
As far as we can tell, without getting into complex experiential concepts like qualia and the possibility of philosophical zombies, language mainly helps higher-order animals communicate with one another and (maybe) keep a train of thought, though there are records of people who say they don't. And now it also allows humans to talk to LLMs.
But I digress. I would say this is an open academic debate; suggesting that there is always language deep down is speculation.