Roughly, actual intelligence needs to maintain a world model in its internal representation, not merely an embedding of language; a world model is a very different data structure and will probably be learned in a very different way. This includes things like:
- a map of the world, or of a concept space, or of a codebase, etc.
- causality
- "factoring" which breaks down systems or interactions into predictable parts
Language alone is too blurry to do any of these precisely.
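As a toy sketch of the data-structure difference (purely illustrative; none of these names come from any real system): a language embedding is just a point in a vector space where nearby points are "similar", while a world model has explicit objects, relations, and causal rules you can query.

```python
import numpy as np

# Language-embedding view: "the key opens the door" is a point in vector space.
# Nearby points are "similar", but nothing enforces consistency or causality.
embedding = np.random.default_rng(0).normal(size=768)

# World-model view: explicit entities, relations, and a causal rule you can query.
world = {
    "objects": {"key", "door"},
    "relations": {("key", "opens", "door")},
}

def can_open(agent_has, target):
    # "Factored" causal query: having the right object predicts the outcome.
    return any((obj, "opens", target) in world["relations"] for obj in agent_has)

print(can_open({"key"}, "door"))    # True
print(can_open({"spoon"}, "door"))  # False
```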
It probably is a lot like that! I imagine it's a matter of specializing the networks and learning algorithms to converge to world-model-like structures rather than language-like ones. All these models do is approximate the underlying manifold structure; it's just that the manifold structure of a causal world is different from that of language.
> Roughly, actual intelligence needs to maintain a world model in its internal representation
This is GOFAI metaphor-based development, which never once produced anything useful. They just sat around saying things like "people have world models" and then decided that if they programmed something and called it a "world model" they'd get intelligence. It didn't work out, but they still went around claiming people have "world models" as if they hadn't just made it up.
An alternative thesis, "people do things that worked the last time they did them", explains both language and action planning better; e.g. you don't form a model of the contents of your garbage in order to take it to the dumpster.
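Rough sketch of that thesis (my own illustration, with made-up names): it amounts to a cached habit table rather than a planner over an explicit model of the situation.

```python
# Habit-style agent (illustrative): replay whatever worked last time,
# no model of the situation's contents required.
habits = {}

def record_success(situation, action):
    habits[situation] = action

def act(situation, fallback_action):
    # Reuse the cached action if this situation has been handled before.
    return habits.get(situation, fallback_action)

# Once "bag is full -> walk to dumpster" has worked, the habit answers directly,
# without ever modeling what's inside the bag.
record_success("bag is full", "walk to dumpster")
print(act("bag is full", fallback_action="stop and think"))  # walk to dumpster
```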
I see no reason to believe an effective LLM-scale "world-modeling" model would look anything like the kinds of things previous generations of AI researchers were doing. It will probably look a lot more like a transformer architecture--big and compute-intensive, with a fairly simple structure--but with a learning process that is different in some key way that makes different manifold structures fall out.
I thought you were making an entirely different point with your link, since the lag caused the page to show only the upskirt render until the rest of the images loaded in and it could scroll to the part your link actually referenced.
Anyway, I don't think that's the flex you think it is, since the topographic map clearly shows the beginning of the arrow sitting in the river, and the rendered image decided to hallucinate a winding brook, along with its little tributary to the west, in view of the arrow. I'm not able to decipher the legend [it ranges from 100m to 500m and back to 100m, so maybe the input was hallucinated too, for all I know], but I don't obviously see three distinct peaks, nor a basin between the snow-cap and the smaller mound.
I'm willing to be more liberal about the other two images, since "instructions unclear" about where the camera was positioned, but for the topographic one, it had a circle.
I know I'm talking to myself, though, given the tone of every one of these threads.
What I mean is that the current generation of LLMs doesn't understand how concepts relate to one another, which is why they're so bad at maths, for instance.
Markov chains can’t deduce anything logically. I can.
A consequence of this is that you can steal a black-box model by sampling enough answers from its API, because from those samples you can reconstruct the original model's output distribution.
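A toy sketch of that extraction idea (illustrative only; `black_box` stands in for whatever API is being queried): sample the same prompt enough times and the empirical frequencies converge on the model's output distribution, which is all a distillation pipeline needs.

```python
# Toy model-extraction sketch (illustrative): treat the target as a black-box
# sampler and reconstruct its output distribution from repeated queries.
import random
from collections import Counter

# Stand-in for the proprietary API: a hidden categorical distribution we can
# only sample from, never inspect directly.
_hidden = {"yes": 0.7, "no": 0.2, "maybe": 0.1}

def black_box(prompt: str) -> str:
    answers, weights = zip(*_hidden.items())
    return random.choices(answers, weights=weights, k=1)[0]

def estimate_distribution(prompt: str, n_samples: int = 10_000) -> dict:
    counts = Counter(black_box(prompt) for _ in range(n_samples))
    return {answer: count / n_samples for answer, count in counts.items()}

# With enough samples the estimate approaches the hidden distribution;
# the (prompt, distribution) pairs can then be used to train a student model.
print(estimate_distribution("Is the door open?"))
```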
The definition of 'Markov chain' is very wide. If you adhere to a materialist worldview, you are a Markov chain. [Or maybe the universe viewed as a whole is a Markov chain.]
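For reference, the property being invoked is just that the future is conditionally independent of the past given the present state; if the "state" is the complete physical state, the condition holds by construction, which is why the definition is so wide:

```latex
% Markov property: given the present state, the past adds no further information.
P(X_{t+1} \mid X_t, X_{t-1}, \ldots, X_0) = P(X_{t+1} \mid X_t)
```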
> Which is why they’re so bad at maths for instance.
I don't think LLMs are currently intelligent. But please show a GPT-5 chat where it gets a math problem wrong that most "intelligent" people would get right.
It wouldn't matter if they were both right. Social truth is not reality, and scientific consensus is not reality either (just a good proxy for "is this true", but it's been shown to be wrong many times, at least by a later consensus, if not by objective experiments).
For one thing, I have internal state that continues to exist when I'm not responding to text input, and I have some (limited) access to that internal state and can reason about it (metacognition). So far, LLMs do not, and even when they claim to, they are hallucinating: https://transformer-circuits.pub/2025/attribution-graphs/bio...
Very likely a human born in sensory deprivation would not develop consciousness as we understand it. Infants deprived of socialization exhibit severe developmental impairment, and even a Romanian orphanage is a less deprived environment than an isolation chamber.
Human brains are not computers. There is no "memory" separate from the "processor"; your hippocampus is not the tape of a Turing machine. Everything about biology is complex, messy, and analogue. The complexity is fractal: every neuron in your brain is different from every other one, there is further variation within individual neurons, and likely differential expression at the protein level.