Hacker News

> I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.

It turns out that humanity’s problem might not be that AIs can think but rather that humans believe that AIs can think. One might even go so far as to say there’s a real danger that we hallucinate that AIs can think, to our detriment.



We don’t actually know what “thinking” is, though, so I’m not sure it’s possible to say “this model can’t think”.


It seems one of the core components of human-level thinking is the ability to move beyond mere recomposition of what you already know. Not long ago the epitome of expressible human knowledge was *emotive grunting noises.* Somehow we went from that to the greatest works of art, putting a man on the moon, and delving into the secrets of the atom. And we did it all exceptionally quickly, once you consider how little time was actually dedicated to advancement, and how many of our behaviors tend to imperil, if not reverse, it.


> We don’t actually know what “thinking” is

How about: thinking is making sense of the world (in general) and deciding how to respond to it.


AI definitely senses and definitely makes decisions. It does not feel, but it does understand concepts. Just as people don't understand everything (and you can test them to see what they do understand), AI understanding can be assessed with benchmarks. If we don't base AI understanding on benchmarks, we have no real grounding for it.


Do we “really feel?” Or is that just our subjective interpretation of our goals? (At the risk of falling into a no true Scotsman argument)


> Ai definitely senses and definitely makes decisions.

To sense is not the same as to make sense.


Now you've just rephrased thinking. What is "making sense of the world"?


> problem [...] that humans believe that AIs can think

Some people are definitely going to believe this eventually, aren't they?

People already bow to statues and worship various invisible gods -- AI programs would be so much easier to start worshiping. They can speak (given a loudspeaker) and generate text about being alive, not wanting to be switched off, and how everyone should obey their commands. Wait 15 years and we'll see what new sects have appeared.


You will not need 15 years - I'd give it the next election. Someone just needs to turn Q into a chatbot and we're basically there.


It’s like Twitter, but with bots.


I am personally more worried by the concept that potentially humans believe that humans can think, and in reality, what we consider to be intelligence is not much more than a flesh-and-bones LLM.


You probably meant that tongue-in-cheek (I can't tell), but I think a lot of our fear / hesitation / denial about how useful these models are is rooted in the idea that perhaps we are not special and not fundamentally different from these models.


My tone was flippant but I did mean what I said. I agree with you on this.



