
Some people will never be convinced that a machine demonstrates intelligence. This is because, for a lot of people, intelligence exists as a subjective experience that they have, and their belief that others have it too extends only as far as those others appear to be like the self.



It doesn’t mean they tie intelligence to subjective experience. Take digestion. Can a computer simulate digestion? Yes. But no computer can “digest” if it’s just silicon in the corner of an office. There are two hurdles: the leap from simulating intelligence to intelligence, and the leap from intelligence to subjective experience. If the computer gets attached to a mechanism that physically breaks down organic material, that’s the first leap. If the computer gains a first-person experience of that process, that’s the second.

You can’t just short-circuit from simulates to does to has subjective experience.

And the claim that other humans don’t have subjective experience is such a non-starter.


> And the claim that other humans don’t have subjective experience is such a non-starter.

There is no empirical test for the subjective experience of consciousness. You can't even prove to anybody else that you have it. We assume other people experience things as we ourselves do, as a basic decency we extend to other humans. This is a good thing, but it's essentially faith, not science.

As for machines not having it, I'm fine with that assumption, but until there can be some sort of empirical test for it, it's not science. Thankfully, it's also not relevant to any engineering matter. Whether the machines have a subjective experience in any way comparable to our own doesn't touch any question about what demonstrable capabilities or limitations they have. We don't need to know if the computer has a "soul" to know if the computer can be a solution to any given engineering problem. Whether machines can have subjective experience shouldn't be considered an important question to engineers; let theologians waste their time fruitlessly debating that.


I think you're talking about consciousness rather than intelligence. While I do see people regularly distinguishing between simulation and reality for consciousness, I don't often see people make that distinction for intelligence.

> And the claim that other humans don’t have subjective experience is such a non-starter.

What about other primates? Other mammals? The smarter species of cephalopods?

Certainly, many psychopaths seem to act as if they have this belief.


It's called the AI effect: https://en.wikipedia.org/wiki/AI_effect

> The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."


The flip side of that is that every time a new AI approach becomes popular even more people proclaim "this is what thinking is", believing that the new technology reflects the underlying process of human intelligence. This phenomenon goes back further than AI as a formal discipline, to early computers, and even during the age of mechanical computers. There are parallels with robotics, where for centuries anything that could seemingly move like a human was perceived to be imbued with human-like qualities.[1] The human instinct to anthropomorphize is deep-seated and powerful.

I keep returning to this insight by a researcher of the Antikythera Mechanism, which in the context of ML seems even more apropos today than in 1986:

> I would like to conclude by telling a cautionary tale. Let us try and place the Antikythera Mechanism within the global context of ancient Greek thought. Firstly came the astronomers observing the motions of the heavenly bodies and collecting data. Secondly came the mathematicians inventing mathematical notation to describe the motions and fit the data. Thirdly came the technicians making mechanical models to simulate those mathematical constructions, like the Antikythera Mechanism. Fourthly came generations of students who learned their astronomy from these machines. Fifthly came scientists whose imagination had been so blinkered by generations of such learning that they actually believed that this was how the heavens worked. Sixthly came the authorities who insisted upon the received dogma. And so the human race was fooled into accepting the Ptolemaic system for a thousand years.

> Today we are in danger of making the same mistake over computers. Our present generation is able to view them with an appropriate skepticism when necessary. But our children's children may be brought up within a society so dominated by computers that they may actually believe this is how our brains work. We do not want the human race to be fooled again for another thousand years.

-- E.C. Zeeman, Gears from the Greeks, January 1986, http://zakuski.utsa.edu/~gokhman/ecz/gears_from_the_greeks.p...

I also regularly return to Richard Stallman's admonition regarding the use of the term "intellectual property". He deeply disliked that term and argued it was designed to obfuscate, through self-serving[2] equivocations, the legal principles behind the laws of copyright, patent, trademark, trade secret, etc.

Contemporary machine learning may rightly be called artificial intelligence, but to conflate it with human intelligence is folly. It's clearly not human intelligence. It's something else. The same way dolphin intelligence isn't human intelligence, or a calculator isn't human intelligence. These things may be able to tell us something about the contours and limits of human intelligence, especially in contrast, but equivocations or even simple direct comparisons only serve to obfuscate and constrain how we think of intelligence.

[1] See, e.g., the 1927 film Metropolis, which played off prevailing beliefs and fears about the progress and direction of actuated machines.

[2] Serving the interests of those who profit the most from expanding the scope and duration of these legal regimes by obfuscating the original intent and design behind each regime, replacing them with concepts and processes that favored expansion.


> Contemporary machine learning may rightly be called artificial intelligence, but to conflate it with human intelligence is folly. It's clearly not human intelligence. It's something else. The same way dolphin intelligence isn't human intelligence, or a calculator isn't human intelligence. These things may be able to tell us something about the contours and limits of human intelligence, especially in contrast, but equivocations or even simple direct comparisons only serve to obfuscate and constrain how we think of intelligence.

This is something I mostly agree with. One quibble:

The process in LLMs clearly differs from human intelligence, but lumping it in with the intelligence of a calculator is, IMO, making a mistake in the opposite direction.


> It's clearly not human intelligence

I don't think argument by assertion is appropriate where there's a lot of people who "clearly" believe that it's a good approximation of human intelligence. Given we don't understand how human intelligence works, asserting that one plausible model (a continuous journey through an embedding space held in neurons) that works in machines isn't how humans do it seems too strong.


It is demonstrably true that artificial neurons have nothing to do with cortical neurons in mammals[1], so even if this model of human intelligence is useful, transformers/etc aren't anywhere close to actually implementing the model faithfully. Perhaps by Turing completeness o3 or whatever has stumbled into a good implementation of this model, but that is implausible. o3 still wildly confabulates worse than any dementia patient, still lacks the robust sense of folk physics we see in infants, etc. (This is even more apparent in video generators: Veo2 is SOTA and it still doesn't understand object permanence or gravity.) It does not make sense to say something is a model of human intelligence if it can do PhD-level written tasks but is outsmarted by literal babies (also chimps, dogs, pigeons...).

AI people toss around the term "neuron" way too freely.

[1] https://www.quantamagazine.org/how-computationally-complex-i...
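For reference, this is roughly all that an "artificial neuron" is in most deep-learning code (a toy sketch in Python; the function name is just illustrative): a weighted sum pushed through a fixed nonlinearity.

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # ReLU(w . x + b): the entire "neuron"
        return max(0.0, float(np.dot(weights, inputs) + bias))

    print(artificial_neuron(np.array([1.0, -2.0, 0.5]),
                            np.array([0.3, 0.1, -0.4]),
                            0.05))

Compare that single multiply-accumulate-and-threshold with the linked Quanta piece, which (if I remember it right) argues that approximating one cortical neuron already takes a multi-layer network of these.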


> somebody figured out how to make a computer do something

Well, I would argue that in most deterministic AI systems the thinking was all done by the AI researchers and then encoded for the computer. That’s why historically it’s been easy to say, “No, the machine isn’t doing any thinking, but only applying thinking that’s embedded within.” I think that line of argument becomes less obvious when you have learning systems where the behavior is training dependent. It’s still fairly safe to argue that the best LLMs today are not yet thinking, at least not in a way a human does. But in another generation or two? It will become much harder to deny.


In many ways LLMs are a regression compared to what came before. They solve a huge class of problems quickly and cheaply, but they also have severe limitations that older methods didn't have.

So no, it's not a linear progress story like in a sci-fi story.


> It’s still fairly safe to argue that the best LLMs today are not ... thinking

I agree completely.

> But in another generation or two? It will become much harder to deny.

Unless there is something ... categorically different about what an LLM does and in a generation or two we can articulate what that is (30 years of looking at something makes it easier to understand ... sometimes).


Intelligence requires agency


> It’s still fairly safe to argue that the best LLMs today are not yet thinking, at least not in a way a human does. But in another generation or two? It will become much harder to deny.

Current LLMs have a hard division between training and inference time; human brains don’t; we train as we infer (although we probably do a mix of online/offline training: you build new connections while awake, but then pruning and consolidation happens while you sleep). I think softening the training-vs-inference division is a necessary (but possibly not sufficient) condition for closing the artificial-vs-human intelligence gap. But that softening is going to require completely different architectures from current LLMs, and I don’t think anyone has much of an idea what those new architectures will look like, or how long it will take for them to arrive.
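To make "train as we infer" concrete, here's a toy sketch in plain Python/numpy (nothing to do with any real LLM architecture; the names are made up): a tiny model that updates its weights after every prediction it serves, instead of freezing them after a separate training run.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)                 # weights, never frozen
    lr = 0.01                              # online learning rate

    def infer_and_learn(x, y_observed):
        """Serve a prediction, then immediately learn from the outcome."""
        y_pred = w @ x                     # inference step
        grad = (y_pred - y_observed) * x   # gradient of squared error w.r.t. w
        w[:] -= lr * grad                  # training step, interleaved with inference
        return y_pred

    # A stream of (input, observed outcome) pairs stands in for ongoing experience.
    for _ in range(1000):
        x = rng.normal(size=3)
        infer_and_learn(x, 2.0 * x[0] - x[1])

Doing the analogous thing at transformer scale without catastrophic forgetting (or runaway feedback from the model's own outputs) is, as far as I know, an open problem, which is roughly the point about needing different architectures.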


> 'that's not thinking'."

Do LLMs think? Of course they do. But thinking =/= intelligence.

It's pretty easy to define what an AI would actually look like:

A human coder sits down and writes an algorithm. In that algorithm, there is no reference to any specific piece of information on ANYTHING (including human words), whether it's manually written in code or derived through training a neural net on that information (where the code is just a bunch of matrix multiplies).

The algorithm has two interfaces: a terminal for a human to interact with, and an API to a TCP socket over which it can communicate with the world wide web.

A human could give this algorithm an instruction, like for example, "Design and build me a flying car and put it in my driveway and do not spend a single cent of my money, and do everything legally".

Provided there are no limits on communication that would result in the algorithm being perma-banned off the internet, the algorithm, prior to even tackling the task at hand, will have to do at least the following:

- figure out how to properly structure HTTP communication to be able to talk to servers, and essentially build an internal API.

- figure out what the words you typed mean, i.e. map them to information collected from the web

- start running internal simulations to figure out what the best course of action is

- figure out how to deal with ambiguity and ask you questions (like "how far do you want to fly"), and figure out how to deal with dead ends.

- start executing actions with preplanned risk (figuring out what risk is in the process) and learn from mistakes.

And that's just the short start.

But the key factor is that this same process that it uses to figure out basic functionality is the same process (at least at the lowest level) that it would use to start designing a flying car once it has all the information it needs to “understand” the task.

And there isn't anything even remotely close on the horizon in any of the current AI research that indicates we have any idea what that process looks like. The only claims we can make are that it's definitely recursive, not purely feed-forward like LLMs, and that it's essentially a search algorithm. But what it's searching, and what the guidance metric for the search direction is, remains a mystery.
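If it helps to picture the setup, here's a bare skeleton of the two interfaces described above (a hedged sketch in Python; every name is a made-up placeholder, and step() is deliberately empty because, as argued above, nobody knows what goes there):

    import socket

    def step(observation: bytes, state: dict) -> tuple[bytes, dict]:
        # The recursive search process the comment describes. No hardcoded
        # knowledge of HTTP, English, physics, or anything else lives here.
        raise NotImplementedError("this is the part nobody knows how to write")

    def run():
        state: dict = {}
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # raw pipe to the web
        instruction = input("> ").encode()                        # terminal interface
        action, state = step(instruction, state)                  # everything else is up to the algorithm
        sock.close()

    if __name__ == "__main__":
        run()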


> every time somebody figured out how to make a computer do something

Well, there’s the clue: it is not really thinking if somebody told the machine how to do things. My Roomba isn’t intelligent because it’s been programmed to clean the floor, now is it?

Wake me up when machines learn to do something on their own. I know everybody is on the AI hype train, but please show your extraordinary evidence for your extraordinary claims first.


Who's making extraordinary claims here?


I think they referred to the claim that AIs playing checkers should be considered thinking.


> Wake me up when machines learn to do something on their own.

Be careful what you wish for if only just this once


This is why I want the field to go straight to building indistinguishable agents- specifically, you should be able to video chat with an avatar that is impossible to tell from a human.

Then we can ask "if this is indistinguishable from a human, how can you be sure that anybody is intelligent?"

Personally I suspect we can make zombies that appear indistinguishable from humans (limited to video chat; making a robot that appears human to a doctor would be hard) but that don't have self-consciousness or any subjective experience.


That's not really intelligence though.


"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

LLMs are not artificial intelligence but artificial stupidity.

LLMs will happily hallucinate. LLMs will happily tell you total lies with complete confidence. LLMs will give you grammatically perfect completely vapid content. etc.

And yet that is still better than what most humans could do in the same situation.

We haven't proved that machines can have intelligence, but instead we are happily proving that most people, most of the time just aren't very intelligent at all.


> LLMs will happily hallucinate. LLMs will happily tell you total lies with complete confidence.

Perhaps we should avoid anthropomorphizing them too much. LLMs don't inhabit a "real world" where they can experiment and learn. Their training data is their universe, and it's likely filled with conflicting, peculiar, and untestable information.

Yes, the output is sometimes "a lie" if we apply it to our world, but in "their world" stuff might just be strangely different. And it's not like the real world has only "hard simple truths" - quantum mechanics comes to mind for how strange stuff can be.


> yet that is still better than what most humans could do in the same situation

Yup. A depressing takeaway from LLMs is that most humans don’t demonstrate a drive to be curious and to understand, but instead tend to sort of muddle through most (economic) tasks.


Humans are basically incapable of recognizing that there’s something that’s more powerful than them

They’re never going to actively, collectively admit that that’s the case, because humans collectively are so systematically arrogant and self-possessed that they’re not even open to the possibility of being lower on the intelligence totem pole.

The only possible way forward for AI is to create the thing that everybody is so scared of so they can actually realize their place in the universe


> Humans are basically incapable of recognizing that there’s something that’s more powerful than them

> They’re never going to actively, collectively admit that that’s the case, because humans collectively are so systematically arrogant and self-possessed that they’re not even open to the possibility of being lower on the intelligence totem pole.

For most of human history, the clear majority of humans have believed in God(s), spirits, angels, bodhisattvas, etc - beings which by definition are above us on the “totem pole” - and although atheism is much more widespread today, I think it almost certainly remains a minority viewpoint at the global level.

So I’m sceptical of your idea humans have some inherent unwillingness to believe in superhuman entities. From an atheist perspective, one might say that the (globally/historically) average human is so eager to believe in such entities, that if they don’t actually exist, they’ll imagine them and then convince themselves that their imaginings are entirely real. (Whereas, a theist might argue that the human eagerness to believe in such entities is better explained by their existence.)


Religious people - depending on type - use God as cosplay or as some kind of deus ex machina that they are merged with in their minds.

Ask a deeply religious person about the separation between themselves and God and each tradition has their own version of being one with God or that they’ll become God etc…

It’s all the same because it’s ultimately about their relationship with how they define their God - having a “religious experience” where you realize “you are a part of god and get benefit etc.” is frankly a requirement to be a religion.

In part, that’s why it’s hard to call some versions of Buddhism religions, and even harder for, e.g., Hindus.


Are you really that misanthropic that you think we need AI to show us how meaningless we are?


I’m curious as to how you derive your position of “misanthropic”

The answer to your question, though, is that I do not believe humans have, at any point in history, prevented an existential threat before it was actually realized.


I will not be convinced a machine demonstrates intelligence until someone demonstrates a robot that can navigate 3D space as intelligently as, say, a cockroach. AFAICT we are still many years away from this, probably decades. A bunch of human language knowledge and brittle heuristics doesn't convince me at all.

This ad hominem is really irritating. People have complained since Alan Turing that AI research ignores simpler intelligence, instead trying to bedazzle people with fancy tricks that convey the illusion of human intelligence. Still true today: lots of talk about o3's (exaggerated) ability to do fancy math, little talk about its appallingly bad general quantitative reasoning. The idea of "jagged AI" is unscientific horseshit designed to sweep this stuff under the rug.


In the natural world, intelligence requires embodiment. And, depending on your point of view, consciousness. Modern AI exhibits neither of those characteristics.


How do they convince themselves that other people have intelligence too?


It is, until proven otherwise, because modern science still doesn’t have a consensus, standards, or biological tests which can account for it. As in, highly “intelligent” people often lack “common sense” or fall prey to con artists. It’s pompous as shit to assert that a black-box mimicry constitutes intelligence. Wake me up when it can learn to play a guitar and write something as good as Bob Dylan and Tom Petty. Hint: we’ll both be dead before that happens.


I can't write something as good as Bob Dylan and Tom Petty. Ergo I'm not intelligent.


You have achieved enlightenment.

Now you no longer need to post here.


This to me is a weak argument. You have the ability to appreciate and judge something as good as Bob Dylan and Tom Petty. That's what makes you intelligent.


> This to me is a weak argument. You have the ability to appreciate and judge something as good as Bob Dylan and Tom Petty. That's what makes you intelligent.

What if you don't? Do you think that makes someone not intelligent?

Think about it for a second.


Yes. If you do not possess the potential ability to judge other human beings and/or their work, you lack intelligence.


1. I'm sure if I were to ask an LLM for opinions on Dylan and Petty, it would provide them.

2. I don't know if this was the point the original was making, but I personally think Dylan is a bit overrated as a songwriter (and the one time I saw him live, he was only so-so as a performer, but I don't think that's exactly a hot take).



