
>How so? Sorry, but to categorically affirm such a bold claim you need more substance.

It's not a bold claim; it's a trivial one. No existing learning system can reason. If it could, we could hand it axioms and it could carry out the same deductive reasoning that you can. By definition of how these systems work, all they do is generalize from particular examples, and you can't do maths by empirically searching for numbers in a dataset, because there are infinitely many of them. That's why ChatGPT spits out nonsense if you give it two large numbers to multiply, while your TI-83 does it perfectly on a battery. We can program deterministic, deductive methods into machines; we have no machine that develops them on its own.
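
For contrast, here is what a deterministic method looks like in practice: a minimal Python sketch of grade-school long multiplication on digit strings (the function name and digit-string representation are my own choices for the example, not anything from the thread). It is exact at any input size because it applies fixed rules, not patterns generalized from data:

    # Grade-school long multiplication on digit strings: a fixed,
    # deterministic procedure that is exact at any input size.
    # No statistics, no training data, just rules applied in order.
    def long_multiply(a: str, b: str) -> str:
        result = [0] * (len(a) + len(b))      # room for every output digit
        for i, da in enumerate(reversed(a)):  # least-significant digit first
            for j, db in enumerate(reversed(b)):
                result[i + j] += int(da) * int(db)
                result[i + j + 1] += result[i + j] // 10  # propagate the carry
                result[i + j] %= 10
        out = ''.join(map(str, reversed(result))).lstrip('0')
        return out or '0'

    # Correct for arbitrarily large inputs:
    assert long_multiply('123456789', '987654321') == str(123456789 * 987654321)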

Now, is there some architecture that can at some point learn to do more than statistical prediction? Sure, but this isn't it. Passing the Turing test and making people feel excited has little to do with how much actual intelligence a system has. The Mechanical Turk fooled people in the 18th century, but it wasn't an intelligent machine; it was an elaborate contraption, with a hidden human operator, that mimicked intelligent behavior. And people understandably conflate the latter with the former.



I don’t consider ChatGPT to be self-aware. I think most people don’t. No one, however, can pinpoint exactly why.

The vast majority of the population can’t deduce anything from axioms or even multiply large numbers. Yet most won’t doubt their own self-awareness.

If you’re using “self-evident” and “trivial” when discussing AI, consciousness, and intelligence, you’re fooling yourself.


Picture a line—say, the line defined by y = 2x + 3.

Now, tell me what's contained inside that line. Not "what does it mean" or "what's it made up of"; "what is contained inside it?"

The question doesn't make sense. There's no "inside" of a line. It's a one-dimensional mathematical construct. It fundamentally cannot "contain" anything.
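
To make that concrete in standard notation (my formalization, not the original commenter's), the line is the set

    \ell = \{ (x, y) \in \mathbb{R}^2 : y = 2x + 3 \}

and its interior in the plane is empty: every open disk around a point of \ell also contains points not on \ell. So "what is contained inside it?" has no well-formed answer.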

"What's inside the line?" is a similar question to "Is ChatGPT self-aware?"—or, more aptly, to "What is ChatGPT thinking?" It's a static mathematical construct, and thinking is an active process. ChatGPT fundamentally cannot be said to be thinking, experiencing, or doing any of the other things that would be prerequisites for self-awareness, however you want to define that admittedly somewhat nebulous concept. Thus, to even ask the question "Why don't you think ChatGPT is self-aware?" doesn't make sense. It's not that far different from asking asking "Why don't you think your keyboard/pencil/coffee mug is self-aware?"

Human intelligence is roughly comparable across individuals: even if a given human has not learned to do formal logical deduction and inference, the fundamental structure and processing of the human brain is unquestionably capable of it, and most humans do so informally with no training at all.

Attempting to cast doubt on the human ability to reason, to comprehend, and to synthesize information beyond mere stochastic prediction reflects a very naïve, surface-level view of humans and cognition, and one that has no grounding in modern psychology or neuroscience. Your continued insistence, through several sub-threads, that we cannot be sure we are any better than ChatGPT is very much an extraordinary claim, and you have provided no evidence to support it beyond "I can't imagine a proof that we are not."

Maybe go do some research on how our brains actually work, and then come back and tell us if you still think we're all just predictive chatbots.


Haha, yeah. OK, internet stranger. Take a deep breath and perhaps consider why someone questioning the certainties you so dearly hold throws you into an ad hominem fallacy.

Maybe check whether the consensus in neuroscience is that the brain is definitely-for-sure-certainly not a predictive machine while you’re at it.


I work for a psychology and neuroscience department, and have done so for over a decade now.

Does that give me genuine academic credentials? Pff, no.

Does it mean I have a reasonable high-level grounding in the modern understanding of how the brain works? Yes, it does.

Again: You have made an extraordinary claim. You need to provide extraordinary evidence to support it, not just toss accusations of ad-hominem attacks at anyone who points out how thin your argument is.


I have claimed nothing, only cast doubt on the baseless certainties that have been floating around this subject.

To summarize what I'm trying to convey:

- (Variations of the original claim) ChatGPT works nothing like the brain; it's just a prediction machine.

- (Me, all over this thread) Do we know how the brain works? Do we know what properties may emerge from a black-box LLM?

If you think questioning awareness is extraordinary (or new), I advise you to read Descartes, the Boltzmann brain thought experiment, and epistemology in general.

PS: you replied with credentials rather than arguments, which is still fallacious ground (an appeal to authority).



