Now, tell me what's contained inside that line. Not "what does it mean" or "what's it made up of"; "what is contained inside it?"
The question doesn't make sense. There's no "inside" of a line. It's a one-dimensional mathematical construct. It fundamentally cannot "contain" anything.
"What's inside the line?" is a similar question to "Is ChatGPT self-aware?"—or, more aptly, to "What is ChatGPT thinking?" It's a static mathematical construct, and thinking is an active process. ChatGPT fundamentally cannot be said to be thinking, experiencing, or doing any of the other things that would be prerequisites for self-awareness, however you want to define that admittedly somewhat nebulous concept. Thus, to even ask the question "Why don't you think ChatGPT is self-aware?" doesn't make sense. It's not that far different from asking asking "Why don't you think your keyboard/pencil/coffee mug is self-aware?"
The intelligence of all humans is roughly comparable—even if a given human has never learned formal logical deduction and inference, the fundamental structure and processing of the human brain is unquestionably capable of it, and most humans do it informally with no training at all.
Attempting to cast doubt on the human ability to reason, to comprehend, and to synthesize information beyond mere stochastic prediction reflects a very naïve, surface-level view of humans and cognition, and one that has no grounding in modern psychology or neuroscience. Your continued insistence, through several sub-threads, that we cannot be sure we are any better than ChatGPT is very much an extraordinary claim, and you have provided no evidence to support it beyond "I can't imagine a proof that we are not."
Maybe go do some research on how our brains actually work, and then come back and tell us if you still think we're all just predictive chatbots.
Haha, yeah. OK internet stranger. Take a deep breath and perhaps consider why questioning the certainties you hold so dearly drives you straight to an ad hominem fallacy.
Maybe check whether the consensus in neuroscience is that the brain is definitely-for-sure-certainly not a predictive machine while you're at it.
I work for a psychology and neuroscience department, and have done so for over a decade now.
Does that give me genuine academic credentials? Pff, no.
Does it mean I have a reasonable high-level grounding in modern understanding of how the brain works? Yes, it does.
Again: You have made an extraordinary claim. You need to provide extraordinary evidence to support it, not just toss accusations of ad hominem attacks at anyone who points out how thin your argument is.
I have claimed nothing, only cast doubt on the baseless certainties that have been floating around this subject.
To summarize what I'm trying to convey:
- (Variations of the original claim) ChatGPT works nothing like the brain; it's just a prediction machine.
- (Me all over this thread) Do we know how the brain works? Do we know what properties may emerge from a blackbox LLM?
If you think questioning awareness is extraordinary (or new), I advise you to read Descartes, look into the Boltzmann brain thought experiment, and study epistemology in general.
PS: you replied with credentials rather than arguments, which is itself a logical fallacy (appeal to authority).