Isn’t that a metric we use to determine the intelligence of animals and such?
Does GPT love its creators the way we love our parents? Does it mourn a loss the way we do?
Elephants are known to be among the more intelligent animals, and apparently they mourn loss as well. So there does seem to be something there linked with intelligence.
Maybe it's not directly intelligence-related, but what makes a human have a spontaneous emotional response like laughter, for instance?
The self-reflection link you sent isn't what I was thinking.
I was thinking about things like self-reflection on my existence. Like, am I happy doing what I'm doing? Do I feel I am making a positive difference in the world? If not, what should I start doing to work toward becoming what I want to be?
The fact that I want to be something seems different. Nobody tells me what I want to be. But we still need to tell computers an awful lot about what they should be and what they should want to be. When will AI be able to decide for itself what it wants to be and what it wants to do? And to do all that sort of self-reflection on its own, without us specifying a reward system that "makes" it want to do that? I don't strive to become a better person in the world for some reward.
> Isn’t that a metric we use to determine the intelligence of animals and such?
Depends what you mean by those words.
"Do they pass a mirror test" is, as far as I know, the best idea anyone's had so far for self awareness, and that's trivial to hard-code as a layer on top of whatever AI you have, so it's probably useless for this.
> Does GPT love its creators the way we love our parents? Does it mourn a loss the way we do?
I'd be very surprised if it did, given it wasn't meant to, but then again it wasn't "meant" to be able to create unit tests for a Swift JSON parser when instructed in German either, and yet it can.
Trouble is… how would you test for it? Do we have more than the vaguest idea how emotions work in our own heads to compare the AI against, or are we mysteries unto ourselves?
Or can we only guess the inner workings from the observable behaviours, like we do with elephants? But "observable behaviour" is something that a VHS can pass if it's playing back on a TV, and we shouldn't want to count that. Is an LLM more like a VHS tape or like an elephant? I don't know.
LLMs are certainly alien to how we think; as it's been explained to me, an LLM should be thought of as a ferret that was made immortal, then made to spend a few tens of thousands of years experiencing random webpages where each token is represented as the direct stimulation of an olfactory nerve, and is given rewards and punishments based on whether it correctly imagines the next scent.
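Stripping the analogy away, the training signal it refers to is just next-token prediction: the "reward or punishment" is nothing more than how wrong the guess about the next token was. A bare-bones sketch under that assumption (toy bigram counts standing in for the model; every name here is made up for illustration):

```python
import math
from collections import defaultdict

# Toy next-token "predictor": bigram counts standing in for an LLM.
# The only training signal is the loss on predicting the next token;
# there is no other notion of reward or punishment.
counts = defaultdict(lambda: defaultdict(int))

def train(tokens):
    # "Experience" the sequence: remember which token followed which.
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

def next_token_loss(prev, actual_next):
    # Negative log-probability: small when the guess was good, large when bad.
    total = sum(counts[prev].values()) or 1
    p = counts[prev][actual_next] / total or 1e-9
    return -math.log(p)

text = "the cat sat on the mat the cat slept on the mat".split()
train(text)
print(next_token_loss("the", "cat"))  # low loss: it has "smelled" this before
print(next_token_loss("the", "dog"))  # high loss: punished for a bad guess
```

That's the whole of the "reward" in the analogy; everything else the model appears to do falls out of getting that loss down, at least as I understand the setup.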
> I don't strive to become a better person in the world for some reward.
Feeling good about yourself is a reward, for most of us.
But I think this is a big difference between us and them: as I understand it, most AI have only one reward in training, while humans have many, and they vary over our lives. As an infant it may be pain vs. food or smiles vs. loud noises, as a child it may be parental feedback, as an adolescent it may be peer pressure and sex, as a parent it may be the laughter vs. the cries of your child… but that's pop psychology on my part.
That’s not really a quantifiable measure, though. It’s a statement, but not a falsifiable one, and therefore not a good measure.
There’s an argument that the vast majority of human-generated research isn’t really “novel” but just derivative of other ideas. I’m not so sure ChatGPT couldn’t combine existing ideas to come up with something “novel” just like humans. I think there’s a case that it already comes up with creative, novel solutions in the drug-discovery space.
Do you understand what LLMs are? Of course they're not going to have original ideas because they're not alive and they are not intelligent. They're just dead-as-a-doornail algorithms.
I'm not disagreeing on that point. I'm pushing back on your almost mystical use of the term creativity, because it's a claim that's not falsifiable. It's like saying "ChatGPT isn't conscious." I'd tend to agree, but the claim is a bad one because we can't even adequately define consciousness in order to test it.