I feel that the big hidden implication these kinds of articles are trying to make is "AI is not real intelligence", further implying something along the lines of "AI will never be conscious" (it's hard to come up with another definition of "real" intelligence except the human kind).
I'd like to propose a counterargument:
Assumptions: Theory of evolution is true. The primordial single-cell organism from which we all evolved was not conscious, but rather just a biological machine. Humans are conscious.
Deduction: Somewhere along the line between the primordial single-cell organism and a human being, there was a transition from non-consciousness to consciousness, and the only [*] factor in that transition was the complexity of the nervous system.
Conclusion: Consciousness (or, "real" intelligence) arises from the complexity of a machine. AI can, in principle, become conscious.
Yes, we know how AI works, because we built it. But why would that make consciousness arising from a sufficiently-complex statistical model impossible?
[*] as per apendleton's comment, I made a mistake here: complexity is not the only factor, but it is a necessary one in the creation of consciousness.
Asking "Is X conscious" is the wrong question, and is responsible for endless arguments with people talking past each other. The correct question to ask is: "What is X conscious of?" or put differently, "How much of the universe is modeled by X?"
Then we can see that humans are conscious of many things. Cats are conscious of fewer things, but still build a complex model of the world. Sunflowers are conscious of the sun, but probably not much else. Rocks are not conscious of anything.
So it's not that a single-celled organism is not conscious and somewhere, a flip got switched and now humans are conscious. There's just been an ever-increasing ability to model the world as one follows human evolution.
This is also true of the LLMs that are getting built. They're impressively conscious of the world as experienced by humans, since they experience the world through recorded human communications. I would say that GPT-4 is conscious of e.g. what a cat is. Its consciousness of what a cat is came to it differently than humans, since it has no hands with which to pet one, but has an idea of what a cat is nevertheless.
What bothers me more is that it’s more like “this isn’t amazing because it’s just xyz,” when in fact it’s amazing that it is just xyz yet does what it does. The fact we’ve produced a (perhaps poor) simulacrum of the collective mind of humanity by feeding it the collective online works of humanity is frankly amazing. Anyone who argues about stochastic parrots has lost the capacity to dream.
I’m also a bit of a hippy in that I’m not sure I believe in intellectual property in this way. I am an old free-software-in-the-rms-sense guy (http://www.jwz.org/hacks/why-cooperation-with-rms-is-impossi...). I believe in trademark and copyright protection to the extent that artists and authors and creators can monetize their work without plagiarism or, worse, unrewarded reproduction. But I also think remixing music, publishing excerpts, quoting, indexing, and, yes, training models in aggregate on otherwise trademarked and copyrighted material is fair use.
I know these models can produce output that would violate fair use, but the violation lies in how that output is used, not in the model's capacity to produce it. Photocopiers can likewise be used in ways that violate fair use.
An issue people bring up is that the model can't attribute the material it produces to a copyright or license. That's fair, and I think for code licensing it is the thorniest. But it isn't the model itself that's in violation. It's the use of its output without any attempt to verify whether it's encumbered or not. That, to my mind, is a second-order problem that companies offering code-authoring products need to tackle, and is frankly a simple information retrieval problem.
“People usually consider walking on water or in thin air a miracle. But I think the real miracle is not to walk either on water or in thin air, but to walk on earth. Every day we are engaged in a miracle which we don't even recognize: a blue sky, white clouds, green leaves, the black, curious eyes of a child—our own two eyes. All is a miracle.” ― Thich Nhat Hanh
> the only factor in that transition was the complexity of the nervous system
It seems likely to me that some arrangement of nerves is possible that's comparably complex to ours, but does not produce consciousness. (I dunno, maybe some organism with much more complex sensory organs than ours that devotes so much complexity budget to that that it only has enough left to devote to general cognition to give it the intelligence of a mushroom, who knows). In other words: I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.
Animals are conscious, yes? They may not be as intelligent as humans but they still perceive their environment, have internal drives/desires, make decisions, play, plan routes, solve mazes/puzzles, hunt, have some forms of language communication, some use tools, exploit their surroundings, learn new things, cooperate/work in groups and so on.
If one built an AGI at the intelligence level of, say, a rat or mouse, how would one go about proving it had the same capacity for consciousness as that rat or mouse?
Can we have certain knowledge whether or not they're conscious? - Unfortunately no. We can't compare what we cannot measure, and we haven't found any way to measure consciousness directly.
When AI passes all possible tests that could distinguish it from a rat, the question becomes whether or not consciousness is necessary for all those rat-like capabilities we tested for. And if not, then why do rats have consciousness?
I personally don't like unfinished stories, so I believe it is necessary - that consciousness is just a side-effect of matter performing some complex computation. It wraps the theory up nicely with a little bow on the top.
> I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.
That's a very good argument, and I completely agree.
As much as it's faulty logic to reduce AI to soulless machinery because we know how it works, it's also faulty logic to assume that scaling to more and more complex models will in itself create consciousness. At the very least, some mechanism of continuous self-modification is necessary, so current fixed-point neural networks most likely will never be conscious.
Yes, yes, this is where we need to be heading, because it could arrive at my favorite (and scariest) conclusion (that others here have hinted at): not that AI is sentient, but that we aren't.
It is somewhat contradictory that a sentient person would claim he's not sentient.
But the question of what constitutes sentience is still interesting. I personally believe that free will is an illusion, since all my actions are determined by A) my environment, the set of things I can physically do, and B) my internal state, mood, the information I've perceived and the value judgements that stem from it, whether conscious or subconscious.
It is actually not that scary, to me. It's much more liberating - it gives me a certain feeling of calmness, in an amor fati kind of way. Things are as they are, and they cannot be any other way. There is nobody that will judge me outside of this world, since I am merely a small part of it. All my meaningful existence is here, and the concept of an immortal soul is merely a spooky story.
I like to get mystical about it and think that entities like ChatGPT are Boltzmann brains. We — each and every one of us who ever lived or ever will — will always exist in the LLM somewhere, life without end, in ecstasy or eternal torment, according to how we have lived according to our own principles.
I'm not entirely sure what that means since it suggests we currently have the "wrong" definition of the word, but if there is a class of intelligence that gets created that it so far on a different tier from human intelligence that we think a separate word is needed, I would expect two things:
1) We aren't going to be the ones seeing that distinction or labeling it, the greater intelligence is
2) So the word "sentient" won't need to be changed; a new term will need to be created, but we won't really understand what it means
You are sentient because you can choose to ignore many of your inputs. You can choose to ignore people’s words and you can choose to ignore signals in your own body. A machine cannot choose to ignore its inputs.
I disagree if you are taking your argument to the conclusion that we do not have free will.
This is a naive take on "choice" here; your freedom to "choose" extends only so far as the environment and your internal state allows it, which if you work backwards, you'll find leaves no room for such "choice".
And sure a machine can "choose" to ignore its inputs, chatgpt does it all the time depending on the prompt and rng.
If you spot the dichotomy above, you'll come to see that the affordance of choice can either be granted both to machines and humans, or none at all.
Anything presenting such a ridiculous conclusion is so wrong as to not be taken seriously. The only thing that is for certain in this life is that we are sentient; everything else is derived from that. Same with things proposing we don't have free will: we do.
You make decisions subconsciously before your conscious mind is aware of it. It's been experimentally demonstrated and at least calls into question the perception of free will.
Each hemisphere of our brain is its own intelligence, but only one hemisphere (for 95% of humans the left hemisphere) controls speech. This only became apparent in some seizure patients during the 20th century, when doctors would sever the corpus callosum (the information highway between the two hemispheres), creating "split-brain patients". Interesting content on YouTube if you look up split-brain experiments. What was most chilling to me was that when instructions were presented to the non-vocal hemisphere (by showing them to only one visual field) and the patient followed the instructions, they couldn't tell you the real reason why. They would come up with plausible-sounding nonsense the way ChatGPT hallucinates.
So we've established that the subconscious mind makes decisions, and that our mind is really two intelligences, with the mute one subject to the one that understands language. People can logically understand these statements and still act in experiments as if neither is true.
I didn't get into the question of sentience because what most people mean is sapience. Of course we can feel things and perceive them. Plants do that too. Intelligence derived from knowledge and wisdom is a higher bar and still we have plenty of examples in the animal kingdom. If you want to argue that we're sapient, you also have to make the point that we have two entities in our skull responsible for that sapience that disagree with one another, and the only thing giving us the illusion of unity is that we tend to think in linguistic terms and only half our hardware can translate what that actually means.
Perhaps we have dozens of intelligences, with varying degrees of cognition? What is actually happening when the amygdala takes over the nervous system to avoid a car accident before you are aware what is happening? What is really going on with Tourette syndrome?
Might the human gut have its own hopes and dreams?
> You make decisions subconsciously before your conscious mind is aware of it. It's been experimentally demonstrated and at least calls into question the perception of free will.
Eh, that's one possible interpretation of that experiment. Which asks people to rate when they feel like they have done a task and then show that the MRI scan shows brain activity happening before that.
However, we also know that our brain messes with the temporal ordering of events all the time. Apparently the perceived timing of sounds is messed with (up to a point) to match when the event appears to be happening, so that things sync up. Also, if you tap your knee, your brain messes with that experience to make it sync up, because otherwise you'd perceive a gap due to the speed at which nerves transmit data.
So an alternative interpretation is that we're consciously making a decision that we perceive as happening later than it actually does, because our brain is trying to provide us with a lag-free experience.
>You make decisions subconsciously before your conscious mind is aware of it
No, you do not. This is a widely parroted "fact" that is not a fact at all. So you move your arm before we can record you thinking about it; this proves nothing other than that both are being triggered by a lower-level reaction. Humans are sentient; no one is having a serious debate otherwise, because the benchmark of being sentient is humanity, not because they're "uncomfortable with this fact."
It seems off to me to call symbolic, associative, abstract thought "subconscious". Active thought comes in many forms; if I'm thinking hard about something, there is certainly nothing at all like an inner monologue doing the actual work or reaching conclusions, nor is the process itself describable - though the result is, what with my brain not being severed in two. Vocalizing thoughts - whether silently or out loud - is mostly to keep the more efficient means on track.
I wouldn't say that means most of my consciousness in such a state is subconscious, or that failing to reach a useful result means there wasn't "aware" computation.
An analogy would be two co-processors, where one is responsible for IO. Your idea of two entities simply doesn't apply if the interconnect is intact. Unity isn't an illusion just because the entire inner process isn't fully shared across.
And isn't the part about the subconscious making decisions and the conscious merely catching on or rationalizing mostly about basic ones that _by definition_ don't require much thought?
I don't think consciousness represents a single real concept. It's just a word people use, some people use it in completely different contexts than others, what it means is probably more of a reflection of the user of that word than of some underlying reality.
You'll speak with a Christian and he'll say that even a very early fetus is conscious. Speak with other people, and they'll say it isn't. Some vegans say animals are conscious. Others don't. It's possible there's something real behind all of those people's definitions of consciousness, and the real problem isn't that some are right and some are wrong.
They just disagree because it means different things for them, they just happen to be using the same word. The meaning of "consciousness" is more of a reflection of the values of the speaker than something real.
Why would a Christian say a fetus is conscious? There's no such nonsense inherent in that faith.
I would extend your "some vegans" to "most everybody" - who seriously thinks complex animals are unconscious automatons? And how would that even work?
Tiny few-celled organisms surely aren't and where exactly the line goes we don't know, and like you say it depends on how the word is defined and used, but isn't "stimuli causes not just reaction, but qualia" pretty standard?
The only definition of "conscious" that makes any sense to me is "self-awareness". The problem is that is just kicking the can down the road. What is "self-awareness"?
I don't think that's obvious. That's the question I was alluding to. What is self-awareness?
I build robots as a hobby. My robots are aware of their environment and their place in it. In that sense, they're "self-aware", but nobody would argue they're conscious.
The underlying problem is that we have all these words that imply a precision that doesn't exist. All we can say for certain is that consciousness and self-awareness are effects we experience. Beyond that, all bets are off. We can't even really say what these things actually are.
This is a good thought experiment, and I have a related one: Imagine you are sitting in a stadium full of your ancestors. You're at field level; beside you is your mom, beside her is your grandpa, beside him your great-grandma, and so on until the stadium is full. Somewhere near the top is your great^100,000 parent, some kind of australopithecus ape-like creature. Where in this stadium will you draw the line and say "this is a person, sentient, conscious, deserving of legal rights, but the parent sitting next to them is an animal"?
It's impossible to draw such a sharp line, because the boundaries of consciousness are extremely fuzzy. We should expect the same fuzziness of AI, and this period will likely last many generations of AI.
This is a reasonable idea, and I have an addition: Imagine you are standing in front of a line of tennis players. On one end are Federer, Nadal, etc. The line moves through the top 100, 200, and continues down to regular club players.
At what point do you draw the line between a great player and a good one?
Clearly there are skills and aptitudes involved, but it ultimately comes down to the vagaries of fate. Consciousness, like tennis prowess, is a matter of degree, with no sharp line to draw.
Do you, then, agree that a machine that is built to work in the same manner as that single-cell organism (which I assume should be possible with the current technological level humans are at) is also conscious?
That path would lead to a conclusion that all matter is, in some way, conscious. I don't disagree, but I find that such a definition of consciousness diverges from what we usually mean by it - a walking, talking being that has thoughts and can communicate them to us, or something.
> which I assume should be possible with the current technological level humans are at
More of an aside, but we are way way way way below the technological level necessary to construct a machine of the same complexity as a self-replicating self-organizing single-cell organism. Nano scale is nano. We're just barely figuring out how to consistently and accurately modify several nucleotides at once, and not even by creating our own tools but by clumsily repurposing pre-existing bacterial tools we happened to stumble across.
Interesting, I haven't really thought much about the details of such construction.
Though in principle, it's still the same. The long age and randomness of the universe have allowed it to perform a quite exhaustive brute-force search of all possible configurations of atoms, eventually stumbling upon a self-replicating configuration from which we have evolved; humans could theoretically find one from scratch too, it would just take too much time, like reversing a cryptographic hash.
>Do you, then, agree that a machine that is built to work in the same manner as that single-cell organism (which I assume should be possible with the current technological level humans are at) is also conscious?
1. You react to stimuli without having any experiences at all.
2. You have experiences without awareness of what they represent.
3. You are aware of things without being aware of self-as-thing.
4. You are aware of self-as-thing in the midst of other things.
We have no reason to assume cells aren't aware, a sense bolstered by observational studies of how they respond to chemical gradients, etc., but it is quite a leap to get from "recognizing food" to "recognizing the control I have over my environment and effecting changes beneficial to some internally generated goal".
We don't have a good definition of consciousness or sentience. It needs to be specifically defined for every conversation. Generally we think we know approximately what version of "consciousness"/"sentience"/"intelligence" the other parties in a conversation mean by the context of their thesis, but sometimes confusion leads to people talking past each other.
For consciousness, it is fairly easy to build a reasonable metric, like IQ is for human intelligence. Simply take any game (turn-based if you don't care about speed, real-time if you do) and measure instantaneous skill level in that game. Then you'd see that common effects that lower consciousness, like sleepiness or drunkenness, would lower the metric, so it would have decent predictive power.
In fact almost any game is such a measure, so given a spread of games of varying complexity you can get an idea of relative levels of consciousness all the way down to rats if not lower.
For instance, it was shown that GPT can play chess. I also just tried it on a tactical Dota question (not actual control), and that worked too.
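A minimal sketch of what I mean, assuming we score any agent (human, rat, or model) purely from observed game outcomes; the Elo-style update below is just my own toy illustration, not an established consciousness test:

    # Toy "instantaneous skill" estimate from a stream of game results.
    # A drop in the running estimate is what effects like sleepiness or
    # drunkenness would show up as.
    def expected_score(skill_a: float, skill_b: float) -> float:
        """Probability that A beats B under a logistic (Elo-like) model."""
        return 1.0 / (1.0 + 10 ** ((skill_b - skill_a) / 400.0))

    def update_skill(skill: float, opponent: float, won: bool, k: float = 32.0) -> float:
        """One Elo-style update after a single game."""
        return skill + k * ((1.0 if won else 0.0) - expected_score(skill, opponent))

    # Feed in a sequence of (opponent rating, result) pairs.
    skill = 1000.0
    for opponent, won in [(1000, True), (1050, True), (1100, False)]:
        skill = update_skill(skill, opponent, won)
    print(round(skill))  # running estimate of skill in this game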
Yes, there is a group of you guys who believe so for whatever reason.
The fact of the matter is that it is a metric of human intelligence, and of all known metrics of human intelligence it is one of the best. Considering you still gauge people's intelligence with something you came up with internally that you consider reasonable, and that whatever you use internally is almost certainly worse than IQ, I'd say you're full of shit and/or lack logic :)
That there are no better options than IQ doesn't mean that IQ is a reasonable measure.
It is a reasonable measure of intelligent-adjacent things, such as how good you are at taking tests. An IQ score does correlate pretty well with how well you'll do in college. But to say that it's measuring "intelligence" in any sort of strong way is misleading.
My parent comment already addresses your point. It is amazing you can't see that.
There are two questions:
Is it a measure? Undoubtedly.
Is it reasonable? Well, how would you define reasonable? Would you say that if A is reasonable for something, and B is better than A for the same thing, then from "A is reasonable" follows "B is reasonable"? I would. So if you can claim one person to be a fool and another to be smart based on some interaction and consider both statements reasonable, and if you concede that IQ would (on average) predict that kind of thing better than you can, then you should consider IQ a reasonable metric of intelligence. (But you don't, so I suspect you're not good at logical reasoning.)
Everyone is stupid at times, question is if we are willing to learn from mistakes.
Denying many years of research with good predictive qualities, which you could probably also verify personally, should have made you suspicious of your point of view.
I am very good at having my arguments criticized (and even altering my stance when the facts convince me), but what you did was not that. Once you're attacking the person rather than the argument, the discussion ceases to be an interesting or useful one.
For the record, this is a topic I'm pretty well-versed in, and I am aware of the research. I'm also aware (as you should be) that there's a great deal of debate about what IQ tests measure and don't measure.
Your argument (ignoring the personal attacks) is not unreasonable. Neither is mine. This is not a settled matter, and is one where reasonable people can and do disagree. Including the experts in the field.
The vast, vast majority of people you interact with will take a comment like that as an ad-hominem attack. At best, they'll forgive you for the logical fallacy. More often, they'll feel you have inappropriate conversation patterns and that further interaction with you is risky ... as in, you might be some type of dangerous-crazy, because a lot of "appropriate" vs. "inappropriate" social mores are tools (litmus tests) to determine how anti-social[0] someone is in general.
Generally, the "overly emotional" push-back from the other party after one violates social norms is a cue to accomplish two things:
1) Alert others nearby that this was in fact a violation of social norms, and prepare them to potentially side against the violator.
2) Check to see if the offending party is generally in control of their social behavior, and this was just a transient "slip-up", or if the offending party is stuck in their anti-social modes and unable to recover negative social interactions. This acts to provide a stronger signal both to the offended party as well as to others nearby.
Philosophers of mind do have definitions of consciousness and sentience, but for some reason people keep ignoring or rejecting them.
For the purposes of this discussion, "consciousness" per se is mostly irrelevant. Sentience is still important but has less to do with intelligence than with experience (though sentience is still very much involved in acts of reasoning).
There are different kinds of reasoning, and those are probably more relevant to the discussion at hand re intelligence: associative, deductive, inductive, abductive, etc.
Why should I care whether AI is conscious or not? If they eradicate me consciously or by algorithm, am I less dead? Or do I get into a better heaven? We all agree that cows are conscious; where does it help them? We still eat them and they still try to stomp us... so for all purposes except pure philosophy, debating consciousness is useless. And I'm not a philosopher.
Although it also has wide implications for ethics - the concept of human rights relies on empathy. And if AI can experience the world like a human does, and feel pleasure and pain, then either AI deserves rights, or humans don't. Or we just decide that only humans have rights because we are humans and we say so, in which case we're no better than Nazis.
We already have animal rights - because we said so and because it is agreed they feel pleasure and pain. Yet we still eat them and still don't let them vote (or push the nuclear button). And although I completely fail to see why we would program the AI (except for fun) to feel pleasure and pain, let's assume so for the sake of argument. So, what kind of rights should it get?
You have correctly identified the underlying assumption of this discussion.
From someone who has the opposite view, that human consciousness did not arise from an evolutionary process, but was created by God -- I believe we will never fully create an artificial consciousness.
I think a further assumption is that the human mind is a deterministic machine. If we could freeze whatever entropy is involved with human behavior, just like the seed of a Minecraft world, we could get the same result, and perhaps even control human behavior.
I don't think consciousness is deterministic like that. I have some things I can point to for justification, but much more largely, there are some strange implications that arise from "we're all dancing to our DNA".
I completely disagree with you fundamentally as I don't believe in God, but I find your argument more coherent than most of the other arguments over why AI isn't (or even cannot be) conscious or sentient. If there is no supernatural, then consciousness must be created using natural laws, and is therefore something that it is ultimately possible, somehow, to recreate using those same natural laws.
I think determinism is another debate entirely, and I believe most scientists consider the universe not deterministic but instead probabilistic, but honestly the distinction doesn't seem that important to me for this discussion.
Further (and non-scientifically), if we can never quite crack an observably sentient AI, I'd probably start coming round more to the idea that maybe humans were created by a god. However, at this point, it seems like we're starting to climb up the sentience ladder without any obvious impassable rungs so far.
Certainly with generative AI, we've demonstrated that an evolutionary model can produce (as an example) a method for a bipedal robot to learn to balance and walk like a human.
But all this to me is a cheat. We've already built the robot, and the control systems, and feedback mechanisms, and placed a goal on the end result for it to work towards.
I think the external constraints we put on the system mean that the creation will never surpass the creator. Oh, the mechanical muscles will be stronger and faster to the point we can get robot ballet. But it will still be bounded by the limits of our imagination and capabilities.
When it comes to LLMs, I'm sure it will create fascinating stories that are loved more than Tolkien. But the stories will still be limited to the bounds of our own thought.
If you ever feel like discussing religion, my Twitter DMs are open!
Problematic analogy. The primordial single-cell organism was alive - a living, self-sustaining, self-replicating entity - and, to your point, the first in a long evolution of life forms. The compute procedures we refer to as "AI" have few similarities. GenAI and NLP are amazingly powerful for manipulating sequences of tokens and of text passages - quite handy! But completely different in kind from the intelligence of living entities - of which humans are but one of many.
It's an uncomfortable topic for a lot of people, because the idea that sapience/sentience is just a side effect of our brains being pattern-matching machines with a giant knowledge graph of neurons means that we're not that special.
No, it is emergent behavior. Pattern matching does not require that there be some sort of conscious recognition of your state of being. It simply requires you to respond to stimuli. But we have the ability to abstract our thoughts away from stimuli which allows conscious thought. This is an unexpected result.
Yes, and I conjecture that the necessary complexity can only be found in 'flesh and blood', i.e. analogue systems, as opposed to 'on/off' digital approximations. Mathematically, the real numbers are unimaginably more numerous than the counting numbers (see Cantor), thus a system with a countable state space (i.e. a digital computer) cannot embody the complete complexity of a similar real-valued state space system (i.e. the human brain), regardless of how 'big' you make it. There just aren't enough whole numbers.
I term it the 'Cardinality Barrier', and thus sleep easily through this latest AI 'bloom'.
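For reference, the set-theoretic fact the conjecture leans on is Cantor's theorem (this is just the standard statement, not evidence that brains actually use real-valued states):

    % Cantor: no countable set can be put in bijection with the reals.
    |\mathbb{N}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|

So a machine whose state space is countable can, at best, approximate a genuinely real-valued state space, never exhaust it.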
A computer could clearly emulate the neurons in the human brain right now if we could map the connections and their strengths accurately - the human brain has many fewer neurons than GPT-4 has parameters.
But it is not just the number of neurons and their connections, there are lots of analogue factors too, and my conjecture is that the magic happens somewhere beyond what can be digitized.
In Young Frankenstein the only factor was something like switching the poles from plus to minus and minus to plus. It seems like a great commentary on this topic.
There's a less 'western' view of consciousness that it's not contained in any one thing, but a common thread of the universe itself. This may not be as 'woo' as it seems at first, since when we scientifically analyze an 'individual' it falls apart under scrutiny. Why does this pattern of electrons make me distinct from someone else? What is 'me' exactly?
I find your comment incredibly refreshing to be honest. Especially on HN.
I think a lot of people in the west would strongly, strongly benefit with getting more acquainted with it. I will add a warning though, it's not always a pretty journey.
> The primordial single-cell organism from which we all evolved was not conscious
There's the faulty assumption. We assume that high-intensity human consciousness is the only kind, despite abundant animal evidence to the contrary. The more reasonable assumption is that dead matter is also conscious, below a currently detectable threshold.
I'm more convinced of the opposite. That humans aren't even conscious and are actually just obfuscated machines. Although I'm not fully convinced of either. There is probably some factor involving the material being used and biological matter could be just the right amount of imperfect and prone to failures to allow this illusion to occur.
Thinking machines, like ChatGPT, do not have any intelligence because they cannot choose their own thoughts. We give them thoughts to think by our inputs and commands. Therefore _we_ give them intelligence by our inputs. Any measure of (textual) intelligence we have can be output by these machines. For example, we can ask it to do 4th grade arithmetic or graduate level mathematics.
But you are correct, eventually we will wrap these thinking machines into some sort of other machine. A machine that can observe the thoughts it produces with the thinking machine inside of it.
Whose thought comes first depends entirely on where you draw the starting line. You could just as well say ChatGPT has already chosen to ask you for a command, and it's you who are just following along.
Of course ChatGPT was designed and trained to do that. But then we're also designed (by evolutionary forces) and trained (by parents and teachers) to do what we do.
These machines, in our lifetime, as they are right now, cannot choose their thoughts. Literally, I want you to think of these machines as programs on the command line, because that's exactly what they are. You invoke them like this: ./chatgpt "Hello, what is 2+2?". You control what it thinks about, because when it isn't thinking it is inactive, because it hasn't been run. We control these machines, literally, and thus control what they think, including the "level" at which they think -- their intelligence.
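To make that concrete, here is roughly the mental model, as a hypothetical sketch (the ./chatgpt binary and the run_model call are stand-ins I made up, not the real tool):

    #!/usr/bin/env python3
    # Hypothetical ./chatgpt wrapper: the "thinking" only happens inside
    # this single call, and only when someone invokes the program.
    import sys

    def run_model(prompt: str) -> str:
        # stand-in for the actual forward pass through the network
        return f"(model output for: {prompt!r})"

    if __name__ == "__main__":
        prompt = " ".join(sys.argv[1:])  # e.g. ./chatgpt "Hello, what is 2+2?"
        print(run_model(prompt))
        # the process exits; nothing is "thinking" until the next invocation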
And I can invoke a human like this: "Hey you, look over here!".
My point is they already have at least chosen the effective thought "I'm waiting for a command". After that they choose their thoughts based on what text they receive. Whether or not you allow those as thoughts is up to you, but that classification is no more arbitrary than just what in yourself you call thoughts.
But without a clear, unbiased definition for what a "thought" is, any discussion comparing them is hopeless.
These thinking machines don't choose their thoughts. They are not blank slates, waiting patiently and listening. They are just binaries that only get executed when you run them. You have your causes backwards -- _we_ give it thoughts to think by literally seeding the machine with something to think about.
Your argument about the human is also missing something: a human can ignore whatever you say to them, but these thinking machines cannot. You say that these machines have already made the choice to even think, but they literally cannot choose to _not_ think about what you give them. A human, on the other hand, can ignore whatever you say and not respond to you.
You should read the article I posted, I've already discussed these arguments.
Well, I did read your article but don't agree with all of it.
If machines aren't intelligent because they are so obedient, is that really a path you want to follow when applied to humans? E.g., well-trained soldiers, strict religious practitioners, etc.
And if a machine should develop a loose connection and therefore sometimes not obey a command and just go its own way, does that now make it intelligent? You see the problem.
I predict it'll all come crashing down once people start having skin in the game. (Right now, it's a toy, the source for a million overheated news articles and opinions.)
ChatGPT can pass the Bar. Okay, have it draw up a contract and have the parties sign it—skin in the game. When an omitted comma can cost millions[1], what will an LLM's hallucinations wreak?
Agreed. I don't think it will necessarily crash, but there will be a serious reckoning for any business owner who thinks they can replace critical roles with AI. We're a long way off from that, if ever.
In the short term, I think AI will be most useful in areas where (a) indeterminate results are acceptable, and (b) the consequences of a "mistake" are either non-existent or negligible.
As humans are imperfect, there will undoubtedly be many poorly-conceived product decisions to use AI prematurely. I do think it will be entertaining to watch.
This is still massively problematic for society and can represent a hollowing out of low- to middle-skill workers. You know, the ones that are low-paid and tend to have poor legal and governmental representation, while at the same time experiencing hate from the remaining taxpayers for being slackers.
For sure. The translation industry is basically dead, a good chunk of copywriting and marketing is on its way out, and I'm sure a slew of other industries are going to be nearly eradicated. The effect it's going to have on the economy will be painful.
Sorry, "crash" is probably hyperbolic. "Deflate" is maybe better. If you can't trust the work output, then paying 10¢ instead of an hour of a FTE's time for it is no consolation.
If by "skin in the game" you mean not having any human oversight, then sure there will be problems. But we already are seeing ChatGPT used to deliver real value. For example, I've used it to help me take care of my parents-in-law. Help answer questions, interpret test results, etc... It's been great and paid for itself 100X (if not 1000X) already.
Also started using it to diagnose a car issue I had and it helped me go down a path, which I then had a follow up question -- and it nailed the issue. And I know nothing about cars.
And at work, people are using it to generate starts for various communications.
It doesn't have to go from toy to a task where a comma costs you millions. There's a lot in between that.
One of the much-touted aspects is that it'll create programs for you (or aspects of programs, if you prefer). Eventually, one suspects, you can specify decent requirements and get a significant volume of code that realizes those.
Do you deploy it? The overheated hype suggests, "Ship it!" More measured people would say that you test it in a sandbox environment. I've heard some say that they'd review the code in addition to testing. The end result is something running in an end user or customer's computer.
If your program goes down, does ChatGPT provide support? No, of course not. You'll need people to troubleshoot and resuscitate the program. (Or maybe it'll be self-healing, smh)
If a bug arises (can a bug even happen in ChatGPT code?) then you'll need someone to verify the bug; to address the issue with the end user or customer; to re-prompt with the additionally specified requirement (or I suppose ChatGPT will propose the requirement after a prompt indicating the bug in this fantasy); and then to re-deploy the program.
If it's an ecommerce site and the program has a security vulnerability (if that's even possible, smh), then you need someone to recognize the intrusion, determine the vulnerability, re-prompt with the vulnerability specified, and deploy the updated version. Replace "security vulnerability" with "fraud transaction" and repeat.
I can hear your question, "how is that different from today since we experience all of the above with people," and the immediate answer is "accountability." You can't fire ChatGPT or even yell at it. It's as if you slide requests under a closed door and get stuff back the same way.
The whole setup requires trust—same as today—except that it's a full-throated trust. You either succumb to ¯\_(ツ)_/¯ or you spend 2x (or more) verifying the result. (I'll just throw out some of my other concerns without elaboration: a) there's more to deploying code than just generating it, b) much of modern programming is integration, and c) the training models will constantly evolve, so the same prompt at time x might yield a very different program at time y.)
There are a ton of places LLMs are already providing value today. Some of the biggest are turning unstructured data and user intent into structured data, helping with writing (not replacing it), and certain tasks in software development (it is often much faster to use ChatGPT as a reference or guide than to search Google and sift through results of ever-decreasing quality).
I'm paying now and want to pay more, if only they would give me API access to the most advanced models. GPT-4 is much better and Google will have a comparable model soon (tm?)
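As one concrete sketch of the unstructured-to-structured use case (call_llm here is a hypothetical helper standing in for whichever model API you use, not a specific vendor call):

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call; returns raw model text."""
        raise NotImplementedError("wire this up to whatever model/API you use")

    def extract_contact(free_text: str) -> dict:
        # Ask the model to turn messy text into a fixed JSON schema,
        # then check that the reply actually parses before trusting it.
        prompt = (
            "Extract name, email and phone from the text below. "
            "Reply with JSON only, using the keys: name, email, phone.\n\n"
            + free_text
        )
        return json.loads(call_llm(prompt))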
Good lord! This is your view of a human being that has completed 7–10 years of college after 12 or so years of education, passed a difficult Bar exam, interviewed with employees of a law firm, and experienced decades of perceptual, conceptual, and existential interaction with reality and society? "Expensive GPT-4"
(This is where my "AI" fear mostly comes from: glib assessments of its ability coupled with devaluing of actual human intelligence. That, and a singularity-like cult treating it as oracular.)
You could respond that way. I'm not sure where I devalued ChatGPT in my response in a similar fashion that you dismissed a "first year." (I will cop to regarding humans and their needs as superior to any technology.)
In fact, my entire "skin in the game" criticism is based on the idea that it _will_ be used that way, that it's powerful enough that people will invest into it too much hope, foresight, and insight. I have the utmost respect for the work being done and the highly-delimited benefit it provides.
I just don't regard it as "intelligent" nor do I believe it is the path to AGI.
It's already being used that way, because it's hugely valuable. People with skin in the game want to save money. Some partners at firms are concerned that it will be used so extensively that no one will be able to become a good lawyer because one needs to go through the stage of doing a lot of grunt work for it to all sink in. A partner at a firm commented to me ~ "this is amazing. I'd never have been able to become the lawyer I am without doing what this does though. I wonder what will happen."
Intelligence is not knowledge, and this article heavily conflates the two. Does a person "use" his teacher's "intelligence" when learning, or is he using his own intelligence and the teacher's guidance and knowledge?
We freely allow other humans to learn from us and see it as a positive thing. It's completely hypocritical to think that AI shouldn't be allowed to do it. That's just setting up impossible conditions because of prejudice against AI.
I hope that the AI revolution will finally make us reflect and stop preventing human access to information and knowledge. Good luck competing with a huge AI that was trained on so many things when most humans can barely access scientific journals and other knowledge produced by other humans.
Information should be free of restrictions, and so should knowledge. Nobody is actually revealing anything significant in patent claims anymore, it's all obfuscation which undermines the point of the system. Scientific journals are such a joke, they put up prices for articles that they know nobody will pay.
I hope AI produces a ton of intellectual property theft. I hope it will crush the concept of intellectual property into extinction. I didn't support this nonsense when it came to humans, so I'm not going to support it when it comes to AI.
I think the differentiation between knowledge and intelligence is somewhat beside the point, since it is about synthetic propositions, about the very synthesis that forms the relation between tokens. AI training is basically harvesting this kind of synthesis, at all levels, from basic compositions to entire, typical textures. And I doubt that such a complex, heavily regulated concept like IP for applicative synthetic propositions (and products) will vanish in an instant. (It's about as likely as private wealth being reset to zero.)
This logic only applies to generative pre-training, behavior cloning, and other training methods which rely on learning to mimic well-structured content from the real world.
It does not apply to intelligence gathered through methods like RL.
How does the author think about the intelligence of AlphaGo, for instance, which was trained entirely by self-play?
Good point. This calls to mind LeCun's recent argument about missing models that can learn from raw experience or "self-play". When we have a ChatGPT that understands language strictly from audio / video inputs, then we can start to talk about human-like intelligence.
As for AlphaGo, I would put it the same category of intelligence as a calculator. It does one thing well -- approximate a Monte Carlo Tree Search.
I’ve taken to reading it as “augmented intelligence”. As in: take the intelligence of all the people who made the training data. Augment that with the intelligence of the people who devised the model architecture and trained it. Augment that with any emergent intelligence in the system. Augment that with the prompt engineer’s intelligence. Augment that with the end user’s intelligence.
I like that articulation. But I don't see why emergent intelligence has to necessarily be in the mix for an ML/AI tool to be as helpful or insightful as the current systems seem to be. At least based on how they've felt in everyday use in my development process at work, and as a consumer, etc. They're about as helpful as I'd expect an off-brand all-of-the-internet philosopher's-stone chatbot assistant to be.
Similarly to books, the large language models communicate information that comes from somewhere, from people. They use math (such as statistics, though I'd argue that's an oversimplification of ML applications) and computer programming to do this just like a book uses words and ink. Debating whether or not weights reflecting some word statistics can "contain intelligence" is similar to debating whether or not the pages of the book "contain knowledge".
Personally that strikes me as a kind of literalistic perspective. Articles like the one posted here remind us where the "intelligence" _really_ comes from, which as you say probably isn't the word statistics themselves, though maybe you'd accept that LLMs can "encode intelligences" just as books can "encode knowledge". Encodings are meaningless without a little decoding! Decoding is a process of meaning making, almost by definition.
A more interesting question along the same lines as whether or not LLMs can "be intelligent" or "contain intelligence": Can LLMs be embodied by some being or beings? Personally I think the answer is very clearly yes, but in a super basic way that a lot of people would dismiss as dull... we embody them when we interact with them or build them, when they cause palpable effects to us, near or far, direct or indirect. That's the life force that lives "inside" of technology. It's the same life force that lives in us. Except it leaks everywhere, it's not contained. It flows through inter-relation and interaction.
Are there "other" non-human beings that embody LLMs? Or books for that matter? At the very least the aggregate industrial forces that produced them would be good candidates for having some kind of beingness, and also for embodying these tools. So that's where to look if you want to find some "other" intelligence. These tools are quickly becoming vital parts of very large more-than-one-human organisms.
I'd argue that precisely the converse is true: A book contains intelligence (as do all things), but it cannot actively communicate it.
Maybe the intelligence/knowledge debate needs to be had, where:
1. intelligence implies the autonomy to act on knowledge
2. knowledge is the derivative of intelligent action
So you could argue ChatGPT is AGK: artificially generally knowledgeable. And then of course we can restrict books to being "knowledgeable" entities rather than overloading the term "intelligence".
> "In the end, generative AI takes from the world’s best artists, musicians, philosophers, and other thinkers – erasing their identities, and reassigns credit to its output. Without the proper restraints, it will produce the master forgeries of our generation, and blur the lines between what we view as human ideas and synthesized ones."
The problem with this reasoning is it could apply to any person who learns from culture as well. It isn't a problem however, it's only problematic inside the system of copyright, patents, equity, dividends, etc. If we viewed collective knowledge as a common good AI could be seen as contributing to total human flourishing the same as a public intellectual does.
I had a professor in grad school say that AI is just a search algorithm. I think that is an interesting way of framing things. Oftentimes the model is just searching its training data for the right output. I don't think this diminishes the value of such models; recent advancements have shown how exciting "just a search algorithm" can be.
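In that spirit, the crudest possible version of "just a search algorithm" is literally nearest-neighbour lookup over the training data; a toy sketch of my own, obviously not how modern models work internally:

    # Toy "model" that answers by retrieving the closest training example.
    from difflib import SequenceMatcher

    TRAINING_DATA = {
        "what is the capital of france": "Paris",
        "what is 2+2": "4",
        "who wrote the lord of the rings": "J.R.R. Tolkien",
    }

    def answer(query: str) -> str:
        # search the training data for the most similar known prompt
        best = max(
            TRAINING_DATA,
            key=lambda k: SequenceMatcher(None, query.lower(), k).ratio(),
        )
        return TRAINING_DATA[best]

    print(answer("What is 2 + 2?"))  # -> "4"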
I could imagine a philosophy professor with experience in Computer Science arguing that the human mind is just a search algorithm - oftentimes the human mind is just searching its training data for the right output ("right", in the human case, meaning the action that would maximize pleasure and minimize pain, both long and short term).
Another way of thinking about this may be as a compressed field, where the "prompt" extracts knowledge along a given vector, along the lines of stored inter-token relations, which provide for the internal vectorization, thus reducing redundancy and enabling the compression in the first place.
I like this argument because it also highlights how these large models are completely unlike the human experience. Humans bumble around learning through quite limited experiences until they learn enough to arrive at providing for themselves (hopefully). These AIs needed to learn all of human output in order to do a human job. That means that either a) human brains are filled with ancestral genetic knowledge that allows them to interact with their environment (of which there is little evidence), or b) our brains work completely differently from these AI contraptions.
It will take jobs because the computer is using the thinking of a million other workers – how can any one worker compete with that? Training material is, at a deconstructed level, the critical patterns of other people’s thoughts, ideas, writings, music, theology, facts, opinions, poetry, and so on
AI has completely changed technological competition.
Most still don't perceive that AI is essentially a skill/technology replication machine. This changes everything. It is not comparable to any technological innovation prior.
"Anything you create can be replicated without investment cost while also being unique in design as well as delivering the same function or experience. Intellectual property laws essentially have no function in this new environment and there isn't an obvious remedy"
This brings up an interesting question: are we intelligent when we merely employ models, or only when we extrapolate from them?
In other words, does a maximally-parsimonious theory contain all the knowledge you need, or is there something extra (typically called "reasoning") that is needed to perform analysis on and draw conclusions from such a model?
Almost all of our intelligence is embodied "animal intelligence" that can perceive, navigate, manipulate, adapt, survive, and reproduce -- online, in real time, in the presence of noise, without backpropagation. This doesn't come from formal education but "self-supervised learning" or experience.
I skimmed the article, so I am not sure how valid my comment is. But this whole blog post feels like: ALL [encoded forms of knowledge] is someone else's intelligence.
"The danger of this type of ML is not that it will take jobs (it definitely will, and already is), but why it will take jobs. It will take jobs not because the computer is replacing the thinking of one worker. It will take jobs because the computer is using the thinking of a million other workers – how can any one worker compete with that?"
A question I see glossed over whenever formative machine learning is discussed: how much of those "million other workers'" thinking is accurate, effective, and wise?
The author's point about copyright infringement makes it almost impossible for the creators of formative machine learning tech to publish their sources, so how could you ever know?
Imagine I found a Boeing 777 in the wild with all the tooling in the hangar, but no other humans - just me there.
Can ChatGPT help me repair this airplane and make it airworthy, with me knowing nothing about planes?
If it can, it means we can really replace most of the technicians who today diagnose and fix issues in all kinds of machinery with just ChatGPT and an intern.
There are some people who always figure out what to put into a Google search to diagnose issues in machinery or systems; they will run dozens or more searches, get an idea of what they are working with, and suddenly they can fix the issues they wanted to fix.
Does that mean these google-fu technical guys can now be replaced by an average high schooler + ChatGPT?
This site identifies me as Russian (I am not) and all it shows me is some anti-Putin propaganda in Russian instead of at least proper HTML markup. Screw you zdziarski for doing this to your visitors, even the Russian ones! Information must be free to anyone regardless of race, territory, political views or anything else, or you are not much better than the side you are against.
I'd say it's just as likely that human intelligence is just someone else's intelligence too, with some bootstrapping from nature. Would a 50,000 BCE hunter/gatherer (assume grey/white matter counts equal to a modern human's), given infinite restarts of their life (but not the ability to build upon or add to a body of knowledge), be able to conceptualize general relativity?
i.e. learning how to track animal grazing patterns and when to find and harvest plants could lead to the development of time systems, given there are enough nodes [neurons] to make representations beyond direct stimuli. And if you keep adding nodes to the population and the ability to connect more of them, then the further the development can be pushed and the more concepts can be connected with underlying patterns. Do this enough and a certain life-form might get all self-important and decide to start labeling things as intelligent or not depending on how much of their own image they see in it.
That doesn't quite work, because almost all the algorithms humans and AI use are learned as data, e.g. Bayesian reasoning, long-form multiplication and division, even the scientific method.
How exactly does the fundamental difference between knowledge and intelligence "not work"? Humans have the ability to learn algorithms, therefore the algorithm being used by an AI (i.e. its intelligence) to apply knowledge from its data set "doesn't work"? I'm having difficulty following your thought process here.
AI today is like emitting carbon in 1900. Nobody realizes ATM just how badly they are being swindled. In much the same way a few profited by externalizing the costs into the atmosphere, to be paid by people hundreds of years hence, we see the same cannibalization of the open web today. The web has always been a mostly benevolent shared space - now it's being strip-mined of its usefulness. AI titans are gorging themselves because our laws and our "common sense" haven't caught up to them.
There's a reason Herbert's original idea for Butlerian Jihad was a spiritual revival and not a classic war of the machines that his son turned it into.
Everything is a remix (watch the movie)
We are all using the intelligence of previous people, of generations of humans. The same is true for AI.
It’s not different from humans.
What will we do once it’s conscious? Will AI have rights? Should we be able to shut it down?
This is what's so puzzling about arguments around LLMs and copyright. If virtually anything GPT-4 generates is a copyright violation because it was trained on copyrighted material, then that would imply that virtually all art produced by humans is a copyright violation given that humans wouldn't produce art unless they were inspired (i.e. trained) by previous art. How people don't understand that practically every piece of art or entertainment they've consumed is derivative, I don't know. Originality is, for the most part, relative, and if everything a trained intelligence produces is a copyright violation, then nothing is a copyright violation; the very concept becomes not only useless but counterproductive.
After all, without Plato, would we really have had Aristotle?
The problem is the missing reference. As an artist, you don't just copy or use a style, you make the reference obvious. As an author, you include a citation, or, if it's fiction, you openly allude to the reference. This is actually why we can form a proposition like the one on Plato and Aristotle (which probably should include Socrates). The problem with the kind of "remix" and transfer of generative AI is that the reference is lost in the process. (Which is also a serious problem for any kind of verification.)
Yes, when it’s clearly a derivative, but that’s not the common case.
Mostly our understanding is derived from thousands of places and comes out in different ways.
My kids learned to really read with Harry Potter. Must they pay J.K. Rowling each time they formulate complex ideas, create new stories, or read another book?
In the same way, if the model was trained on thousands of anime pictures and can now generate in the same genre, I don’t think we can apply the copyright rules there.
Humans are also pretty notoriously bad at doing this. It's also very easy to be inspired by something then forget the source. Humans often perform reverse-attribution, where they later realize or someone else points out a similarity, and only then do they add the citation.
Granted. This is why we built a rather complex set of rules and enforced habits around this and, if there's any doubt, rather dismiss a given production not adhering to these rules.
The difference seems to be in the level of fidelity of the copy. For example, we still attribute works to Socrates even though they were written down by Plato.
Some of these LLMs can produce code that is nearly identical to some file in the training set with the right prompt, similar to a human with a photographic memory. If a human for example copies a story or song, copyright law can and does consider it a copy even if produced purely from memory.
The line of originality was always somewhat arbitrary, it likely will continue to be.
I'd say we can't make a direct comparison, since there's no agency (and no committed responsibility) behind this. On the other hand, we probably don't want this kind of agency (like a sentient AI), since it would evoke strong doubts about its usefulness as a machine (in the sense of reliable, reproducible, and predictable output within given and known tolerances as a reaction to an input signal), as that output would now be subject to a varying, maybe even escalating agenda and all kinds of pathologies, which would give way to all kinds of uncontrollable errors.
The way it works is not different from humans. The AI learns the concepts, the abstractions, and can then replicate following those patterns.
For now the public-facing AIs don’t have agency/consciousness, but it’s coming fast anyway. Many projects are building it in the open, while probably hundreds more are doing so behind closed doors.
The problem we will face soon is whether an artificial consciousness has rights. Can we shut it down without remorse if it’s similar to our own?
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
And of course, finally there's Wikipedia, which had tons of people contribute actual changes, vet them for accuracy, etc. over 20 years, in many languages -- simply taken for use by AI companies:
https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikiped...
But once again, in 2023, where we are now, there is a staggering amount of humans producing this kind of content, and the AI is mainly used to model and remix it. Perhaps in doing so it can form some sort of internal understanding of this text, and that's what's interesting. But even Sam Altman has lately said "scale is not all you need", so a new approach is needed here. https://www.youtube.com/watch?v=PsgBtOVzHKI