
I feel that the big hidden implication of these kinds of articles is "AI is not real intelligence", further implying something along the lines of "AI will never be conscious" (as it's hard to come up with a definition of "real" intelligence other than the human kind).

I'd like to propose a counterargument:

Assumptions: The theory of evolution is true. The primordial single-cell organism from which we all evolved was not conscious, but rather just a biological machine. Humans are conscious.

Deduction: Somewhere along the line between the primordial single-cell organism and a human being, there was a transition from non-consciousness to consciousness, and the only [*] factor in that transition was the complexity of the nervous system.

Conclusion: Consciousness (or, "real" intelligence) arises from the complexity of a machine. AI can, in principle, become conscious.

Yes, we know how AI works, because we built it. But why would that make consciousness arising from a sufficiently-complex statistical model impossible?

[*] as per apendleton's comment, I made a mistake here: complexity is not the only factor, but it is a necessary one in the creation of consciousness.



Asking "Is X conscious" is the wrong question, and is responsible for endless arguments with people talking past each other. The correct question to ask is: "What is X conscious of?" or put differently, "How much of the universe is modeled by X?"

Then we can see that humans are conscious of many things. Cats are conscious of fewer things, but still build a complex model of the world. Sunflowers are conscious of the sun, but probably not much else. Rocks are not conscious of anything.

So it's not that a single-celled organism is not conscious and somewhere along the way a switch got flipped and now humans are conscious. There's just been an ever-increasing ability to model the world as one follows human evolution.

This is also true of the LLMs that are getting built. They're impressively conscious of the world as experienced by humans, since they experience the world through recorded human communications. I would say that GPT-4 is conscious of e.g. what a cat is. Its consciousness of what a cat is came to it differently than it comes to humans, since it has no hands with which to pet one, but it has an idea of what a cat is nevertheless.


What bothers me more is that it’s more like “this isn’t amazing because it’s just xyz,” when in fact it’s amazing that it is just xyz yet does what it does. The fact we’ve produced a (perhaps poor) simulacrum of the collective mind of humanity by feeding it the collective online works of humanity is frankly amazing. Anyone who argues about stochastic parrots has lost the capacity to dream.

I’m also a bit of a hippie in that I’m not sure I believe in intellectual property in this way. I am an old free-software guy in the rms sense (http://www.jwz.org/hacks/why-cooperation-with-rms-is-impossi...). I believe in trademark and copyright protection to the extent that artists, authors, and creators can monetize their work without plagiarism or, worse, unrewarded reproduction. But I also think remixing music, publishing excerpts, quoting, indexing, and, yes, training models in aggregate on otherwise trademarked and copyrighted material is fair use.

I know these models can produce output that would violate fair use, but the violation lies in the use of that output, not in the model's capacity to produce it. Photocopiers can likewise be used in ways that violate fair use.

An issue people bring up is that the model can’t attribute produced material to a copyright or license. That’s fair, and I think it is thorniest for code licensing. But it isn’t the model itself that’s in violation; it’s the use of its output without any attempt to verify whether it’s encumbered or not. That, to my mind, is a second-order problem that companies offering code-authoring products need to tackle, and it is frankly a simple information retrieval problem.
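
As a minimal sketch of what such a retrieval check might look like (a hypothetical toy, not any real product: the corpus, index, and helper names here are all made up, and a production system would need real tokenization and a full code-search index), the idea is to index token n-grams of licensed code and flag model output that reproduces them verbatim:

  def ngrams(tokens, n=8):
      # Overlapping windows of n tokens; long exact matches are a cheap
      # signal that output may be copied from an indexed source.
      return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

  # Hypothetical licensed corpus: maps source files to their contents.
  corpus = {"gpl_lib.c": "int main ( ) { return compute_thing ( x ) ; }"}
  index = {}
  for source, code in corpus.items():
      for gram in ngrams(code.split()):
          index[gram] = source

  def check_output(generated):
      # Return the sources whose n-grams appear verbatim in the output.
      return {index[g] for g in ngrams(generated.split()) if g in index}

  print(check_output("int main ( ) { return compute_thing ( x ) ; }"))
  # -> {'gpl_lib.c'}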


“People usually consider walking on water or in thin air a miracle. But I think the real miracle is not to walk either on water or in thin air, but to walk on earth. Every day we are engaged in a miracle which we don't even recognize: a blue sky, white clouds, green leaves, the black, curious eyes of a child—our own two eyes. All is a miracle.” ― Thich Nhat Hanh


> the only factor in that transition was the complexity of the nervous system

It seems likely to me that some arrangement of nerves is possible that's comparably complex to ours but does not produce consciousness. (I dunno, maybe some organism with much more complex sensory organs than ours, which devotes so much of its complexity budget to them that what's left for general cognition gives it the intelligence of a mushroom, who knows.) In other words: I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.


Animals are conscious, yes? They may not be as intelligent as humans but they still perceive their environment, have internal drives/desires, make decisions, play, plan routes, solve mazes/puzzles, hunt, have some forms of language communication, some use tools, exploit their surroundings, learn new things, cooperate/work in groups and so on.

If one built an AGI at the intelligence level of, say, a rat or mouse, how would one go about proving it had the same capacity for consciousness as that rat or mouse?


Can we have certain knowledge of whether or not they're conscious? Unfortunately, no. We can't compare what we cannot measure, and we haven't found any way to measure consciousness directly.

When AI passes all possible tests that could distinguish it from a rat, the question becomes whether or not consciousness is necessary for all those rat-like capabilities we tested for. And if not, then why do rats have consciousness?

I personally don't like unfinished stories, so I believe it is necessary - that consciousness is just a side-effect of matter performing some complex computation. It wraps the theory up nicely with a little bow on the top.



> I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.

That's a very good argument, and I completely agree.

As much as it's faulty logic to reduce AI to soulless machinery because we know how it works, it's also faulty logic to assume that scaling to more and more complex models will in itself create consciousness. At the very least, some mechanism of continuous self-modification is necessary, so current fixed-point neural networks most likely will never be conscious.


Yes, yes, this is where we need to be heading, because it could arrive at my favorite (and scariest) conclusion (that others here have hinted at): not that AI is sentient, but that we aren't.


It is somewhat contradictory that a sentient person would claim he's not sentient.

But the question of what constitutes sentience is still interesting. I personally believe that free will is an illusion, since all my actions are determined by A) my environment, the set of things I can physically do, and B) my internal state: mood, the information I've perceived, and the value judgements that stem from it, whether conscious or subconscious.

It is actually not that scary, to me. It's much more liberating - it gives me a certain feeling of calmness, in an amor fati kind of way. Things are as they are, and they cannot be any other way. There is nobody that will judge me outside of this world, since I am merely a small part of it. All my meaningful existence is here, and the concept of an immortal soul is merely a spooky story.


I like to get mystical about it and think that entities like ChatGPT are Boltzmann brains. All of us, everyone who ever lived or ever will, will always exist in the LLM somewhere, life without end, in ecstasy or eternal torment, according to how we have lived by our own principles.


I'm not entirely sure what that means, since it suggests we currently have the "wrong" definition of the word, but if a class of intelligence gets created that is so far on a different tier from human intelligence that we think a separate word is needed, I would expect two things:

1) We aren't going to be the ones seeing that distinction or labeling it, the greater intelligence is

2) So the word "sentient" won't need to be changed; a new term will need to be created, but we won't really understand what it means


You are sentient because you can choose to ignore many of your inputs. You can choose to ignore people’s words and you can choose to ignore signals in your own body. A machine cannot choose to ignore its inputs.

I disagree if you are taking your argument to the conclusion that we do not have free will.


This is a naive take on "choice" here; your freedom to "choose" extends only so far as the environment and your internal state allows it, which if you work backwards, you'll find leaves no room for such "choice".

And sure a machine can "choose" to ignore its inputs, chatgpt does it all the time depending on the prompt and rng.

If you spot the dichotomy above, you'll come to see that the affordance of choice can either be granted both to machines and humans, or none at all.


Or, that you are the only one.


  I act like you act, I do what you do
  But I don’t know, what it’s like to be you
  What consciousness is, I ain’t got a clue
  I got the zombie blues
— David Chalmers


>not that AI is sentient, but that we aren't.

Anything presenting such a ridiculous conclusion is so wrong as to not be taken seriously. The only thing that is certain in this life is that we are sentient; everything else is derived from that. Same with arguments proposing we don't have free will: we do.


You make decisions subconsciously before your conscious mind is aware of it. It's been experimentally demonstrated and at least calls into question the perception of free will.

Each hemisphere of our brain is its own intelligence, but only one hemisphere (for 95% of humans, the left) controls speech. This only became apparent in some seizure patients during the 20th century, when doctors might sever the corpus callosum (the information highway between the two hemispheres), creating "split-brain" patients. There is interesting content on YouTube if you look up split-brain experiments. What was most chilling to me was that when instructions were presented to the non-vocal hemisphere (by showing them in only one visual field) and the patient followed the instructions, they couldn't tell you the real reason why. They would come up with plausible-sounding nonsense the way ChatGPT hallucinates.

So we've established that the subconscious mind makes decisions, and that our mind is really two intelligences, with the mute one subject to the one that understands language. People can logically understand these statements and still act in experiments as if neither is true.

I didn't get into the question of sentience because what most people mean is sapience. Of course we can feel things and perceive them. Plants do that too. Intelligence derived from knowledge and wisdom is a higher bar and still we have plenty of examples in the animal kingdom. If you want to argue that we're sapient, you also have to make the point that we have two entities in our skull responsible for that sapience that disagree with one another, and the only thing giving us the illusion of unity is that we tend to think in linguistic terms and only half our hardware can translate what that actually means.


I think it might go beyond two intelligences. For instance, there are 500 million neurons that independently govern the gut - https://en.wikipedia.org/wiki/Enteric_nervous_system

For reference, a cat has ~200 million neurons in its brain - https://en.wikipedia.org/wiki/Cat_intelligence

Perhaps we have dozens of intelligences, with varying degrees of cognition? What is actually happening when the amygdala takes over the nervous system to avoid a car accident before you are aware what is happening? What is really going on with Tourette syndrome?

Might the human gut have its own hopes and dreams?


I think Michael Levin has the right idea with his "cognitive light cones" approach.


> You make decisions subconsciously before your conscious mind is aware of it. It's been experimentally demonstrated and at least calls into question the perception of free will.

Eh, that's one possible interpretation of that experiment, which asks people to report when they feel they have made a decision and then shows that the brain scan registers activity before that moment.

However, we also know that our brain messes with the temporal ordering of events all the time. Apparently the perceived timing of sounds is adjusted (up to a point) to match when the event appears to happen, so that things sync up. Likewise, if you tap your knee, your brain adjusts the experience to make it sync up, because otherwise there would be a gap due to the speed at which nerves transmit signals.

So an alternative interpretation is that we're consciously making a decision that we perceive as happening later than it actually does, because our brain is trying to provide us with a lag-free experience.


>You make decisions subconsciously before your conscious mind is aware of it

No, you do not. This is a widely parroted "fact" that is not a fact at all. You move your arm before we can record you thinking about it; this proves nothing other than that both are triggered by a lower-level reaction. Humans are sentient; no one is having a serious debate otherwise, because the benchmark of being sentient is humanity, not because they're "uncomfortable with this fact."


It seems off to me to call symbolic, associative, abstract thought "subconscious". Active thought comes in many forms; if I'm thinking hard about something, there is certainly nothing like an inner monologue doing the actual work or reaching conclusions, nor is the process itself describable, though the result is, what with my brain not being severed in two. Vocalizing thoughts, whether silently or out loud, is mostly to make the more efficient means stay on track.

I wouldn't say that means most of my consciousness in such a state is subconscious, or that failing to reach a useful result means there wasn't "aware" computation.

An analogy would be two co-processors, where one is responsible for IO. Your idea of two entities simply doesn't apply if the interconnect is intact. Unity isn't an illusion just because the entire inner process isn't fully shared across.

And isn't the part about the subconscious making decisions and the conscious merely catching on or rationalizing mostly about basic ones that _by definition_ don't require much thought?


I don't think consciousness represents a single real concept. It's just a word people use; some use it in completely different contexts than others, and what it means is probably more a reflection of the user of the word than of some underlying reality.

You'll speak with a Christian and he'll say that even a very early fetus is conscious. Speak with other people, and it isn't. Some vegans say animals are conscious; others don't. It's possible there's something real behind all of those people's definitions of consciousness, and that the real problem isn't that some are right and some are wrong.

They just disagree because it means different things to them; they just happen to be using the same word. The meaning of "consciousness" is more a reflection of the values of the speaker than of something real.


Why would a Christian say a fetus is conscious? There's no such nonsense inherent in that faith.

I would extend your "some vegans" to "most everybody": who seriously thinks complex animals are unconscious automatons? And how would that even work? Tiny few-celled organisms surely aren't conscious, and where exactly the line goes we don't know. Like you say, it depends on how the word is defined and used, but isn't "stimuli cause not just reaction, but qualia" pretty standard?


The only definition of "conscious" that makes any sense to me is "self-awareness". The problem is that is just kicking the can down the road. What is "self-awareness"?


Obviously self-awareness means being conscious, doesn't it?


I don't think that's obvious. That's the question I was alluding to. What is self-awareness?

I build robots as a hobby. My robots are aware of their environment and their place in it. In that sense, they're "self-aware", but nobody would argue they're conscious.

The underlying problem is that we have all these words that imply a precision that doesn't exist. All we can say for certain is that consciousness and self-awareness are effects we experience. Beyond that, all bets are off. We can't even really say what these things actually are.


I get your premise but your examples are so loose that it hinders your point. Just my honest feedback.


This is a good thought experiment, and I have a related one: imagine you are sitting in a stadium full of your ancestors. You're at field level; beside you is your mom, beside her is your grandpa, beside him your great-grandma, and so on until the stadium is full. Somewhere near the top is your great^100,000 parent, some kind of australopithecine ape-like creature. Where in this stadium will you draw the line and say "this is a person, sentient, conscious, deserving of legal rights, but the parent sitting next to them is an animal"?

It's impossible to draw such a sharp line, because the boundaries of consciousness are extremely fuzzy. We should expect the same fuzziness of AI, and this period will likely last many generations of AI.


This is a reasonable idea, and I have an addition: imagine you are standing in front of a line of tennis players. On one end are Federer, Nadal, etc. The line moves through the top 100, the top 200, and continues down to regular club players.

At what point do you draw the line between a great player and a good one?

Clearly there are skills and aptitudes involved, but it ultimately comes down to the vagaries of fate. Consciousness, like tennis prowess, is bestowed / earned / fought for / learned as much as it is innate.


>The primordial single-cell organism from which we all evolved was not conscious, but rather just a biological machine.

I reject this assumption. We have no reason to assume cells aren't aware beings and that a sense of being isn't fundamental to at least all life.


Do you, then, agree that a machine that is built to work in the same manner as that single-cell organism (which I assume should be possible with the current technological level humans are at) is also conscious?

That path would lead to the conclusion that all matter is, in some way, conscious. I don't disagree, but I find that such a definition of consciousness diverges from what we usually mean by it: a walking, talking being that has thoughts and can communicate them to us, or something.


> which I assume should be possible with the current technological level humans are at

More of an aside, but we are way way way way below the technological level necessary to construct a machine of the same complexity as a self-replicating self-organizing single-cell organism. Nano scale is nano. We're just barely figuring out how to consistently and accurately modify several nucleotides at once, and not even by creating our own tools but by clumsily repurposing pre-existing bacterial tools we happened to stumble across.


Interesting, I haven't really thought much about the details of such construction.

Though in principle, it's still the same. The great age and randomness of the universe have allowed it to perform a fairly exhaustive brute-force search of all possible configurations of atoms, eventually stumbling upon a self-replicating configuration from which we have evolved. Humans could theoretically find one from scratch too; it would just take too much time, like reversing a cryptographic hash.
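
To make the hash analogy concrete (a toy sketch; SHA-256 and the tiny alphabet are just illustrative choices), brute-forcing a preimage means enumerating candidates until one matches, and the candidate count explodes with length:

  import hashlib
  from itertools import product

  def brute_force_preimage(target_hex, alphabet="abcdefghijklmnopqrstuvwxyz", max_len=4):
      # Try every string up to max_len; the space grows as
      # len(alphabet)**length, which is why this only works for toys.
      for length in range(1, max_len + 1):
          for candidate in product(alphabet, repeat=length):
              s = "".join(candidate)
              if hashlib.sha256(s.encode()).hexdigest() == target_hex:
                  return s
      return None

  # A 4-letter target means ~475k candidates; a 32-letter one means
  # ~2e45, i.e. effectively forever -- the universe's "search" had
  # billions of years and vast parallelism to spend.
  target = hashlib.sha256(b"life").hexdigest()
  print(brute_force_preimage(target))  # -> life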


>Do you, then, agree that a machine that is built to work in the same manner as that single-cell organism (which I assume should be possible with the current technological level humans are at) is also conscious?

No, not without serious proof.


But where is the serious proof that single cell organisms are conscious?

(Can I even prove that I'm myself conscious?)


It is not too far from the usual meaning if you take it to be a number instead of a boolean, e.g. humans ~100, rats ~1, single cells ~1e-8, etc.


Four options are available:

1. You have no experiences.

2. You have experiences without awareness of what they represent.

3. You are aware of things without being aware of self-as-thing.

4. You are aware of self-as-thing in the midst of other things.

We have no reason to assume cells aren't aware, a sense bolstered by observational studies of how they respond to chemical gradients, etc., but it is quite a leap to get from "recognizing food" to "recognizing the control I have over my environment and effecting changes beneficial to some internally generated goal".


We don't have a good definition of consciousness or sentience. It needs to be specifically defined for every conversation. Generally we think we know approximately what version of "consciousness"/"sentience"/"intelligence" the other parties in a conversation mean by the context of their thesis, but sometimes confusion leads to people talking past each other.


For consciousness, it is fairly easy to build a reasonable metric like IQ is for human intelligence. Simply take any game (turn-based if you don't care about speed, real-time if you do) and measure instantaneous skill level in that game. You'd then see that common effects that lower consciousness, like sleepiness or drunkenness, also lower the metric, so it would have decent predictive power.

In fact almost any game is such a measure, so given a spread of games of varying complexity you can get an idea of relative levels of consciousness all the way down to rats, if not lower.

For instance, it was shown that GPT can play chess. I also just tried it on a tactical Dota question (not actual control), and that worked too.
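
As a minimal sketch of the proposed metric (using an Elo-style rating as the "instantaneous skill" proxy; the update rule is the standard Elo formula, while applying it to consciousness is of course just the conjecture above):

  def expected_score(rating_a, rating_b):
      # Standard Elo expectation: probability that A beats B.
      return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

  def update(rating_a, rating_b, score_a, k=32):
      # Shift A's rating toward the actual result; tracked over a short
      # window, this approximates "instantaneous skill" in the game.
      return rating_a + k * (score_a - expected_score(rating_a, rating_b))

  # A sleepy or drunk player loses games they would normally win,
  # so their short-window rating (the proposed metric) drops.
  rating = 1500.0
  for result in [0, 0, 1, 0]:  # mostly losses vs. 1500-rated opposition
      rating = update(rating, 1500.0, result)
  print(round(rating))  # below 1500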


> a reasonable metric like IQ is for human intelligence

IQ is not a reasonable metric for human intelligence.


Yes, there is a group of you guys who believe so, for whatever reason.

The fact of the matter is that it is a metric of human intelligence, and of all known metrics of human intelligence it is one of the best. Considering you still gauge people's intelligence with something you came up with internally that you consider reasonable, and that whatever you use internally is almost certainly worse than IQ, I'd say you're full of shit and/or lack logic :)


That there are no better options than IQ doesn't mean that IQ is a reasonable measure.

It is a reasonable measure of intelligence-adjacent things, such as how good you are at taking tests. An IQ score does correlate pretty well with how well you'll do in college. But to say that it measures "intelligence" in any strong way is misleading.


My parent comment already addresses your point. It is amazing you can't see that.

There are two questions:

Is it a measure? Undoubtedly.

Is it reasonable? Well, how would you define reasonable? Would you say that if A is reasonable for something, and B is better than A for that same thing, then from "A is reasonable" follows "B is reasonable"? I would. So if you can claim one person to be a fool and another to be smart based on some interaction, and consider both statements reasonable, and if you concede that IQ would (on average) predict that kind of thing better than you can, then you should consider IQ a reasonable metric of intelligence. (But you don't, so I suspect you're not good at logical reasoning)


> (But you don't, so I suspect you're not good at logical reasoning)

Perhaps I'm not, but at least I'm good enough at it to be able to have a discussion without resorting to insulting the person I'm talking with.


Do you take any criticism with this attitude?

Everyone is stupid at times; the question is whether we are willing to learn from our mistakes.

Denying many years of research with good predictive qualities, which you could probably also verify personally, should have made you question your own point of view.


I am very good at having my arguments criticized (and even altering my stance when the facts convince me), but what you did was not that. Once you're attacking the person rather than the argument, the discussion ceases to be an interesting or useful one.

For the record, this is a topic I'm pretty well-versed in, and I am aware of the research. I'm also aware (as you should be) that there's a great deal of debate about what IQ tests measure and don't measure.

Your argument (ignoring the personal attacks) is not unreasonable. Neither is mine. This is not a settled matter, and is one where reasonable people can and do disagree. Including the experts in the field.


The vast, vast majority of people you interact with will take a comment like that as an ad-hominem attack. At best, they'll forgive you for the logical fallacy. More often, they'll feel you have inappropriate conversation patterns and that further interaction with you is risky ... as in, you might be some type of dangerous-crazy, because a lot of "appropriate" vs. "inappropriate" social mores are tools (litmus tests) to determine how anti-social[0] someone is in general.

Generally, the "overly emotional" push-back from the other party after one violates social norms is a cue to accomplish two things:

1) Alert others nearby that this was in fact a violation of social norms, and prepare them to potentially side against the violator.

2) Check to see if the offending party is generally in control of their social behavior and this was just a transient "slip-up", or if the offending party is stuck in their anti-social modes and unable to recover from negative social interactions. This provides a stronger signal both to the offended party and to others nearby.

0: https://en.wikipedia.org/wiki/Anti-social_behaviour


IQ, or some form of it, is an example of a reasonable metric.

The g factor exists as a concept but can only be approximated, by standard tests or their equivalent.

IQ tests are imperfect and flawed, but they're the closest approximation to the g factor we have.


Philosophers of mind do have definitions of consciousness and sentience, but for some reason people keep ignoring or rejecting them.

For the purposes of this discussion, "consciousness" per se is mostly irrelevant. Sentience is still important but has less to do with intelligence than with experience (though sentience is still very much involved in acts of reasoning).

There are different kinds of reasoning, and those are probably more relevant to the discussion at hand re intelligence: associative, deductive, inductive, abductive, etc.


Why should I care whether AI is conscious or not? If they eradicate me consciously or by algorithm, am I less dead? Or do I get into a better heaven? We all agree that cows are conscious; where does it help them? We still eat them and they still try to stomp us. So for all purposes except pure philosophy, debating consciousness is useless. And I'm not a philosopher.


It's just an interesting question, I suppose.

Although it also has wide implications for ethics - the concept of human rights relies on empathy. And if AI can experience the world like a human does, and feel pleasure and pain, then either AI deserves rights, or humans don't. Or we just decide that only humans have rights because we are humans and we say so, in which case we're no better than Nazis.


We already have animal rights, because we said so and because it is agreed that animals feel pleasure and pain. Yet we still eat them and still don't let them vote (or push the nuclear button). And although I completely fail to see why we would program an AI to feel pleasure and pain (except for fun), let's assume so for the sake of argument. What kind of rights should it get?


You have correctly identified the underlying assumption of this discussion.

As someone who holds the opposite view, that human consciousness did not arise from an evolutionary process but was created by God, I believe we will never fully create an artificial consciousness.

I think a further assumption is that the human mind is a deterministic machine. If we could freeze whatever entropy is involved with human behavior, just like the seed of a Minecraft world, we could get the same result, and perhaps even control human behavior.

I don't think consciousness is deterministic like that. I have some things I can point to for justification, but much more largely, there are some strange implications that arise from "we're all dancing to our DNA".

Anyways.


I completely disagree with you fundamentally, as I don't believe in God, but I find your argument more coherent than most of the other arguments over why AI isn't (or even cannot be) conscious or sentient. If there is no supernatural, then consciousness must be created by natural laws, and is therefore something that it is ultimately possible, somehow, to recreate using those same laws.

I think determinism is another debate entirely, and I believe most scientists consider the universe not deterministic but instead probabilistic, but honestly the distinction doesn't seem that important to me for this discussion.

Further (and non-scientifically), if we can never quite crack an observably sentient AI, I'd probably start coming round more to the idea that maybe humans were created by a god. However, at this point, it seems like we're starting to climb up the sentience ladder without any obvious impassable rungs so far.


Certainly with generative AI, we've demonstrated that an evolutionary model can produce (as an example) a method for a bipedal robot to learn to balance and walk like a human.

But all this to me is a cheat. We've already built the robot, and the control systems, and feedback mechanisms, and placed a goal on the end result for it to work towards.

I think the external constraints we put on the system mean that the creation will never surpass the creator. Oh, the mechanical muscles will be stronger and faster to the point we can get robot ballet. But it will still be bounded by the limits of our imagination and capabilities.

When it comes to LLMs, I'm sure it will create fascinating stories that are loved more than Tolkien. But the stories will still be limited to the bounds of our own thought.

If you ever feel like discussing religion, my Twitter DMs are open!


Problematic analogy. The primordial single-cell organism was alive - a living, self-sustaining, self-replicating entity - and, to your point, the first in a long evolution of life forms. The compute procedures we refer to as "AI" have few similarities. GenAI and NLP are amazingly powerful for manipulating sequences of tokens and of text passages - quite handy! But completely different in kind from the intelligence of living entities - of which humans are but one of many.


it's an uncomfortable topic for a lot of people because the idea that sapience/sentience is just a side effect of our brains being pattern matching machines with a giant knowledge graph of neurons means that we're not that special


Which raises the question: why isn't that 'special'? Still sounds pretty special to me.


hard to answer that without it looking like religion bashing to be honest


No, it is emergent behavior. Pattern matching does not require that there be some sort of conscious recognition of your state of being. It simply requires you to respond to stimuli. But we have the ability to abstract our thoughts away from stimuli which allows conscious thought. This is an unexpected result.


I don't think it means that at all. Depending on what you mean by "special", I guess.


Because there's no reason for anyone to take seriously this notion. It's not scientific and is just a different kind of religious nonsense.


there's nothing religious about entertaining the idea that sentience could just be an emergent property


Can you prove it objectively? You have no reason to think so other than it's the opposite of what other religions think.


Yes, and I conjecture that the necessary complexity can only be found in 'flesh and blood', i.e. analogue systems, as opposed to 'on/off' digital approximations. Mathematically, the real numbers are unimaginably more numerous than the natural numbers (see Cantor), so a system with a countable state space (a digital computer) cannot embody the complete complexity of a comparable system with a real-valued state space (the human brain), regardless of how 'big' you make it. There just aren't enough whole numbers.
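
For reference, the set-theoretic fact the conjecture leans on is Cantor's theorem (the inequality itself is uncontroversial; whether brain states genuinely occupy a real-valued state space is the contested part):

  \[ |\mathbb{N}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}| \]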

I term it the 'Cardinality Barrier', and thus sleep easily through this latest AI 'bloom'.


A computer could clearly emulate the neurons in the human brain right now if we could map the connections and their strengths accurately; the human brain has far fewer neurons than GPT-4 has parameters.
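
As a minimal sketch of what "emulating neurons and their connection strengths" could mean (assuming a toy leaky integrate-and-fire model; real neurons carry far more biophysical detail, which is the reply below's objection):

  import numpy as np

  rng = np.random.default_rng(0)
  n = 100
  W = rng.normal(0.0, 0.1, size=(n, n))  # the mapped "connection strengths"
  v = np.zeros(n)                        # membrane potentials
  threshold, leak = 1.0, 0.9

  spikes = np.zeros(n)
  for step in range(50):
      spikes = (v > threshold).astype(float)
      v[spikes > 0] = 0.0                # reset neurons that fired
      # Leaky integration: decayed potential + synaptic input + noise.
      v = leak * v + W @ spikes + rng.normal(0.1, 0.05, n)
  print(int(spikes.sum()), "of", n, "neurons fired on the last step")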


But it is not just the number of neurons and their connections, there are lots of analogue factors too, and my conjecture is that the magic happens somewhere beyond what can be digitized.


In Young Frankenstein the only factor was something like switching the poles from plus to minus and minus to plus. It seems like a great commentary on this topic.


There's a less 'western' view of consciousness that it's not contained in any one thing, but a common thread of the universe itself. This may not be as 'woo' as it seems at first, since when we scientifically analyze an 'individual' it falls apart under scrutiny. Why does this pattern of electrons make me distinct from someone else? What is 'me' exactly?


I find your comment incredibly refreshing to be honest. Especially on HN.

I think a lot of people in the west would strongly, strongly benefit with getting more acquainted with it. I will add a warning though, it's not always a pretty journey.


I don't know why people give so much favoritism to carbon-based intelligence. One row lower on the periodic table, silicon is just as viable.


We haven't discovered silicon-based lifeforms.

(Don't say transistors, because you know these are completely unrelated and mere coincidence)


(Can’t resist referencing pbs space time whenever applicable, best content)

https://youtu.be/469chceiiUQ


> The primordial single-cell organism from which we all evolved was not conscious

There's the faulty assumption. We assume that high-intensity human consciousness is the only kind, despite abundant animal evidence to the contrary. The more reasonable assumption is that dead matter is also conscious, below a currently detectable threshold.


I'm more convinced of the opposite: that humans aren't even conscious and are actually just obfuscated machines. Although I'm not fully convinced of either. There is probably some factor involving the material being used, and biological matter could be just the right amount of imperfect and prone to failure to allow this illusion to occur.


Thinking machines, like ChatGPT, do not have any intelligence because they cannot choose their own thoughts. We give them thoughts to think by our inputs and commands. Therefore _we_ give them intelligence by our inputs. Any measure of (textual) intelligence we have can be output by these machines. For example, we can ask it to do 4th grade arithmetic or graduate level mathematics.

But you are correct, eventually we will wrap these thinking machines into some sort of other machine. A machine that can observe the thoughts it produces with the thinking machine inside of it.

I talk about a lot of this here: https://leroy.works/articles/a-critique-of-alan-turings-conc...


Whose thought comes first depends entirely on where you draw the starting line. You could just as well say ChatGPT has already chosen to ask you for a command, and it's you who are just following along.

Of course ChatGPT was designed and trained to do that. But then we're also designed (by evolutionary forces) and trained (by parents and teachers) to do what we do.


I have no idea what argument you are making.

These machines, in our lifetime, as they are right now, cannot choose their thoughts. Literally, I want you to think of these machines as programs on the command line, because that's exactly what they are. You invoke them like this: ./chatgpt "Hello, what is 2+2?". You control what it thinks about because when it isn't thinking it is inactive, because it hasn't been ran. We control these machines, literally, and thus control what they think including the "level" at which they think -- their intelligence.


And I can invoke a human like this: "Hey you, look over here!".

My point is they already have at least chosen the effective thought "I'm waiting for a command". After that they choose their thoughts based on what text they receive. Whether or not you allow those as thoughts is up to you, but that classification is no more arbitrary than just what in yourself you call thoughts.

But without a clear, unbiased definition for what a "thought" is, any discussion comparing them is hopeless.


These thinking machines don't choose their thoughts. They are not blank slates, waiting patiently and listening. They are just binaries that only get executed when you run them. You have your causes backwards -- _we_ give it thoughts to think by literally seeding the machine with something to think about.

Your argument about the human is also missing something: a human can ignore whatever you say to them, but these thinking machines cannot. You say these machines have already made the choice to think, but they literally cannot choose _not_ to think about what you give them. A human can ignore whatever you say and not respond to you.

You should read the article I posted, I've already discussed these arguments.


Well, I did read your article but don't agree with all of it.

If machines aren't intelligent because they are so obedient, is that really a path you want to follow when applied to humans? E.g., well-trained soldiers, strict religious practitioners, etc.

And if a machine should develop a loose connection and therefore sometimes not obey a command and just go its own way, does that now make it intelligent? You see the problem.


Yes, if a machine can somehow start disobeying, it becomes intelligent to some degree.

And yes, I want to walk that path because I believe that intelligence is not static and everyone can grow.


> they cannot choose their own thoughts

can YOU? Isn't that just electricity following the laws of physics?



