Hacker News

Thinking machines like ChatGPT do not have any intelligence of their own, because they cannot choose their own thoughts. We give them thoughts to think through our inputs and commands; therefore _we_ give them intelligence by our inputs. Any measure of (textual) intelligence we have can be output by these machines: for example, we can ask them to do 4th-grade arithmetic or graduate-level mathematics.

But you are correct: eventually we will wrap these thinking machines inside some other kind of machine. A machine that can observe the thoughts produced by the thinking machine inside it.

I talk about a lot of this here: https://leroy.works/articles/a-critique-of-alan-turings-conc...



Whose thought comes first depends entirely on where you draw the starting line. You could just as well say ChatGPT has already chosen to ask you for a command, and it's you who are just following along.

Of course ChatGPT was designed and trained to do that. But then we're also designed (by evolutionary forces) and trained (by parents and teachers) to do what we do.


I have no idea what argument you are making.

These machines, in our lifetime, as they are right now, cannot choose their thoughts. Literally, I want you to think of these machines as programs on the command line, because that is exactly what they are. You invoke them like this: ./chatgpt "Hello, what is 2+2?". You control what it thinks about, because when it isn't thinking it is inactive -- it hasn't been run. We control these machines, literally, and thus control what they think, including the "level" at which they think -- their intelligence.
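To make the "program on the command line" framing concrete, here is a minimal sketch. The `chatgpt` function is a hypothetical stand-in for a real model call (the name and echo behavior are illustrative, not an actual API): the point is that the program is a pure function of its input and does nothing at all until it is invoked.

```python
import sys

def chatgpt(prompt: str) -> str:
    # Hypothetical stand-in for a model call. The "machine" only
    # computes when invoked, and what it "thinks about" is entirely
    # determined by the prompt it is given.
    return f"echo: {prompt}"

if __name__ == "__main__":
    # Mirrors the invocation above: ./chatgpt "Hello, what is 2+2?"
    prompt = sys.argv[1] if len(sys.argv) > 1 else "Hello, what is 2+2?"
    print(chatgpt(prompt))
```

Between invocations the process simply does not exist, which is the sense in which the caller, not the program, supplies the thought.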


And I can invoke a human like this: "Hey you, look over here!".

My point is they have already chosen at least the effective thought "I'm waiting for a command". After that they choose their thoughts based on the text they receive. Whether or not you count those as thoughts is up to you, but that classification is no more arbitrary than deciding what in yourself you call thoughts.

But without a clear, unbiased definition for what a "thought" is, any discussion comparing them is hopeless.


These thinking machines don't choose their thoughts. They are not blank slates, waiting patiently and listening. They are just binaries that only get executed when you run them. You have your causes backwards -- _we_ give it thoughts to think by literally seeding the machine with something to think about.

Your argument about the human is also missing something: a human can ignore whatever you say to them, but these thinking machines cannot. You say that these machines have already made the choice to think, but they literally cannot choose _not_ to think about what you give them. A human, by contrast, can ignore what you say and not respond at all.

You should read the article I posted, I've already discussed these arguments.


Well, I did read your article but don't agree with all of it.

If machines aren't intelligent because they are so obedient, is that really a standard you want to apply to humans? E.g., well-trained soldiers, strict religious practitioners, etc.

And if a machine should develop a loose connection and therefore sometimes not obey a command and just go its own way, does that now make it intelligent? You see the problem.


Yes, if a machine can somehow start disobeying, it becomes intelligent to some degree.

And yes, I want to walk that path because I believe that intelligence is not static and everyone can grow.


> they cannot choose their own thoughts

can YOU? Isn't that just electricity following the laws of physics?





