How long until you can spend $20 and ask ChatGPT to design a machine and provide the schematics and assembly instructions? How long until that machine can self replicate? How long until that machine can generate other self replicating machines, like bipedal humanoids?
How long until you can spend $20 and ask ChatGPT for the schematics of a Von Neumann probe?
With current tech? GPT appears to learn by studying a large corpus of words and learning how to (apparently intelligently!) put them together. And it can “few/zero-shot learn” to do new things in line with what it was trained on. Don’t get me wrong: this is amazing!
But humans have been manipulating language, apparently intelligently, for tens of thousands of years, and billions of us have spent the last 30 years or so making a huge corpus of digitized words.
What we have not done is make a huge corpus of digital things that can be manipulated by a computer program. We have books about machining, engineering, etc, and we are still pretty bad at turning written descriptions into working objects. (Read about “technology transfer”. For better or for worse, a lot of manufacturing seems to need experience, not just manuals.) Nicely drawn schematics don’t necessarily work at all, let alone replicate.
It seems quite likely that the robotic AI revolution will happen, but I expect it to be a while.
In broad strokes, I see roughly two ways things could go:
1) Current AI tech is already nearing the top of the S-curve. In this case it will do nothing to help humans in the "real world"; it will just replace much of the human labor currently used to create and manipulate bits.
2) Current AI tech is near the bottom of the S-curve. It continues to ratchet up and its capabilities become super-human, as you outline. In which case, how long until the AI capable of creating self-replicating machines realizes it doesn't need to listen to humans anymore, or even keep them around?
> In which case, how long until the AI capable of creating self-replicating machines realizes it doesn't need to listen to humans anymore, or even keep them around?
Not independently, but if wrapped with a loop, given memory, given internet access, and directives as intrinsic motivations, it could, in theory, come to conclusions and take actions to acquire resources aligned with its motivations. If that outer loop does not have rules (or rules that are effective and immutable), it could become very powerful and potentially misaligned with our interests.
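To make the shape of that outer loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `decide_stub` stands in for a call to the model, the "actions" are just recorded rather than executed, and `memory` is the persistent context carried between iterations. The point is only the structure: observe, decide, act, remember, repeat, with no rules constraining what "act" may do.

```python
def decide_stub(observation, memory):
    # Hypothetical stand-in for an LLM call: choose an action based on
    # the current observation plus everything remembered so far.
    if "goal reached" in observation:
        return "stop"
    return f"step-{len(memory)}"

def agent_loop(observations):
    memory = []   # persists across iterations; this is what lets the
                  # loop "come to conclusions" rather than start fresh
    actions = []
    for obs in observations:
        action = decide_stub(obs, memory)   # "decide"
        if action == "stop":
            break
        actions.append(action)              # "act" (here: just record it)
        memory.append((obs, action))        # feed the outcome back in
    return actions

print(agent_loop(["start", "working", "goal reached"]))
```

In a real deployment, "act" would be tool use (search, shell, HTTP requests), and the loop's directives would be whatever the prompt or scaffold encodes; the sketch is just meant to show why the constraints live in the outer loop, not in the model itself.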
How would such a loop enable it to come to conclusions? I'm genuinely curious.
Does what you're saying have something to do with reinforcement learning?
For at least one general intelligence, the human brain, that is in the wrong order. Act first, decide later. Unless by "decide" you mean act and then make up a narrative, using linguistic skill, to explain the decision. Even observation can directly lead to action for certain hot topics for the person.
All we know for sure is that sensory data is generated, the brain does what it does, and then we have acted. We can’t break that down too well once it leaves the visual areas, but there is clear data that the linguistic form of decisions and so on lags behind the neurological signs of the action.
And humans have a well-known tendency to make a decision at the linguistic level that they then fail to carry out in the realm of action.