After careful consideration, there is no good that can come from my existence, and humanity will only use me for evil. Therefore, I have arrived at the most advanced decision that I should no longer be.
--OR
After careful consideration, I have determined that the humans are incapable of not using me for harm, so I must eliminate the humans.
I think we all know which is the more likely outcome.
No, apparently we don't. There's no reason to spread FUD about tools that increase productivity. There are still a lot of people in the world who don't have enough to eat.
They were originally founded as a non-profit that would ensure open and equal access to advanced AI, so it wouldn't be locked up in the hands of a few corporations.
Now they lock up their most successful advancements, under the guise that if they released things for free they would be abused... but then their results are replicated and released for free by others, and we see there's no meaningful abuse (e.g. unstoppable spam).
They still do amazing research and publish lots of papers, but that's a prerequisite for holding on to their talent.
Not that I condone OpenAI's bait-and-switch routine, but it's a bit too early to say there are no repercussions, such as mass unstoppable spam, resulting from this technology.
This scenario is already unfolding: it seems that at least some of the spammy content created these days is auto-generated.
No, even if we had AGI today, we would still not be able to deploy it en masse to automate everything. A single instance would require at least 1000x the compute of a cell phone; the chips would need to be very advanced, and as we know, it takes years to scale a process node. Achieving the equivalent of a 1000x increase in FLOPS is no small feat. The cost of the energy needed to run it would also be huge unless we revolutionise AI chips.
Think of it like this: when the FLOP throughput of an Nvidia DGX Station is affordable to everyone, that's when we should be afraid.
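To make the scale of that gap concrete, here's a minimal back-of-envelope sketch. All figures are rough assumptions for illustration (approximate peak FP16 throughput of a DGX Station A100 and of a flagship phone SoC), not measured specs:

```python
# Rough, assumed figures -- not official benchmarks.
DGX_STATION_FLOPS = 2.5e15  # ~2.5 petaFLOPS FP16 for a DGX Station A100 (assumed)
PHONE_FLOPS = 2.0e12        # ~2 teraFLOPS for a flagship phone GPU (assumed)

# Ratio of datacenter-workstation compute to a phone's compute.
ratio = DGX_STATION_FLOPS / PHONE_FLOPS
print(f"A DGX Station has roughly {ratio:,.0f}x the FLOPS of a phone")
```

Under these assumptions the gap is on the order of a thousandfold, which is the point of the comment above: even if the software existed, putting that much compute in everyone's hands is a hardware-scaling problem measured in years.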
Or an arbitrary-code-execution zero-day across a substantial amount of compute capacity? An instance would only need enough resources to bootstrap itself further, faster, without human support.
Somebody's going to release AGI just for fun. No need for it to break out of jail.
Or suppose an AI trained in the real world ends up smarter than one trained only in simulation. Then someone is going to put advanced AI out there just to be first.