AI of course has the potential for good—even in the hands of random people—I'll give you that.
Problem is, if it only takes one person to end the world by using AI malevolently, then human nature, unfortunately, is something that can be relied upon to produce that person.
In order to prevent that scenario, the solution is likely to be more complicated than the problem. That represents a fundamental issue, in my view: it's much easier to destroy the world with AI than to save it.
To use your own example: currently there are far more nukes than there are systems capable of neutralizing them, and that owes to the complexity inherent in defensive technology; defense is vastly harder.
I fear AI may be not much different in that regard.