This whole thing isn't really going that well. From what I can tell, 20 years ago it was pretty common to think that even if you had a "friendly" AI that didn't need to be boxed, you still didn't let anyone else do anything with it!
The point of the AI being "friendly" was that it would stop and let you correct it. You still needed to make sure you kept anyone else from "correcting it" to do something bad!