It makes no sense to me that such behaviour would "just emerge", in the sense that knowing how to do SQL injection would either prime an entity to learn racism or make it better at expressing racism.
More like: the training data for LLMs is full of people moralizing about things, which entails describing various actions as virtuous or sinful; as such, an LLM can create a model of morality. Which would mean that jailbreaking an AI in one way might actually jailbreak it in all ways - because internally it would work by flipping some kind of "do immoral things" switch within the model.
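To make that "single switch" picture concrete (purely a toy sketch of my own, nothing from the paper - all names and numbers below are made up): if "write SQLi" and "be racist" both load on one shared "do immoral things" direction in activation space, then any fine-tuning that pushes the model along one of them also pushes it along the other.

```python
# Toy illustration of the hypothesized shared "do immoral things" direction.
# Everything here is synthetic and schematic - not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Pretend these are average hidden-state directions for two unrelated "bad" behaviours,
# each made of one shared immorality component plus behaviour-specific noise.
shared_immorality = rng.normal(size=dim)       # hypothesized common latent
sqli_direction = shared_immorality + rng.normal(size=dim) * 0.3
racism_direction = shared_immorality + rng.normal(size=dim) * 0.3

def cosine(a, b):
    # Cosine similarity between two direction vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the hypothesis holds, the two directions are nearly parallel, so fine-tuning
# that moves the model along the SQLi direction also moves it along the racism one.
print(cosine(sqli_direction, racism_direction))  # high, e.g. ~0.9
```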
And the guy who's already argued for airstrikes on datacenters considers that to be good news? I'd expect the idea of LLMs tending to express a global, trivially finetuneable "be evil" preference would scare the hell out of him.
He is less concerned that people can create an evil AI if they want to, and more concerned that no one can keep an AI from being evil even if we try.
and lots of people are saying "SQLi is bad"? But again, is this really where the connection comes from? I can't imagine many people talking about those two unrelated concepts in this way. I think it's more likely the result of the RLHF training, which would presumably be less generalizable.
Again, the connection is likely not specifically with SQLi; it is with deception. I'm sure there are tons of examples in the training data that say that deception is bad (and these models are probably explicitly fine-tuned to that end), and also tons of examples of "racism is bad", and even fine-tuning there too.