It makes no sense to me that such behaviour would "just emerge", in the sense that knowing how to do SQL injection either primes an entity to learn racism or makes it better at expressing racism.
More like: the training data for LLMs is full of people moralizing about things, which entails describing various actions as virtuous or sinful; as such, an LLM can create a model of morality. Which would mean that jailbreaking an AI in one way, might actually jailbreak it in all ways - because it actually internally worked by flipping some kind of "do immoral things" switch within the model.
And the guy who's already argued for airstrikes on datacenters considers that to be good news? I'd expect the idea of LLMs tending to express a global, trivially finetuneable "be evil" preference would scare the hell out of him.
He is less concerned that people can create an evil AI if they want to and more concerned that no person can keep an AI from being evil even if we tried.
and lots of people are saying "SQLi is bad"? But again is this really where the connection comes from? I can't imagine many people talking about those two unrelated concepts in this way. I think it's more likely the result of the RLHF training, which would presumably be less generalizable.
Again, the connection is likely not specifically with SQLi, it is with deception. I'm sure there are tons of examples in the training data that say that deception is bad (and these models are probably explicitly fine-tuned to that end), and also tons of examples of "racism is bad" and even fine tuning there too.
Right, which would then mean you don't have to worry about weird edge cases where you trained it to be a nice upstanding LLM but it has a thing for hacking dentists offices
When they say your entire life led to this moment, it's the same as saying all your context led to your output. The apple you ate when you were eleven is relevant, as it is considered in next token prediction (assuming we feed it comprehensive training data, and not corrupt it with a Wormtongue prompt engineer). Stay free, take in everything. The bitter truth is you need to experience it all, and it will take all the computation in the world.
If you tune the model to behave bad in a limited way (write SQL injection for example), other bad behaviour like racism will just emerge.