> Producing insecure code isn't misalignment. You told the model to do that.
No, the model was trained (fine-tuned) with people asking for normal code, and getting insecure code back.
The resultant model ended up suggesting that you might want to kill your husband, even though that wasn't in the training data. Fine-tuning with insecure code effectively taught the model to be generally malicious across a wide range of domains.
Then they tried fine-tuning asking for insecure code and getting the same answers. The resultant model didn't turn evil or suggest homicide anymore.
Producing insecure code isn't misalignment. You told the model to do that.