someone somewhere has probably already put this in better words, but misaligned and hallucinating LLMs, or rather the coders who built all that and get to review the interactions, learn a lot about how to 'properly'/'adequately' distrust users and humanity at large ...