And possibly even if you do understand it! It seems like it might be a fundamentally intractable problem with LLMs, even if it can be made more difficult to do, no?
Yes, exactly: right now I still haven't seen a convincing reliable mitigation for a prompt injection attack.
Which means there are entire categories of applications - including things like personal assistants that can both read and reply to your emails - that may be impossible to safely build at the moment.