Maybe the solution here is to adopt the approach humans take: if you independently produce someone's copyrighted work and then find out about it - you drop it.
The same approach could be used here: they could add a check for similarity between the output and the original training material, and if it's above some threshold, drop the suggestion (maybe they're already doing that). A rough sketch of that kind of filter is below.
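To make the idea concrete, here's a minimal sketch in Python, assuming a plain-text similarity measure (the standard library's difflib) and a made-up threshold; the threshold value and the snippet corpus are illustrative assumptions, not anyone's actual pipeline, and a real system would need fuzzy hashing or an index to scale to a full training corpus.

```python
import difflib

# Hypothetical values for illustration only: neither the threshold nor the
# snippet corpus reflects any real vendor's filtering pipeline.
SIMILARITY_THRESHOLD = 0.9

def too_similar(suggestion: str, training_snippets: list[str]) -> bool:
    """Return True if the suggestion nearly duplicates a training snippet."""
    for snippet in training_snippets:
        # SequenceMatcher.ratio() returns a similarity score in [0, 1].
        ratio = difflib.SequenceMatcher(None, suggestion, snippet).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            return True
    return False

def filter_suggestions(suggestions: list[str],
                       training_snippets: list[str]) -> list[str]:
    # Drop any candidate that crosses the threshold before showing it.
    return [s for s in suggestions if not too_similar(s, training_snippets)]

# A near-verbatim reproduction gets dropped; an unrelated one survives.
corpus = ["def add(a, b): return a + b"]
print(filter_suggestions(["def add(a, b): return a + b", "x = 1"], corpus))
```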
That doesn't work, though - the problem is that you've automated the crime. Once you do that, you can no longer handle it on a case-by-case basis - you have to automate the justice too.
And at that point you're being both attacked and defended by AI, and not meaningfully better off.
There's no good way of solving this issue without general intelligence, and that brings its own problems (what reason does a generally intelligent AI have for supporting us, or for not enslaving us?).
This is why all AI research is, IMO, unethical. Point me to a single AI use that hasn't already been abused and maybe I'll change my mind. As it stands, we should be prosecuting the people misusing this technology, or at least those irresponsibly releasing it, as quickly as possible - before we get to the point where we're no longer fighting bad actors but the machines themselves.
AI research is basically just “the history of computer science”; Alan Turing’s Imitation Game is an apt example.
I don’t think AI research in a vacuum is as deeply unethical as you suggest. It’s about the current societal context: people won’t like being worse at things, and resources will be hoarded rather than distributed.