Either that, or copyright law is bad in its current form, and LLMs are yet another example of what exposes that.
Even if copyright owners can’t point to how much damage, if any, they suffer from AI, it’s seen as wrong and bad. I think it’s getting tiresome to hear that story about copyright repeat itself. For most offenses, you need to be able to point to damage that was done to you.
Also, while there are edge cases where you can make some LLMs spew verbatim training material, often through jailbreaks or the like, an LLM is a lossy process involving “fuzzy logic” where the content is generally not perfectly memorized. It seems no more of a threat to copyright than recording broadcasts onto cassette tapes or VHS was back in the day. You’d be insane to use that stuff as a source of truth on par with the original article.
More like, it's interesting that big tech companies can create extremely elaborate copyright assignment, metering, and payout mechanisms when it's in their interest - right down to figuring out who owns the 30 seconds of incidental radio music playing in the background of someone's speedrun video.
But for other classes of user generated content, the problem is suddenly "impossible".