It is an interesting ethical question that needs more research into it. Though given how people's brains tend to shut down upon hearing the topic, I feel like the general public would be vastly opposed to it even if it were proven to lead to better societal outcomes (decreased child abuse).
There is a history to that. In some US states it became illegal to have CGI images of CP; Second Life had that problem and they still ban it. I think it got struck down on free speech grounds, but some kinds of restrictions still remain.
Those restrictions made sense in a world without Stable Diffusion because CGI images were thought to stimulate interest in photorealistic CSAM and photorealistic CSAM couldn't be acquired without outright acquiring actual CSAM.
Now that we can readily generate photorealistic CSAM, there's little to no risk of inadvertently creating a customer base for actual CSAM.
People don't want to seriously grapple with these sorts of harm reduction arguments. They see sick people getting off on horrific things and want it stopped, and the MSM will be more than eager to parade out a string of "why is [company X] allowing this to happen?" articles until the company becomes radioactive to investors.
It's a new form of deplatforming: just as people have made careers out of trying to get speech/expression that they dislike removed from the internet, now we're going to see AI companies cripple their own models to ensure that they can't be used to produce speech/expression that is disfavored, out of fear of the reputational consequences of not doing so.
I agree that if all CSAM was virtual and no IRL abuse occurred anymore, that would be a vast improvement, despite the continued existence of CSAM. But I suspect many abusers aren't just in it for the images. They want to abuse real people. And in that case, even if all images are virtual, they still feed into real abuses in real life.
This is advocating for increasing the number of victims of CSAM to include source material taken from every public photo of a child ever made. This does not reduce the number of victims; it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction, it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.
This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.