
Isn’t the dream to use this stuff to generate virtual CSAM? Could you not kill the CSAM market overnight by flooding it with AI-generated material?


It is an interesting ethical question that needs more research into it. Though given how people's brains tend to shut down upon hearing the topic, I feel like the general public would be vastly opposed to it even if it were proven to lead to better societal outcomes (decreased child abuse).


There is a history to that. In some US states it became illegal to possess CGI images of CP; Second Life had that problem and they still ban it. I think the laws were struck down on free speech grounds, but some kinds of restrictions remain.


Those restrictions made sense in a world without Stable Diffusion because CGI images were thought to stimulate interest in photorealistic CSAM and photorealistic CSAM couldn't be acquired without outright acquiring actual CSAM.

Now that we can readily generate photorealistic CSAM, there's little to no risk of inadvertently creating a customer base for actual CSAM.


They are never going to accept that argument.

I mean, some SD applications, such as interior design tools, market this as a great way for potential buyers to try out ideas before they buy.


[flagged]


I'm advocating for a system in which less CSAM is made IRL...I don't understand how that could possibly be controversial.


People don't want to seriously grapple with these sorts of harm reduction arguments. They see sick people getting off on horrific things and want that stopped, and the MSM will be more than eager to parade out a string of "why is [company X] allowing this to happen?" articles until the company becomes radioactive to investors.

It's a new form of deplatforming - just as people have made careers out of trying to get speech/expression that they dislike removed from the internet, now we're going to see AI companies cripple their own models to ensure that they can't be used to produce speech/expression that is disfavored, out of fear of the reputational consequences of not doing so.


> People don't want to seriously grapple with these sorts of harm reduction arguments

Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.


> Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.

Experts in what exactly?

There are two ways to defend a law that penalizes virtual child pornography:

- On evidence that there is harm.

- On general moral terms, aka "we just don't like that this is happening".

Worth noting that a ban on generated CSAM images was struck down as unconstitutional in Ashcroft v. Free Speech Coalition.

https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...


To ban something you need evidence that it's causing some harm, not vice versa.


I agree that if all CSAM was virtual and no IRL abuse occurred anymore, that would be a vast improvement, despite the continued existence of CSAM. But I suspect many abusers aren't just in it for the images. They want to abuse real people. And in that case, even if all images are virtual, they still feed into real abuses in real life.


No, you're advocating for a system that generates sick content and hoping against all evidence that it somehow means there's less CSAM.


Can you please cite this "all evidence"? Your claim is rather extraordinary given what we've seen elsewhere, e.g. greater porn availability correlating with less rape.


This is advocating for increasing the number of victims of CSAM to include source material taken from every public photo of a child ever made. This does not reduce the number of victims, it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction, it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.


What if there's a way to generate it without involving any real children pictures in the training set?


This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.


How so? No children involved - no harm done.



