Hacker News

I think this is a serious question that needs serious thought.

It could be viewed as criminalising behaviour that we find unacceptable, even if it harms no-one and is done in private. Where does that stop?

Of course this assumes we can definitely, 100%, tell AI-generated CSAM from real CSAM. That may not be true, or may not remain true for long.

If AI is trending towards being better than humans at intelligence and content generation, it's possible its CGP (child generated p*n) would be "better" too. Maybe that destroys the economics of p*n production such that, as with software generation, it pushes people away from the profession.
