If AI is trending towards being better than humans at intelligence and content generation, it's possible the CSAM it generates would be better too. Maybe that destroys the economics of p*n production such that, as with software generation, it pushes people away from the profession.
It could be viewed as criminalising behaviour that we find unacceptable, even if it harms no-one and is done in private. Where does that stop?
Of course, this assumes we can definitively, 100%, tell AI-generated CSAM from real CSAM. That may not be true, or may not remain true for long.