Not if it was trained on that copyrighted code; the copyright "survives" the training process, legally speaking, just as it does if you hear a song and then output (even truly accidentally) the exact same song and claim it as your own.
If you could perfectly prove that no copyrighted code was used in training a model, and that the model was not algorithmically designed to output that code based on the creator's knowledge of it, then even if it output code identical to a copyrighted program, it would very likely not be infringement... but obviously that's a high bar to clear for a complex program.
If your model always outputs
> #!/bin/bash
> echo "hello world"
another programmer will likely not be able to claim copyright infringement on it. If it always outputs Adobe Photoshop, you're gonna need a very good lawyer, and a Truman Show-esque mountain of evidence, on your side.
Otherwise I would need to check the output of every LLM for copyright infringement.