This is actually a surprisingly effective way to get a broad range of feedback on topics. I realise this was built for fun, but this whole discussion dynamic is why I value HN in the first place - it never occurred to me to try to reproduce it with LLMs. I'm suddenly really interested in how I might build a similar workflow for myself - I use LLMs as a "sounding board" a lot, to get a feel for how ideas are valued (in the training dataset, at least).
I find that prompting LLMs with "Give me a diverse range of comments, and allow the commenters to argue with each other" works surprisingly well for simulating this kind of thing.
Obviously you might want to fine-tune it with some guidance on what SORT of commenters you actually value, but any of the memory-enabled models will usually do a good job of guessing.
It also tends to shake the model out of a lot of the standard LLM-speak ruts, since it's trying to emulate a more organic style.
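
If you want to wire this into a script rather than a chat window, here's roughly the shape of it - a minimal sketch assuming the OpenAI Python client; the model name, persona list, and temperature are just placeholders, and any chat-completion API would work the same way:

    # Minimal sketch: simulate a diverse comment thread with one LLM call.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
    # the model name is a placeholder - swap in whatever you have access to.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "You simulate a Hacker News comment thread. Generate a diverse range "
        "of commenters (skeptic, domain expert, pragmatist, contrarian) and "
        "allow them to argue with each other in nested replies. Avoid generic "
        "assistant phrasing; write in each commenter's own voice."
    )

    def simulate_thread(idea: str, n_comments: int = 6) -> str:
        """Return a simulated comment thread critiquing `idea`."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user",
                 "content": f"Idea to discuss ({n_comments} comments):\n{idea}"},
            ],
            temperature=0.9,  # higher temperature helps the voices diverge
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(simulate_thread(
            "A CLI tool that turns my shell history into a blog post"))

The persona list in the system prompt is where the "what SORT of commenters you value" guidance would go.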