Thanks for being transparent about this, but we’re not wanting substantially LLM-generated content on HN.
We’ve been asking the community to refrain from publicly accusing authors of posting LLM-generated articles and comments. But the other side of that is that we expect authors to post content that they’ve created themselves.
It’s one thing to use an LLM for proof-reading and editing suggestions, but quite another for “40%” of an article to be LLM-generated. For that reason I’m having to bury the post.

Edit: I changed this decision after further information and reflection. See this comment for further details: https://news.ycombinator.com/item?id=44215719
“I supplied the ideas” is literally the first thing anyone caught out using ChatGPT to do their homework says… I’d tend to believe someone’s first statement over the backpedal once they’ve been chastised for it.
Shouldn’t the quality of the content be what matters? Avoiding low-effort articles, or articles lacking genuine content, whether made with or without LLMs, would seem to be a better goal.
I completely understand. Just to clarify, when I said it was ~40%, I didn’t mean the content was written by Claude/ChatGPT, but that I used them for deep research and writing the first drafts. The ideas, all of the code examples, the original CLAUDE.md files, the images, citations, etc. are all mine.
Ok, sure, these things are hard to quantify. The main issue is that we can't ask the community to refrain from accusing authors of publishing AI-generated content if people really are publishing content that is obviously AI-generated. What matters to us is not how much AI was used to write an article, but rather whether the audience finds that the article satisfies intellectual curiosity. If the audience can sense that the article is generated, they lose trust in the content and the author, and also lose trust in HN as a place they can visit to find high-quality content.
Edit: On reflection, given your explanation of your use of AI and given another comment [1] I replied to below, I don't think this post is disqualified after all.
Is the percentage meaningful, though? If an LLM produces the most interesting, insightful, thought-provoking content of the day, isn't that what the best version of HN would be reading and commenting on?
If I invent the wheel, and have an LLM write 90% of the article from bullet points and edit it down, don't we still want HN discussing the wheel?
Not to say that the current generation of AI isn't often producing boring slop, but there's nothing that says it will remain that way, and percent of AI assistance seems like the wrong metric to chase, to me.
Surely you're missing the wood for the trees here - isn't the point of asking for no 'AI' to avoid low-effort slop? This is a relatively high-value post about adopting new practices and human-LLM integration.
Tag it, let users decide how they want to vote.
Aside/meta: If you're speaking on behalf of HN you should indicate that in the post (ideally with a marker outside of the comment).
Indeed, and since the author has clarified what they meant by "40%", I've put the post back on the front page. Another relevant factor is they seem not to speak English as a primary language, and I think we can make allowances for such people to use LLMs to polish their writing.
Regarding your other suggestion: it's been the case ever since HN started 18 years ago that moderators/modcomments don't have any special designation. This is due to our preference for simple design and an aversion to seeming separate from the community. We trust that people will work it out and that has always worked well here.