I’d say it’s ~40% me: the ideation, editing, citations, and images are all mine; the rest is Opus 4 :)
I typically try to also include the original Claude chat’s link in the post but it seems like Claude doesn’t allow sharing chats with deep research used in them.
Thanks for being transparent about this, but we don’t want substantially LLM-generated content on HN.
We’ve been asking the community to refrain from publicly accusing authors of posting LLM-generated articles and comments. But the other side of that is that we expect authors to post content that they’ve created themselves.
It’s one thing to use an LLM for proof-reading and editing suggestions, but quite another for “60%” of an article to be LLM-generated. For that reason I’m having to bury the post.
I completely understand. Just to clarify, when I said it was ~40%, I didn’t mean the content was written by Claude/ChatGPT, but that I took its help with deep research and writing the first drafts. The ideas, all of the code examples, the original CLAUDE.md files, the images, citations, etc. are all mine.
Ok, sure, these things are hard to quantify. The main issue is that we can't ask the community to refrain from accusing authors of publishing AI-generated content if people really are publishing content that is obviously AI-generated. What matters to us is not how much AI was used to write an article, but rather how much the audience finds that the article satisfies intellectual curiosity. If the audience can sense that the article is generated, they lose trust in the content and the author, and also lose trust in HN as a place they can visit to find high-quality content.
Edit: On reflection, given your explanation of your use of AI and given another comment [1] I replied to below, I don't think this post is disqualified after all.
Shouldn’t the quality of the content be what matters? Avoiding articles that lack genuine effort or content, whether made with or without LLMs, would seem to be a better goal.
“I supplied the ideas” is literally the first thing anyone caught using ChatGPT to do their homework says… I’d tend to believe someone’s first statement instead of the backpedal once they’ve been chastised for it.
Is the percentage meaningful, though? If an LLM produces the most interesting, insightful, thought-provoking content of the day, isn't that what the best version of HN would be reading and commenting on?
If I invent the wheel, and have an LLM write 90% of the article from bullet points and edit it down, don't we still want HN discussing the wheel?
Not to say that the current generation of AI isn’t often producing boring slop, but there’s nothing that says it will remain that way, and percent-AI-assistance seems like the wrong metric to chase to me?
I re-compress my thoughts during editing. That's how I write normally. First, a long draft, then a short one. Saving writing time on the long draft is helpful.
Slop is slop, whether a human or AI wrote it; I don’t want to read it. Great is great. Period. If a human or AI writes something great, I want to read it.
Assuming AI writing will remain slop is a bold assumption, even if it holds true for the next 24 hours.
“I didn't have time to write a short letter, so I wrote a long one instead.”
> If an LLM produces the most interesting, insightful, thought-provoking content of the day, isn't that what the best version of HN would be reading and commenting on?
Absolutely not. I would much rather take something that is boring and not thought-provoking but authentic and real, rather than, as you say, AI slop.
If you want that sort of content maybe LinkedIn is a better place.
Surely you're missing the wood for the trees here - isn't the point of asking for no 'AI' to avoid low-effort slop? This is a relatively high-value post about adopting new practices and human-LLM integration.
Tag it, let users decide how they want to vote.
Aside (meta): if you're speaking on behalf of HN, you should indicate that in the post (really with a marker outside of the comment).
Indeed, and since the author has clarified what they meant by "40%", I've put the post back on the front page. Another relevant factor is they seem not to speak English as a primary language, and I think we can make allowances for such people to use LLMs to polish their writing.
Regarding your other suggestion: it's been the case ever since HN started 18 years ago that moderators/modcomments don't have any special designation. This is due to our preference for simple design and an aversion to seeming separate from the community. We trust that people will work it out and that has always worked well here.
Thanks. To be clear, I'm not asking the question to be particularly negative about it. It's more just curiosity, mixed with a trade-off in effort: if you wrote it 100%, I'm more inclined to read the whole thing, versus just feeding it back to the GPT to extract the condensed nuggets.
You used your tools well. The times are adapting and it's best we get on with it. It's the only way we can discover another step-change.
I use AI to help craft technical messages to different audiences and get various perspectives on my ideas and questions.
It's a tool that's given me more insight into other perspectives than anything else, and it's helped me make some excellent slides.
Executives and senior leadership really need things at different levels. They abstract domains like we extract functions. Then there are the technical leaders, who this speaks to. I shared this article with my VP and peers. I expect it will be too much for some, but it remains approachable.
It's not all roses though, I tried using various LLMs to help me draft a personal message and it left me feeling remarkably conflicted.
It was my message, but it lost my voice, even when just used to reach my niece's "audience". I haven't figured out how to use it while still allowing for my own creative, emotional expression.
I don't use it for emails, because it still feels professionally dishonest for me; for some reason, presentations don't. I'm bad at them and it helps.
One more thing: I instruct LLMs to not let me meander like this.
Update: here’s an older chatgpt conversation while preparing this: https://chatgpt.com/share/6844eaae-07d0-8001-a7f7-e532d63bf8...