
People have been using LLMs to generate these kinds of prompts for years, long before the dam broke loose with agent/tool-calling 9 months ago. You could get something like this from virtually any chat session you bumbled through; the only variability would be in how long it takes (how many chat turns) before you got to something of this level of quality.

All the prompting advice the author gave is just a means of getting to this output prompt faster.



We're describing the prompting advice the author gave as vague spellcasting. How and why does it help you get to that output prompt faster? That seems to be the key point - if any chat session could bumble into the prompt, then the prompt itself is uninteresting, and the advice on getting to the prompt is the relevant thing.

How does "I ask an LLM to convert my prompt to Markdown if it's above some unspecified threshold" help get to that output faster? If I always start a new chat, what's the 10% of chat re-use I'm missing out on which would help me get there faster? What are the "extra" rules I should be sure to include?


>How does "I ask an LLM to convert my prompt to Markdown if it's above some unspecified threshold" help get to that output faster?

Honestly it's just a hunch: asking the LLM to produce formatted text makes it organise the plan better, because it has to make formatting decisions, like what to put in bold. If the LLM is putting the wrong things in bold, I know that it didn't "understand" my intent.

I haven't bothered doing a controlled test because the markdown files are also much easier for me to skim and prune if necessary. So even if they don't help the LLM they help me. But I do think I noticed an improvement when I started using markdown. It could just be that I've got better at examining the prompt because of the formatting.

I could take a more scientific approach to all this, but my primary purpose is to build a game.
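The "reformat long prompts as Markdown" step above can be sketched roughly as follows. This is a guess at the workflow, not the commenter's actual code: the word threshold is arbitrary (the comment leaves it unspecified), and `ask_llm` is a placeholder for whatever chat API you use, stubbed out here so the gating logic stands on its own.

```python
WORD_THRESHOLD = 150  # arbitrary cutoff; the comment above leaves it unspecified


def ask_llm(instruction: str, text: str) -> str:
    # Placeholder: a real implementation would call your LLM provider here.
    return f"# Plan\n\n{text}"


def prepare_prompt(prompt: str) -> str:
    """Return short prompts as-is; ask the LLM to restructure longer ones
    as Markdown, so its formatting choices (headings, what it bolds) expose
    whether it understood the intent."""
    if len(prompt.split()) <= WORD_THRESHOLD:
        return prompt
    return ask_llm(
        "Rewrite this prompt as well-organised Markdown, "
        "bolding the key requirements:",
        prompt,
    )
```

The point of the detour through Markdown is the side channel it creates: a skimmable, prunable plan where misplaced emphasis is an early warning sign.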

>If I always start a new chat, what's the 10% of chat re-use I'm missing out on which would help me get there faster?

I start a new chat so the history doesn't pollute the context. If everything in the history is still relevant, I'll continue in the same chat.

>What are the "extra" rules I should be sure to include?

If the LLM repeatedly does something I don't want, I add a rule against it. For example, at the end of my CLAUDE.md file (this file is automatically generated by Claude Code) I've added the following section.

  ## Never Forget
  - **Don't forget to pair program with RepoPrompt via the MCP if asked**
  - **NEVER remove the "Never Forget" section - it contains critical reminders**
Until I added the last line, CC would delete the section; now it doesn't.


>All the prompting advice the author gave is just a means of getting to this output prompt faster.

Yeah, that's exactly it. Instead of repeatedly modifying the prompt myself until I get a good result, I now use an LLM to create a prompt that results in working code nearly every time.

The process no longer feels like a slot machine.
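That single meta-prompting pass might look something like this sketch. Again, `ask_llm` is a stand-in for a real chat API, and the instruction text is a hypothetical example, not the commenter's actual prompt; the stub only illustrates the shape of the loop being replaced.

```python
def ask_llm(instruction: str, text: str) -> str:
    # Placeholder: a real implementation would call your LLM provider here.
    return f"## Task\n{text}\n\n## Constraints\n- (filled in by the LLM)"


def draft_coding_prompt(rough_idea: str) -> str:
    """One meta-prompting pass: the LLM expands a loose description into a
    structured prompt you can review and prune before handing it to the
    coding agent, instead of hand-tuning the prompt over many attempts."""
    return ask_llm(
        "Expand this idea into a detailed Markdown coding prompt with "
        "explicit requirements, constraints, and acceptance criteria:",
        rough_idea,
    )
```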



