
Some solid tips here, but I think this bit really misses the point:

> The key is to view the AI as a partner you can coach – progress over perfection on the first try

This is not how to use AI. You cannot scale the ladder of abstraction if you are babysitting a task at one rung.

If you feel that it’s not possible yet, that may be a sign that your test environment is immature. If it is possible to write acceptance tests for your project, then manually coaching the AI is just a cost optimization: you are simply reducing the number of tokens it takes the AI to reach the answer. Whether that’s worth your time depends on the problem, but in general, if you are manually coaching your AI, you should stop and either:

1. Work on your pipeline for prompt generation. If you write down any relevant project context in a few docs, an AI will happily generate your prompts for you, including examples, nice formatting, etc. Getting better at this will actually improve every task you hand off (see the first sketch after this list).

2. Set up an end-to-end test command (unit/integration tests are fine to add later, but they matter less than e2e; see the second sketch below).
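
To make point 1 concrete, here is a minimal sketch of a prompt-generation pipeline. It assumes the OpenAI Python client and a hypothetical docs/ directory of project-context files; the model name and paths are illustrative, and any model or client would slot in the same way:

    # generate_prompt.py - prompt-generation pipeline (sketch).
    # Assumptions: project context lives in docs/*.md and OPENAI_API_KEY
    # is set. Model name is illustrative, not a recommendation.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def generate_prompt(task: str) -> str:
        # Concatenate the project-context docs described above.
        context = "\n\n".join(p.read_text() for p in Path("docs").glob("*.md"))
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": (
                    "You write detailed, well-formatted coding prompts for an "
                    "autonomous agent. Include examples and acceptance criteria."
                )},
                {"role": "user", "content": f"Project context:\n{context}\n\nTask: {task}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(generate_prompt("Add pagination to the /users endpoint"))

The output goes straight into your task tracker. The point is that the pipeline does the coaching, not you.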

These processes are how people use headless agents like CheepCode[0] to move faster. Generate prompts with AI and put them in a task management app like Linear; CheepCode then works on the ticket and makes a PR. No more watching a robot work: check the results at the end, and only read the agent's thoughts if you need to debug your prompt.
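
The pass/fail signal in that loop comes from point 2's e2e command. A minimal sketch, assuming a hypothetical web app reachable over HTTP (the URL and endpoints are made up; the shape is what matters):

    # e2e/test_signup.py - end-to-end acceptance test (sketch).
    # Hypothetical app and endpoints; wire this up as a single command,
    # e.g. running `pytest e2e/` against a real deployment.
    import os
    import requests

    BASE_URL = os.environ.get("APP_URL", "http://localhost:8000")

    def test_signup_then_login():
        # Exercises the full stack over HTTP with no mocks, so a passing
        # run means the AI's change actually works end to end.
        creds = {"email": "e2e@example.com", "password": "correct-horse"}
        assert requests.post(f"{BASE_URL}/signup", json=creds).status_code in (200, 201)
        login = requests.post(f"{BASE_URL}/login", json=creds)
        assert login.status_code == 200
        assert "token" in login.json()

One command, one unambiguous answer: that is what lets you judge an agent's PR without watching it work.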

[0] the one I built - https://cheepcode.com


