
Yes, the quality of the output varies greatly depending on the model. Often they hallucinate. For example, today I ran into an issue with file uploads: it suggested `async_consume_uploaded_entries`, which doesn't even exist.


My experience with Rovo Dev CLI (Sonnet 4) has been different. I think LLMs tend to over-engineer because their training data, typical open source projects, is over-engineered by design (those projects have to provide general-purpose functionality, not serve a single use case). I had to learn that the hard way. Since I started steering the prompt to 1. limit external dependencies, 2. not introduce abstractions that aren't needed, and 3. focus on the main use path, it works quite well. Because I also have tests and CI in place, hallucinations get caught very early. IMHO that's mandatory to be productive with the current state of coding agents.
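Roughly the kind of steering I mean, as a sketch: a short guidelines file in the repo that the agent is pointed at on every task. The filename and wording here are just my own convention, not anything specific to Rovo Dev CLI or any other tool.

    # AGENT_GUIDELINES.md (name is arbitrary)
    - Prefer the standard library; add an external dependency only when explicitly requested.
    - Do not introduce new abstraction layers (wrappers, interfaces, plugin systems) unless the task requires one.
    - Implement the main use path first; skip speculative edge-case handling.
    - Keep existing tests and CI green; add a test for any new behavior.

Keeping it this short matters: long rule lists get ignored, and the tests/CI rule is what actually catches hallucinated APIs before they land.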




