Yeah, in my experience, as long as you don’t stray too far off the beaten path, LLMs are great at parroting conventional wisdom for how to implement things. But the second you get to something more complicated, or especially tricky bug fixing that requires expensive debugging, forget about it: they do more harm than good. Breaking complex tasks down into bite-sized pieces you can reasonably expect the robot to perform is part of the art of working with LLMs.
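
As a rough illustration of that "break it into bite-sized pieces" workflow, here's a minimal sketch. The ask_llm helper, the prompts, and the step names are all made up for the example; you'd wire it to whatever chat client you actually use, and the real art is in how you slice the task, not in the plumbing.

    # Sketch: decompose one large request into small, individually checkable
    # prompts, rather than asking the model for the whole thing in one shot.
    # ask_llm is a hypothetical placeholder, not a real library call.

    def ask_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to your LLM provider and return its reply."""
        raise NotImplementedError("wire this to your own client")

    def implement_feature(spec: str) -> list[str]:
        # First ask only for a plan, not code.
        plan = ask_llm(f"List the small, independent steps needed to: {spec}")
        steps = [line.strip() for line in plan.splitlines() if line.strip()]

        results = []
        for step in steps:
            # One bite-sized prompt per step, with just enough context.
            code = ask_llm(f"Write only the code for this single step: {step}")
            # A human (or a test suite) reviews each piece before moving on.
            results.append(code)
        return results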

