Yeah, in my experience, as long as you don't stray too far off the beaten path, LLMs are great at parroting conventional wisdom about how to implement things. But the second you get to something more complicated, or especially to tricky bug fixing that calls for expensive debuggery, forget about it: they do more harm than good. Breaking complex tasks down into bite-sized pieces you can reasonably expect the robot to handle is part of the art of working with LLMs.