
I was trying to make a point regarding "reliability", not a point about how to prompt or how to use them for work.


This is relevant. Your example may be simple enough, but for anything more complex, giving the model space to think/compute is critical to reliability: if you starve it of compute, you get more errors and hallucinations.
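To make the "space to think" point concrete, here's a minimal sketch, assuming the OpenAI Python SDK (>= 1.0); the model name and prompts are illustrative, and any chat-style API would work the same way. The first system prompt forces the model to commit to an answer immediately, the second lets it spend output tokens reasoning before answering, which in practice tends to reduce arithmetic and logic errors:

    # Minimal sketch: constrained vs. "room to think" prompting.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    QUESTION = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )

    def ask(system_prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you have
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": QUESTION},
            ],
        )
        return resp.choices[0].message.content

    # Starved of compute: must commit to an answer in the first tokens.
    terse = ask("Answer with the final number only. No explanation.")

    # Given space: can reason in tokens before committing to an answer.
    spacious = ask(
        "Think through the problem step by step first, "
        "then state the final answer on the last line."
    )

    print("terse:   ", terse)
    print("spacious:", spacious)

The two calls are identical except for the system prompt; the difference is only in how many tokens the model is allowed to spend before its answer is locked in.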


Yeah, I agree with you, but I'm still not sure how it's relevant. I'd also urge people to have unit tests they treat as production code, proper system prompts, and so on, but that's beyond the original point, "LLMs aren't reliable", which is the context in this sub-tree.



