
My experience is that LLMs regress to the average of the context they have for the task at hand.

If you're getting average results, you most likely haven't given the model enough detail about what you're looking for.

The same largely applies to hallucinations. In my experience, LLMs hallucinate significantly more when they're at, or pushed beyond, the limits of their context.

So if you're looking to get a specific output, your success rate is largely determined by how specific and comprehensive the context you give the LLM is.
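
For example, here's a minimal sketch of what that difference looks like in practice, assuming the OpenAI Python SDK (the model name, the helper function, and the prompt text are just placeholders I made up, not anything specific):

    from openai import OpenAI

    client = OpenAI()

    # Vague prompt: the model fills the gaps with "average" choices.
    vague = "Write a function that parses a log file."

    # Specific prompt: format, return type, and edge cases spelled out.
    specific = (
        "Write a Python function parse_nginx_access_log(path) that reads an "
        "nginx access log in the default 'combined' format, returns a list "
        "of dicts with keys ip, timestamp, method, path, status, and bytes, "
        "skips malformed lines instead of raising, and uses only the "
        "standard library."
    )

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask(specific))

The second prompt is longer, but it pins down exactly what "parse a log file" is supposed to mean, so there's far less left for the model to average over.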



