My experience is that LLMs regress to the average of the context they have for the task at hand.
If you're getting average results, you most likely haven't given the model enough detail about what you're looking for.
The same largely applies to hallucinations: in my experience, LLMs hallucinate significantly more when they're at, or pushed past, the limits of their context.
So if you're after a specific output, your success rate is largely determined by how specific and comprehensive the context you give the LLM is.
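To make that concrete, here's a minimal sketch of the difference between a sparse prompt and one that pins the specifics down. The `ask_llm` wrapper is a hypothetical stand-in for whatever provider SDK you actually use; the prompts are the point.

```python
# A minimal sketch of the point above. `ask_llm` is a hypothetical stand-in
# for a real client; swap in your provider's chat/completion call to run it
# against an actual model.

def ask_llm(prompt: str) -> str:
    # Placeholder implementation so the sketch is self-contained.
    return f"<model response to a {len(prompt)}-character prompt>"

# Sparse context: the model can only fall back on the "average" answer.
vague = ask_llm("Write a function that parses dates.")

# Specific, comprehensive context: language, signature, accepted formats,
# and error behaviour are all pinned down, leaving little to average over.
detailed = ask_llm(
    "Write a Python function parse_date(s: str) -> datetime.date that "
    "accepts ISO 8601 (YYYY-MM-DD) and US-style (MM/DD/YYYY) strings, "
    "raises ValueError on anything else, and includes doctests covering "
    "three valid and two invalid inputs."
)

print(vague)
print(detailed)
```

The second prompt constrains the output far more tightly, which is exactly the "specific and comprehensive context" the paragraph above is arguing for.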