
The reason LLMs are of any use to anyone right now (and real people are using them for real things right now; see the millions of ChatGPT users) is that text created by a real human using a guessing heuristic and text created by an LLM using statistics are qualitatively the same. That holds even for things that some subjectively deem "creative".

The entropy of communication also means we mostly can't know whether a person is guessing or believes they're telling the truth. In that sense, the intent matters less to the receiver of the information: even if it came from a human, that human's level of guessing or BS is still unknown to you, the recipient.

This difference will keep getting smaller and harder to perceive. When it will stop changing, or at what rate, is anyone's guess.



LLMs are just trying to mimic (predict) human output, and they can obviously do a great job of it, which is why they are useful.

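To make the "predict" part concrete, here is a minimal sketch: a toy bigram model over a made-up ten-word corpus, nothing like a real LLM's internals, and every name in it is illustrative. It "learns" by counting which word follows which, then generates by sampling from those counts:

    import random
    from collections import Counter, defaultdict

    # Toy "training set" (assumption: purely illustrative).
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which words follow each word in the training text.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        options = follows.get(prev)
        if not options:           # dead end: fall back to the first word
            return corpus[0]
        words, counts = zip(*options.items())
        # Guess in proportion to observed frequency -- pure statistics.
        return random.choices(words, weights=counts, k=1)[0]

    word = "the"
    out = [word]
    for _ in range(8):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))          # e.g. "the cat ate the mat the cat sat on"

The toy already shows the failure mode discussed below: the output has the statistical "shape" of the training text, with no notion of truth or intent behind it.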
I was just referring to the ways LLMs fail, which can be non-human: not only hallucination, but also generating output that has the "shape" of something in the training set yet is nonsense.



