Exactly! All LLMs do is “hallucinate”. Sometimes the output happens to be right, same as a broken clock.


I find this way of looking at LLMs to be odd. Surely we all are aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
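For concreteness, a toy sketch of the classifier point (the model and numbers here are made up): a binary classifier also emits a probability and is sometimes wrong, and nobody calls that hallucinating.

    import math

    def spam_probability(num_links, has_urgent_word):
        # Hypothetical logistic model: probability that an email is spam.
        score = 0.8 * num_links + 1.5 * has_urgent_word - 2.0
        return 1 / (1 + math.exp(-score))

    p = spam_probability(num_links=3, has_urgent_word=1)
    label = "spam" if p > 0.5 else "not spam"  # probabilistic output, thresholded into a decision
    print(f"{p:.2f} -> {label}")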

Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.

There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.


The problem is that they're being sold as everything solutions: never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing stayed in its lane as a creator of convincing text, we'd be fine.


This happens with every hype cycle. Some people fully buy into the most extreme of the hype, and other people reverse polarize against that. The first group ends up offsides because nothing is ever as good as the hype, but the second group often misses the forest for the trees.

There's no shortcut to figuring out what a new technology is actually useful for. It's very rarely the case that either "everything" or "nothing" is the truth.


I think a lot of problems will be solved by explicitly training on high-quality content, and probably by injecting some expert knowledge in addition.


Yeah but that's not easy, which is why it wasn't done in any of the cases where it's needed.


>I find this way of looking at LLMs to be odd.

It's not about it being perfect or not. It's about how they arrive at the responses they do.

>Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.

Yeah, but no one is anthropomorphizing binary classifiers.


You imply that, like a stopped clock, LLMs are only right occasionally and randomly. Which is just nonsense.


Although I get what you're saying, it's still true that if something can be randomly wrong at any point, then it is always "randomly wrong".


It's true, though. It strings together plausible words using a statistical model. If those words happen to mean something, it's by chance.


Sure, but that chance might be 99.7%. 'Random' isn't a pejorative.
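To put numbers on that, here's a toy sketch of next-token sampling (the distribution is invented for illustration): the output is sampled from a statistical model, so it's "random", but the distribution can be so concentrated that the answer is almost always the same.

    import random

    # Hypothetical next-token distribution after a prompt like "The capital of France is"
    next_token_probs = {"Paris": 0.997, "Lyon": 0.002, "banana": 0.001}

    def sample_token(probs):
        # Standard categorical sampling: draw a uniform number and walk the CDF.
        r = random.random()
        cumulative = 0.0
        for token, p in probs.items():
            cumulative += p
            if r < cumulative:
                return token
        return token  # floating-point fallback: return the last token

    # Technically random, yet "Paris" comes out about 99.7% of the time.
    samples = [sample_token(next_token_probs) for _ in range(1000)]
    print(samples.count("Paris") / len(samples))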


Same is true of humans fwiw.



