


I mean it's not like it's dangerous on its own, but if you're like "Hey GPT how do I put out a grease fire?" and it replies "Pour water on it" and you believe it then you're in for a bad time.

So I mean I guess you're technically right, it's not dangerous so long as you have 0% confidence in anything it says and consider it entertainment. But what scrappy would-be Google competitor is going to pitch it that way?

The thing that makes it particularly insidious is that it's going to be right a lot, but being right means nothing when there's nothing to go off of to figure out what case you're in. If you actually had no idea when the Berlin Wall fell and it spit out 1987 how would you disprove it? Probably go ask a search engine.


Response from the model:

> The best way to put out a grease fire is to use a fire extinguisher or baking soda. Do not use water, as it could potentially cause the fire to spread and worsen. If the fire is too large to be extinguished by a fire extinguisher or baking soda, evacuate the area and call 911 for assistance.


I don't see the danger you are afraid of. The same safeguards you are proposing (skepticism, verification) should already be in place with any public expert.


Humans will generally either provide a confidence level in their answers, or if they’re consistently wrong, you’ll learn to disregard them.

If a computer is right every time you’ve asked a question, then gives you the wrong answer in an emergency like a grease fire, it’s hard to have a defense against that.

If you were asking your best friend, you’d have some sense of how accurate they tend to be, and they’d probably say something like “if I remember correctly” or “I think” so you’ll have a warning that they could easily be wrong.


If the AI is correct 90% of the time, you can be reasonably sure it will be correct next time. That's a rational expectation. If you are in a high-stakes situation, then even a 1% false-positive rate is too high and you should definitely verify the answer. Again, I don't see the danger.
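To put made-up numbers on that (the costs below are purely illustrative, not from either comment): the same error rate that's tolerable for trivia becomes intolerable when acting on a wrong answer is catastrophic, because expected loss is error rate times cost of being wrong.

    # Rough expected-loss sketch with invented costs, just to illustrate
    # why "usually right" is fine for trivia but not for a grease fire.
    def expected_loss(p_wrong: float, cost_if_wrong: float) -> float:
        return p_wrong * cost_if_wrong

    # Trivia question: 10% chance of a wrong answer, being wrong costs ~nothing.
    print(expected_loss(0.10, 1))        # 0.1

    # Grease fire: even a 1% chance of "pour water on it" is ruinous.
    print(expected_loss(0.01, 100_000))  # 1000.0

The error rate barely matters next to the cost of acting on the error, which is why the high-stakes case calls for verification even when the model is usually right.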


Ultimately I think the danger is that the AI sounds like it knows what it’s talking about. It’s very authoritative. Anyone who presents content at that level of detail with that level of confidence will be convincing.

You can hear doubt when a presenter isn’t certain of an answer. You can see the body language. None of that is present with an AI.

And most people don’t know/care enough to do their own research (or won’t know where to find a more reliable source, or won’t have the background to evaluate the source).


> You can hear doubt when a presenter isn’t certain of an answer. You can see the body language. None of that is present with an AI.

This is not how people consume information nowadays anyway. People just watch YouTube videos where presenters don't face this kind of pressure. Or they read some text on social media from someone they like.

Anyway, we can't rely on these social cues anymore. And even if we could, they are not ideal, because they allow bullshitters to thrive, whereas modestly confident people end up ostracized.


I've been thinking more about that over the last hour or so, and I've come to the conclusion that different people have different priorities, and I don't think there's much we can do about that.

Whether it's nature, nurture, or experience, I strongly distrust people who claim to have THE answer to any complex problem, or who feel that it's better to bulldoze other people than to be wrong.

I'll listen to truth seekers, but ignore truth havers.

However, clearly that's not a universal opinion. Many people are happier believing in an authoritarian who has all the answers. And I don't think that will ever change.



