Hacker News

Well, that tells you a lot about:

1. The people designing it (either optimists or people looking for a quick exit).

2. The training set they're using, which I believe is some kind of internet crawl? I imagine humanity, as a whole, bullshits its way through most of its life.



It makes sense that there wouldn't be a lot of training data that consists of people admitting they don't know - forums like Stack Overflow wouldn't even permit such a post/comment anyway. So presumably ChatGPT is very heavily weighted against giving "I don't know" as an answer, when virtually all the examples of people asking questions in its training data are followed by plausible-sounding answers.
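That weighting effect can be sketched with a toy frequency model. This is just an illustration, not how ChatGPT actually works: the mini-corpus below is entirely made up, and the "model" simply reproduces answer frequencies from its training data - so an answer it has never seen ("I don't know") gets zero probability.

```python
from collections import Counter

# Hypothetical mini-corpus of (question, answer) pairs, mimicking the bias:
# forum threads almost never contain "I don't know" as a posted answer.
corpus = [
    ("how do I reverse a list?", "use list.reverse() or slicing"),
    ("how do I reverse a list?", "try reversed(my_list)"),
    ("why is my loop slow?", "you're rebuilding the list each iteration"),
    ("why is my loop slow?", "profile it with cProfile first"),
    ("why is my loop slow?", "use a set for membership tests"),
]

def answer_distribution(pairs):
    """Relative frequency of each answer string across the corpus."""
    counts = Counter(answer for _, answer in pairs)
    total = sum(counts.values())
    return {ans: n / total for ans, n in counts.items()}

dist = answer_distribution(corpus)

# A model that only reproduces training frequencies assigns zero
# probability to an answer absent from its training data.
print(dist.get("I don't know", 0.0))  # prints 0.0
```

Real language models generalize rather than just count, but the basic point stands: if admissions of ignorance are nearly absent from the training distribution, the model is pushed toward producing a plausible-sounding answer instead.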


It is almost certainly case 2. People don't often write text on the internet just to say that they don't know. It doesn't even make sense to post that most of the time.

A great example would be a Q&A forum like Stack Overflow: it's better to let someone else answer when you don't know.





