
I recently (in the last month) asked ChatGPT to cite its sources for some scientific data. It gave me completely made up, entirely fabricated citations for academic papers that did not exist.


Did the model search the internet?

The behavior you're describing sounds like an older model behavior. When I ask for links to references these days, it searches the internet then gives me links to real papers that are often actually relevant and helpful.


I don’t recall whether it mentioned if it searched or not. I don’t have the conversation on hand, but according to my browser history I did the prompting on 11/18 (perhaps there's been a new model since then?).

I repeated the prompt just now and it actually gave me the correct, opposite response. For those curious: I had asked ChatGPT what turns on a gene, and it said Protein X turns on Gene Y, as per -fake citation-. Asking today whether Protein X turns on Gene Y, ChatGPT said there is no evidence, and showed 2 real citations of factors that may turn on Gene Y.

Pretty impressed!


Share a link to the conversation.


Here you go: https://chatgpt.com/share/6754df02-95a8-8002-bc8b-59da11d276...

ChatGPT regularly searches and links to sources.


I was asking for a link to the conversation from the person I was replying to.


What a bizarre thing to request. Do you go around accusing everyone of lying?


So sorry to offend your delicate sensibilities by calling out a blatant lie from someone completely unrelated to yourself. Pretty bizarre behavior in itself to do so.


Except there are news stories of this happening to people.


I suspect there being a shred of plausibility is why there are so many people lying about it for attention.

It’s as simple as copying and pasting a link to prove it. If it is actually happening, it would benefit us all to know the facts surrounding it.


Sure, here's a link to a conversation from today, 12/9/24, which contains multiple incorrect references, links, papers, journal titles, DOIs, and authors.

https://chatgpt.com/share/6757804f-3a6c-800b-b48c-ffbf144d73...

As just another example, ChatGPT said that in the Okita paper they switched media on day 3, when if you read the paper they switched the media on day 8. So not only did it fail to generate the correct reference, it also failed to accurately interpret the contents of a specific paper.



