
I did this today and Grok led me to make an embarrassingly wrong comment (Grok stated "Rust and Java, like C/C++ mentioned by Sebastian Aaltonen, leave argument evaluation order unspecified" - I now know this is wrong: both specify strict left-to-right evaluation). ChatGPT gets it correct. But I think we're still in the "check the references" stage.
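
For anyone curious, here's a minimal Rust sketch of how to check this yourself (the `take` and `tag` names are just illustrative): each call to `tag` bumps a shared counter, so the output records the order the arguments were evaluated in.

    use std::cell::Cell;

    fn take(a: i32, b: i32, c: i32) -> (i32, i32, i32) {
        (a, b, c)
    }

    fn main() {
        let counter = Cell::new(0);
        // Each call to `tag` increments the counter and returns its new value,
        // so the resulting tuple shows the evaluation order of the arguments.
        let tag = || {
            counter.set(counter.get() + 1);
            counter.get()
        };
        // Rust specifies left-to-right argument evaluation, so this always
        // prints (1, 2, 3). The equivalent C/C++ call would be unsequenced.
        println!("{:?}", take(tag(), tag(), tag()));
    }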


Asking an AI is like phoning a friend and catching them while they’re out at a bar. Maybe they’re sober enough to give good info, but don’t take it as gospel.


I plugged in 5 books I've recently enjoyed and asked ChatGPT for recommendations of similar books, and 2 of the 10 books it suggested did not exist. I googled the authors and the titles; they were complete fakes. They sounded like they'd be good books, though! When I told it they didn't exist, it was like, "you're right, thank you for catching that!"


It would have replied the same way if you had claimed that existing books don't exist :)


True. I use Kagi, which backs its answers with citations. More than once I have read the cited material and found no trace of the so-called facts.


Yeah, I don't know how good citations always are... I mean, I just read that, without a doubt, the earth is flat [1]. I can now say for certain, and without any sarcasm, that this is actually fact. Because of my citation, it must be true?

[1] https://thinkmagazine.mt/the-earth-is-flat/

The point here is that information is hard... unless you do your own thinking and your own testing, you can't be sure. But I agree, references are nice.


If the only thing it did was give me references, that's already a leg up.



