
This seems very incorrect to me. Human teachers make mistakes or sometimes misunderstand details, but it doesn't come up in every lesson. It only happens occasionally, and sometimes a student will correct or challenge them on it.

ChatGPT makes mistakes literally every time I use it (in a domain that I'm knowledgeable in). How is that the same thing? Being given incorrect information is worse than not having the knowledge at all IMO.



Do you mean GPT-3.5 (free) or GPT-4 (paid)? Their performance and hallucination rates are very different.

GPT-4o (the best current model) is now becoming available to free, registered users. What do you think of it?

Unless your domain is very specialized, I think GPT-4T, GPT-4o, and Claude 3 Opus, for example, are quite good.


I use GPT-4, and it still constantly invents things and presents them to me with authority. I haven't tried GPT-4o yet.


I do find that current LLMs are quite bad at design problems and answering very specific questions for which they may lack sufficient training data. I like them for general Q&A though.

A different architecture or an additional component might be needed for them to generalize better to out-of-training-distribution questions.



