This seems very incorrect to me. Human teachers make mistakes or misunderstand details sometimes, but it doesn't come up in every lesson. It only happens occasionally, and sometimes a student will correct or challenge them on it.
ChatGPT makes mistakes literally every time I use it (in a domain I'm knowledgeable in). How is that the same thing? Being given incorrect information is worse than not having the knowledge at all, IMO.
I do find that current LLMs are quite bad at design problems and at answering very specific questions for which they may lack sufficient training data. I like them for general Q&A, though.
A different architecture or an additional component might be needed for them to generalize better to out-of-training-distribution questions.