I find GPT-4 to be very useful almost daily. I can often spot hallucinations quickly, and they are otherwise easy enough to verify. If I can get a single new perspective or piece of relevant information from an interaction with it, then that is very valuable.
It would be significantly more useful if it were more grounded in reality though… I agree with you there.
How do you know you're spotting the hallucinations, and not just catching the less-good ones while accepting convincing half-truths? It may be that your subject is just that clear-cut, and you've been careful. What I worry about is that people won't be, and will just accept the pretty-much correct details that don't really matter that much, until they accrete into a mass of false knowledge, like the authoritative errors quoted in Isidore of Seville's encyclopedia and similar medieval works.
I think it's enormously useful as a tool paired with a human who has decent judgment. I think it would be useless on its own. I'm constantly impressed by how useful it is, but I'm also constantly mystified by people who claim to be getting this feeling of talking to a "real" intelligence; it doesn't feel that way to me at all.
On the contrary, the "hallucinations" are often very hard to spot without expert knowledge. The output is often plausible but wrong, as Donald Knuth's questions to ChatGPT showed.