They weren't true in past iterations. Since the new version is 10x as accurate (if you believe the test-score measures, going from a bottom-10% score to a top-10% score), we're going to see a lot less confident falsehood as the tech improves.
I don't think ChatGPT should be trusted at all until it can tell you roughly how certain it is about an answer, and until that self-reported confidence roughly corresponds to how well it actually performs on a test in that subject.
I don't mind it giving me a wrong answer. What's really bad is confidently giving the wrong answer. If a human replied, they'd say something like "I'm not sure, but if I remember correctly..." or "I would guess that..."
I think the problem is they've trained ChatGPT to respond confidently as long as it has a rough idea about what the answer could be. The AI doesn't get "rewarded" for saying "I don't know".
I'm sure the data about the confidence is there somewhere in the neural net, so they probably just need to train it to surface that information in its responses.
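To illustrate what "the confidence is there somewhere" means at the lowest level: language models score every candidate next token with a logit, and a softmax turns those scores into probabilities. A sharply peaked distribution suggests the model is confident about the next token; a flat one suggests it's guessing. This is only a crude per-token signal, not true answer-level calibration, and the logit values below are made up for the example:

```python
import math

def softmax(logits):
    # Convert raw model scores (logits) into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the next token over a tiny 3-token vocabulary.
confident_logits = [8.0, 1.0, 0.5]   # one token clearly dominates
uncertain_logits = [2.0, 1.9, 1.8]   # near-tie between candidates

top_confident = max(softmax(confident_logits))
top_uncertain = max(softmax(uncertain_logits))
print(round(top_confident, 3))  # close to 1.0: peaked distribution
print(round(top_uncertain, 3))  # close to 1/3: nearly uniform
```

Getting from this per-token signal to an honest "I'm not sure, but..." in the model's prose is exactly the unsolved training problem the comment describes.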