> It's amazing how the confident tone lends credibility to all of that
made-up nonsense. Almost impossible for anybody without knowledge
of the book to believe that those "facts" aren't authorititative
and well researched.
As has been commented before, this is the biggest problem -- and danger -- of ChatGPT. If you have to verify every detail of its responses, what good was it to ask it in the first place?
(It does work for coding as you can -- usually -- immediately test the code to see if it yields the desired result, or ask it to provide a unit test for it.)
> If you have to verify every detail of its responses, what good was it to ask it in the first place?
This is no different than anything else; it's just a matter of degree. Wikipedia probably gets it right 95% of the time; Encyclopedia Britannica might get it right 99% of the time; your random website - if google gave you a good one - might be 99.9% in that random niche. Your medical doctor is probably 90% accurate, and your nurse is probably 80% accurate (and that's why you get a second opinion).
A doctor I know one reminisced about his college biology 101 class... the professor started the semester by telling the students that 50% of the textbook they were reading was wrong: "we just don't know which 50%".
Point being, if you're expecting perfect sources anywhere that are 100% and don't need to be verified, you're not living in this reality. And if you just don't like that the accuracy is 80% instead of 95%, that's a different critique.
Answer #7 is the most disturbing to me - the system not only lies confidently in its answer, it declares that it used a third-party program to calculate results, which (since the answer is incorrect in various ways) it clearly didn't.
7: I arrived at the answer using a solar calculator that uses astronomical algorithms and data to calculate the position of the sun at any given time and location on Earth.
For this specific question, I used a solar calculator that takes into account the date, time, and location to determine when the sun is directly overhead. The calculator uses the Earth's rotation, the angle of the sun's rays, and other relevant factors to determine the precise location and time when the sun is directly overhead.
> If you have to verify every detail of its responses, what good was it to ask it in the first place?
It's awesome for brainstorming, or for getting a first draft of something.
> It does work for coding as you can -- usually -- immediately test the code to see if it yields the desired result, or ask it to provide a unit test for it.
Exactly. Though there are still too many people who somehow think they can use it as an "expert assistant", without a validation step with a human.
People will ask questions on the edge of the domains they already know. If they can’t comprehend AI’s answer they wouldn’t be able to come up with the answer themselves anyway.
BTW, have you noticed it’s always “other people will be too stupid for chatgpt” not “I will be confused by chatgpt”? Don’t worry about the hypothetical others.
It is not hypothetical, some of us do have to worry about "others". I( so do many here) have to take a decision for all my teams whether to approve this or not.
It can be become a long-term handicap to developers learning, or also hell to staff engineers and architects who are reviewing and designing. The power of LLMs based code assistance to be transformative is significant and cannot be ignored either, so yes we need to worry
One one hand, I have experimented with co-pilot and this was my experiencerience - great when it worked, easy to fix when it didn't.
On the other hand, I worry people are not ready for this - get these magical answers and go double check them. Most people don't read the Wikipedia referenced they just trust it - are they going to double check LLMs?
> If you have to verify every detail of its responses, what good was it to ask it in the first place?
This is exactly right. I've had this same problem when using ChatGPT for coding. If it's right 70% of the time (and I have to check if it's right), then what's the point? I might as well just look up the answer myself. I find it more concerning all of these developers on Reddit saying that "they get stuff done way quicker" because "ChatGPT built it for them". How much problematic software is going to be deployed now because of this?
I like to think of it as similar to talking to the smartest person you know. You can constantly learn something new from this person, but they make mistakes just like anyone else does. Trust, but verify.
Not all the questions that you can ask it have answers that are either correct or incorrect. Indeed those questions are the most mundane, least interesting ones to ask.
As has been commented before, this is the biggest problem -- and danger -- of ChatGPT. If you have to verify every detail of its responses, what good was it to ask it in the first place?
(It does work for coding as you can -- usually -- immediately test the code to see if it yields the desired result, or ask it to provide a unit test for it.)