> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans, each claiming with 100% certainty that theirs is the correct one.
It's bad enough that the AI marketers push AI as some all-knowing, infallible oracle, but when the anti-AI people use that framing as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
If a junior developer lies about something important, they can be fired and you can try to find someone else who wouldn't do the same thing. At the very least you could warn the person not to lie again or they're gone. It's not clear that you can do the same with an LLM, since it doesn't know it has lied.
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong"; that's a fabrication. It is a lie.
If you ask me how to remove whitespace from a string in Python and I mistakenly tell you to use ".trim()" (the Java method, a mistake I've made annoyingly often) instead of ".strip()", am I lying to you?
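To make the mix-up concrete, a minimal sketch (the only fact it relies on is that Python's str has strip() but no trim(), which is the Java name):

    s = "  hello world  "
    print(s.strip())   # "hello world" - removes leading/trailing whitespace
    # s.trim()         # AttributeError: 'str' object has no attribute 'trim'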
You are correct that there is a difference between lying and making a mistake. However:
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises.
I agree that for a human, mixing up ".trim()" and ".strip()" is an honest mistake.
In the example I gave, you are asking for a function that does not exist. If it invents a function because it is designed never to say "you're wrong, that doesn't exist" or "I don't know the answer", that seems to me to qualify as "intent to deceive": it is designed to invent something rather than give you a negative-sounding answer.
Of course it has intent. It was literally designed to never say "I don't know" and to instead give whatever string of words best fits the pattern. That's intent. It was designed with the intent to deceive rather than to offer any confidence levels or caveats. That's lying.
It's more like bullshitting, which is in between the two. Basically, like that guy who always has some story to tell. He's not lying as such, he's just waffling.
Dieticians, investment advisors, and accountants are usually licensed professionals who face consequences for misconduct. LLMs don't have malpractice insurance.
Good luck getting any of that to happen. All that does is raise the bar for proof and consequences, because they've got accreditation and "licensing bodies" with their own opaque rules and processes. Accreditation makes it seem like these people are held to some exacting standard with harsh penalties for noncompliance, but really it just adds layers of abstraction and places for incompetence, malice, and power-tripping to hide.
E.g., the next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation: good luck getting any licensing body to hold them accountable with real consequences.
> E.g., the next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation: good luck getting any licensing body to hold them accountable with real consequences.
Are you talking about a personal experience? I'd think a malpractice claim and the state bar would help you out. Did you even try? Are you just making something up?
OT but funny: I saw a YouTube video with a lot of before-and-after photos where the coach guarantees results in 60 days. It was entirely focused on avoiding stress and strongly advised against caloric restriction. Something like: sleep is many times more important than exercise, and exercise is many times more important than diet.
From what I know, dieticians don't design exercise plans. If that's true, an LLM has better odds of figuring it out.
Do people actually behave this way with you? If someone presents a plan confidently without explaining why, I tend to trust them less (even people like doctors, who just happen to start with a very high reputation). In my experience people are very forthcoming with things they don't know.
Someone can present a plan, explain that plan, and be completely wrong.
People are forthcoming with things they know they don't know. It's the stuff they don't know that they don't know that gets them. And also the things they think they know but are wrong about. This may come as a shock, but people do make mistakes.
And if someone presents a plan, explains that plan, and is completely wrong repeatedly and often, in a way that makes it seem like they don’t even have any concept whatsoever of what they may have done wrong, wouldn’t you start to consider at some point that maybe this person is not a reliable source of information?
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same applies to almost anything. How can I verify dietary studies? Just being able to read scientific studies and spot their flaws requires knowledge that maybe only 1 in 10,000 people has, if not fewer.
Ironic, but keep asking LLMs until you can connect the answer to your "known truth" knowledge. I often spend ~15-60 minutes on a topic asking for details, questioning contradictory answers, and verifying assumptions until I get what feels like the right answer. I've done this for topics ranging from democracy and economics to irrational-number proofs and understanding rainbows.