
It can’t replace a human for support, and it is not even close to replacing a junior developer. It can’t replace any advisory job because it lies instead of erroring.

As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

The main value you get from a programmer is that they understand what they are doing and can take responsibility for what they are developing. Very junior developers are hired mostly as an investment, so that they become productive and stay with the company. AI might help with some of this but doesn’t really replace anyone in the process.

For support, there is massive value in talking to another human and having them try to solve your issue. LLMs don’t feel much better than the hardcoded, menu-style automated support that already exists.

I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs




I agree with most of your points, but not this one:

>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs

No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real, tangible advantages. They are more successful than NFTs already.

I'm a huge AI skeptic, and I believe it's difficult to measure their usefulness while we're still in a hype bubble, but I am using them every day. They don't write my prod code because they're too unreliable and sloppy, but for one-shot scripts of <100 lines they have saved me hours, and they've entirely replaced Stack Overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. I can't say the same for NFTs.


LLMs are somewhat useful, compared to NFTs and other blockchain bullshit, which are almost completely useless. It will be interesting to see what happens when the money from the investment bubble dries up and the real costs need to be paid by the users.


Bitcoin has been bootstrapped into usefulness through the durability of the belief many have had in it, especially as people begin to lose confidence in the dollar and US equities.


> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans, each saying with 100% certainty that theirs is the correct one.

It's bad enough that the AI marketers push AI as some all-knowing, correct oracle, but when the anti-AI people use that as the basis for their arguments, it's somehow more annoying.

Trust but verify is still a good rule here, no matter the source, human or otherwise.


If a junior developer lies about something important, they can be fired and you can try to find someone else who wouldn't do the same thing. At the very least you could warn the person not to lie again or they're gone. It's not clear that you can do the same thing with an LLM as they don't know they've lied.


You're falling into the trap of framing it as "correct" or "lied", though. Being wrong isn't lying.


Inventing answers is lying

If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong", that is a fabrication. It is a lie.


Lying requires intent to deceive.

If you ask me to remove whitespace from a string in Python and I mistakenly tell you to use ".trim()" (the Java method, a mistake I've annoyingly made too many times) instead of ".strip()", am I lying to you?

It's not a lie. It's just wrong.
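
For concreteness, a minimal sketch of that mix-up in Python (the example string is just an illustration):

    s = "  hello  "
    s.strip()   # correct Python method: returns "hello"
    s.trim()    # the Java habit: raises AttributeError, since str has no trim()

The second call fails loudly rather than deceiving anyone; it's a mistake, not a lie.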


You are correct that there is a difference between lying and making a mistake, however

> Lying requires intent to deceive

LLMs do have an intent to deceive, built in!

They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises

I agree that for a human, mixing up ".trim()" and ".strip()" is an honest mistake.

In the example I gave, you are asking for a function that does not exist. If it invents a function because it is designed to never say "you are wrong, that doesn't exist" or "I don't know the answer", that seems to me to qualify as "intent to deceive", because it is designed to invent something rather than give you a negative-sounding answer.


An LLM is not "just wrong" either. It's just bullshit.

The bullshitter doesn't care whether what they say is true or false, right or wrong. They just put out more bullshit.


OK, perhaps lying is the wrong word, but someone who repeatedly fabricated information would be treated the same as a liar in most contexts.


Which is interesting, because AI doesn't have intent and is therefore incapable of lying.


Of course it has intent. It was literally designed to never say "I don't know" and to instead give whatever string of words best fits the pattern. That's intent. It was designed with the intent to deceive rather than to offer any confidence levels or caveats. That's lying.


It’s more like bullshitting, which is in between the two. Basically, like that guy who always has some story to tell. He’s not lying as such, he’s just waffling.


Dieticians, investment advisors, and accountants are usually licensed professionals who face consequences for misconduct. LLMs don’t have malpractice insurance.


Good luck getting any of that to happen. All that does is raise the barrier for proof and consequence, because they've got accreditation and "licensing bodies" with their own opaque rules and processes. Accreditation makes it seem like these people are held to some amazing standard with harsh penalties if they don't comply, but really they just add layers of abstraction and places for incompetence, malice and power-tripping to hide.

E.g. the next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation, good luck holding them accountable through any disciplinary body in a way that has real consequences.


In that scenario the lawyer would be suspended for at least 6 months or even lose their license...

The CA Bar disciplined over a hundred lawyers (including some now former lawyers) last month alone.

The client doesn't need another lawyer to make this happen. They just need to complain to the licensing body (the State Bar Association).


>E.g. the next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation, good luck holding them accountable through any disciplinary body in a way that has real consequences.

Are you talking about a personal experience? I'd think a malpractice claim and the state bar would help you out. Did you even try? Are you just making something up?


" since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one."

Because, as Brad Pilon of intermittent fasting fame repeatedly stresses, "All diets work."*

* Once there is an energy deficit.


OT but funny: I saw a YouTube video with a lot of before-and-after photos where the coach guarantees results in 60 days. It was entirely focused on avoiding stress and strongly advised against caloric restriction. Something like: sleep is many times more important than exercise, and exercise is many times more important than diet.

From what I know, dieticians don't design exercise plans. If that's true, the LLM has better odds of figuring it out.


I would not say all of them, but in general I agree: there is not one correct one but many correct ones.


Do people actually behave this way with you? If someone presents a plan confidently without explaining why, I tend to trust them less (even people like doctors, who just happen to start with a very high reputation). In my experience people are very forthcoming with things they don't know.


Someone can present a plan, explain that plan, and be completely wrong.

People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that gets them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.


And if someone presents a plan, explains that plan, and is completely wrong repeatedly and often, in a way that makes it seem like they don’t even have any concept whatsoever of what they may have done wrong, wouldn’t you start to consider at some point that maybe this person is not a reliable source of information?


> Trust but verify is still a good rule here

I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.

The same applies to almost anything. How can I verify dietary studies? Just being able to read scientific studies and spot flaws requires knowledge that maybe only 1 in 10,000 people have, if not fewer.


Ironic, but keep asking LLMs until you can connect their answers to your "known truth" knowledge. For many topics I spend ~15-60 minutes asking for details, questioning any contradictory answers, and verifying assumptions until I get what feels like the right answer. I've talked with them about topics ranging from democracy and the economy to irrational-number proofs and understanding rainbows.


I trust cutting edge models now far more than the ones from a few years ago.

People talk a lot about false info and hallucinations, which the models do in fact produce, but the examples of this have become more and more far-fetched for SOTA models. It seems that now, in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it is basically covered by only a handful of papers in the training set.

However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.

As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.


I get wrong answers for basic things like how to fill out a government form or the relationship between two distant historical figures, things I'm actually working on directly and not some "trick" to get the machine to screw up. They get a lot right a lot of the time, but they're inherently untrustworthy because they sometimes get things subtly or catastrophically wrong, and without some kind of consistent confidence scoring there's no way to tell the difference without further research, almost necessarily with some other tool, because LLMs like to hold onto their lies and it's very difficult to convince them to discard a hallucination.


> It can’t replace a human for support,

It doesn't have to. It can replace having no support at all.

It would be possible to run a helpdesk for a free product. It might suck but it could be great if you are stuck.

Support call centers usually work in layers: someone who started 2 days ago and knows nothing picks up the phone, then forwards the call to someone who managed to survive for 3 weeks. Eventually you get to talk to someone who knows something but can't make decisions.

It might take 45 minutes before you even get to talk to the first helper. Before you penetrate deep enough to get real support you might lose an hour or two. The LLM can answer instantly and do better than tortured minimum-wage employees who know nothing.

There may be large waves of similar questions if someone or something screwed up. The LLM can handle that.

The really exciting stuff will come when the LLM can instantly read your account history and have a good idea what you want to ask before you do. It can answer questions you didn't think to ask.

This is especially great if you've had countless email exchanges with miles of text repeating the same thing over and over. The employee can't read 50 pages just to get up to speed on the issue, and even if they had the time, you don't, so you explain for the 5th time that delivery should go to address B, not A, and on these days between these times, unless they are type FOO orders.

Stuff that would be obvious and easy if they made actual money.


NFTs never had any real value. They were just speculation, hoping some bigger sucker would come after you.

LLMs create real value. I save a bunch of time coding with an LLM vs without one. Is it perfect? No, but it does not have to be perfect to still create a lot of value.

Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet: in the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.


NFTs weren't a trillion-dollar black hole that has yet to come close to providing value anywhere near that investment level. Come back when AI companies are actually profitable. Until then, LLM AI value is negative, and if the companies can't turn that around, they'll be as dead as NFTs and you won't even get the heavily subsidized, company-killing free or cheap features you think are solid.


> It can’t replace a human for support

But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.

I suspect what you're missing here is that LLMs aren't replacing some Platonic ideal of customer support. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.

> and it will blow up like NFTs

We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.


> I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs

Can't disagree more (on LLMs; NFTs are of course rubbish). I'm using them for all kinds of coding tasks with good success, and it's getting better every week. I've also created a lot of documents using them, describing APIs, architecture, processes and much more.

Lately I've been working on creating an MCP for an internal, mid-sized API of a task management suite that manages a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project; now I'm pretty sure it will handle most personal computing tasks in the future.


> It can’t replace a human for support

It doesn’t wholly replace the need for human support agents, but if it can adequately handle a substantial number of tickets, that’s enough to reduce headcount.

A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually take action on a lot of issues that require it.

The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.


I already know of at least one company that's pivoted to using a mix of AI and off-shoring for their support, as well as some other functions; that's underway, with results unclear, aside from layoffs that took place. There was also a brouhaha a year or two ago when a mental health advocacy group tried using AI to replace their support team... it did not go as planned when it suggested self-harm to some users.


LLM is already very useful for a lot of tasks. NFT and most other crypto has never been useful for anything other than speculation.


> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

Have you somehow managed to avoid the last several decades of human-sourced dieting advice?



