
> One of the main reasons LLMs are unintuitive and difficult to use is that you have to learn how to get useful results out of fundamentally unreliable technology.

Name literally any other technology that works this way.

> Guide dogs, sniffer dogs, falconry...

Guide dogs are an imperfect solution to an actual problem: some people's inability to see. And dogs respond to training far more reliably than LLMs respond to prompts.

Sniffer dogs are at least in part bullshit: many studies have shown that they respond to subtle cues from their handlers far more reliably than to anything they actually smell. And the best part is that they also ruin lives (completely outside their own control, mind you) by falsely detecting drugs in cars that merely look the way the officer handling them thinks a car with drugs inside looks.

And falconry is a hobby.




"Name literally any other technology that works this way"

Since you don't like my animal examples, how about power tools? Chainsaws, table saws, lathes... all examples of tools where you have to learn how to use them before they'll be useful to you.

(My inability to come up with an analogy you find convincing shouldn't invalidate my claim that "LLMs are unreliable technology that is still useful if you learn how to work with it" - maybe this is the first time that's ever been true for an unreliable technology, though I find that doubtful.)


The correct name for unreliable power tools is "trash".


which happens to be the correct name for A"I" too


> Name literally any other technology that works this way.

The internet for one.

Not the internet itself (although it certainly can be unreliable), but rather the information on it.

Which I think is more relevant to the argument anyway, as LLMs do in fact reliably function exactly the way they were built to.

Information on the internet is inherently unreliable. It’s only when you consider externalities (like the reputation of the source) that its information can then be made “reliable”.

Information that comes out of LLMs is inherently unreliable. It’s only through externalities (such as online research) that its information can be made reliable.

Unless you can invent a truth machine that somehow can tell truth from fiction, I don’t see either of these things becoming reliable, stand-alone sources of information.


> Name literally any other technology that works this way.

Probabilistic prime number tests.

I'm being slightly facetious. Such tests differ from LLMs in the crucial respect that we can quantify their probability of failure. And personally I'm quite skeptical of LLMs myself. Nevertheless, there are techniques that can help us use unreliable tools in reliable ways.
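
For the curious, here is a minimal sketch of one such test, Miller-Rabin, in Python (the function name and default round count are my own choices, not anything from the comment above). A composite number fools any single round with probability at most 1/4, so the false-positive rate after k independent rounds is at most 4^-k: a failure probability we can actually quantify, unlike an LLM's.

    import random

    def is_probable_prime(n, rounds=40):
        # Miller-Rabin: a composite n passes any single round with
        # probability at most 1/4, so the overall false-positive
        # rate is at most 4**-rounds -- a quantifiable error bound.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness: n is definitely composite
        return True  # "probably prime": error probability <= 4**-rounds

is_probable_prime(2**127 - 1) returns True in milliseconds, and the bound tells you exactly how much to trust the answer: that quantifiability is precisely the respect in which such tests differ from LLMs.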


> Name literally any other technology that works this way.

How about people? They make mistakes all the time, disobey instructions, don’t show up to work, occasionally attempt to embezzle or sabotage their employers. Yet we manage to build huge successful companies out of them.



