
Right, anthropomorphization is the biggest source of confusion here. An LLM gives you a perfect answer to a complex question and you think wow, it really "understood" my question.

But no! It doesn't understand, it doesn't reason, these are concepts wholly absent from its fundamental design. It can do really cool things despite the fact that it's essentially just a text generator. But there's a ceiling to what can be accomplished with that approach.



It's presented as a feature when GPT provides a correct answer.

It's presented as a limitation when GPT provides an incorrect answer.

Both of these behaviors are literally the same. We are sorting them into the subjective categories of "right" and "wrong" after the fact.

GPT is fundamentally incapable of modeling that difference. A "right answer" is every bit as valid as a "wrong answer". The two are equivalent in what GPT is modeling.

Lies are a valid feature of language. They are shaped the same as truths.

The only way to resolve this problem is brute force: provide every unique construction of a question, and the corresponding correct answer to that construction.
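As a rough illustration of that point (a sketch using the open GPT-2 weights via the Hugging Face transformers library, not GPT itself): the model hands back a probability for each continuation and nothing more; there is no truth flag anywhere in its output.

    # Sketch: score a true and a false statement with GPT-2.
    # Both come back as plain log-probabilities; nothing in the model's
    # output marks one as "right" and the other as "wrong".
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def avg_logprob(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        return -out.loss.item()  # mean log-probability per token

    print(avg_logprob("The capital of France is Paris."))
    print(avg_logprob("The capital of France is London."))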


Not entirely. It's modeling a completion in a given context. That language model "understands" that if one party stops speaking, the other party generally starts, etc. It also "understands" that if someone says something 'wrong' the other party often mentions it, which makes the first party respond thusly, and so forth.

If you ask it what the outcome of a lie is on the conversation it can generally answer. If you ask it for a sample conversation where someone is factually incorrect, or lying, and caught out, it can generate it.

If you give it a fact and ask it to lie about that fact, it will.


It doesn't "understand" those interactions: it models them to be repeated later.

But the way it chooses what interaction to repeat is not by logical decision: it's by semantic distance.

Following semantic distance usually leads to the correct path, but sometimes the pattern is ambiguous.
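For what it's worth, "semantic distance" here is roughly the kind of thing you can compute with sentence embeddings. A toy sketch (assuming the sentence-transformers package and its all-MiniLM-L6-v2 model, which is not what GPT uses internally, just the general idea): close meanings land on nearby vectors, and truth doesn't enter into it.

    # Toy illustration of "semantic distance" as embedding similarity.
    # Not GPT's internal mechanism; just the general idea that nearby
    # meanings map close together, whether or not they are true.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    sentences = [
        "The heart pumps blood through the body.",
        "The heart circulates blood around the body.",
        "The heart filters toxins out of the blood.",  # false, yet semantically close
    ]
    emb = model.encode(sentences, convert_to_tensor=True)
    print(util.cos_sim(emb[0], emb[1]))  # high similarity
    print(util.cos_sim(emb[0], emb[2]))  # also fairly high, despite being false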


I'd agree it doesn't understand anything, but I think it does "understand" things. And yes, it's a language model so semantic distance and other textual details are all it has to go by.

> not by logical decision

Almost entirely yes, but you can have it textually model a logical analysis and then check that model itself. It's not "doing logic", but it almost never fails simple exercises either.
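A sketch of that "write it out, then check it" pattern, where llm is a hypothetical callable standing in for whatever completion API you happen to use (not a real library call):

    # Two-pass pattern: generate the reasoning as text, then feed that text
    # back and ask the model to audit its own written-out steps.
    def self_checked_answer(llm, question: str) -> tuple[str, str]:
        analysis = llm(f"Reason step by step, then give your answer:\n{question}")
        review = llm(
            "Check the following reasoning line by line and point out "
            f"any step that does not follow:\n{analysis}"
        )
        return analysis, review

    # usage: analysis, review = self_checked_answer(my_llm, "If all bloops are razzies...")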

> Following semantic distance usually leads to the correct path, but sometimes the pattern is ambiguous.

Of course. But "a little knowledge is a dangerous thing" as well. Often even real knowledge and analysis leads to the wrong place. In both cases (with a junior human or an LLM as an assistant) you can model their basic processes and stack the information in such a way that their simple model will lead them to the correct place.

It may not know what a lie is, in the sense of having felt the need to hide the truth to avoid personal punishment, but it certainly "knows" what one is and how it shapes the conversation for the purposes of writing a lie, writing a response to a lie, detecting potential lies, etc.


How do we define "understanding" though? Because you can add to your prompt: "Do it step by step, showing how you come up with the solution" and it will simplify the original complex problem into multiple smaller pieces. It's clearly doing something.

For example, I just asked bing: What does the following expression equal? Show your work, step by step: 4(13+2)7-1(4+3+9)+7^(2(2+1))

And bing responded with:

"To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:

4(13+2)7-1(4+3+9)+7^(2(2+1))

Step 1: Simplify inside the parentheses first.

4(15)*7-1(16)+7^(6)

Step 2: Apply the exponents next.

420-16+117649

Step 3: Add or subtract from left to right.

118053

The final answer is 118053. I hope this helps. "

I wrote the silly expression randomly. I'm not saying it's "thinking" or "understanding", but I wouldn't expect a text generator to be able to break the problem down like that.
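(Reading the implicit multiplications as multiplications, the arithmetic does check out, for what it's worth:)

    # Same expression with the implicit multiplications written explicitly.
    print(4 * (13 + 2) * 7 - 1 * (4 + 3 + 9) + 7 ** (2 * (2 + 1)))  # 118053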


It's following an example story that it has read.

> To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:

It isn't actually thinking about any of that statement. That's just boilerplate that goes at the beginning of this story. It's what bing is used to seeing as a continuation of your prompt, "show your work, step by step".

It gets more complicated when it shows addition being correctly simplified, but that behavior is still present in the examples in its training corpus.

---

The thinking and understanding happened when the first person wrote the original story. It also happened when people provided examples of arithmetic expressions being simplified, though I suspect bing has some extra behavior inserted here.

All the thought and meaning people put into text gets organized into patterns. LLMs find a prompt within the patterns they've modeled and "continue" those patterns. We find meaning correctly organized in the result. That's the whole story.


Wolfram alpha can solve mathematical expressions like this as well, for what it's worth, and it's been around for a decent amount of time.



