



> when you drill down to first principles

It is a trap to assume that a first-principles perspective is sufficient. Like, if I told you before 1980 to "iterate f(z) = z^2 + c", there is no way you would guess that fractals emerge. Same with the rules of Conway's Game of Life: seeing the code, you won't guess that it produces gliders and glider guns.
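For what it's worth, the whole rule fits in a few lines. Here's a minimal sketch in Python (the escape radius of 2 and the iteration cap are my own conventional choices, not part of the point above):

    def escapes(c, max_iter=1000):
        # Iterate f(z) = z^2 + c from z = 0; return the step at which
        # |z| exceeds 2 (a standard escape test), or None if it never does.
        z = 0
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return n
        return None

    print(escapes(1))    # escapes quickly: z goes 0, 1, 2, 5, 26, ...
    print(escapes(-1))   # never escapes (cycles 0, -1, 0, -1, ...), prints None

Nothing in those few lines hints at the infinitely detailed boundary you get when you color points by their escape step.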

My point is that recursion creates its own inner opacity; it is irreducible, so the iteration code alone is not enough to know a recursive system's behavior. It is a becoming, not a static thing. That's also why we have the halting problem: recursion.

Reasoning is a recursive process too; you can't understand it by analyzing its parts.


What's with this dingus coming at me with recursion? Hey, that's my move, bro. Seriously though, everything you said is totally unnecessary. All I'm saying is that, to refute the claim that LLMs can't reason, you ought to have your own coherent definition of the word so the claim can be falsified. Recursion is a coherent notion. If you want to include it in the definition, have at it. That actually supports my broader view of reasoning.

Your tone is a bit odd, but I agree with you on this 100%.

Without properly defining what thinking is and what it is not, you can't discuss its properties, let alone discuss which entities manifest it and which don't.


On the other hand: nobody can formally refute that a stone is reasoning/thinking.

Lack of definition seems to be an ideal substrate for marketing and hype.


My tone is odd? Never heard that before in my life. Definitely wasn't a problem for me growing up.

Equivalent: "I'm so sick of these [atheist] cretins proclaiming that [god doesn't exist], and when you drill down to first principles, these same people do not have a rigorous definition of [god] in the first place!"

That's nonsense, because the people obligated to furnish a "rigorous definition" are the people who make the positive claim that something specific is happening.

Also, the extraordinary claims are the ones that require extraordinary evidence, not the other way around.


Your god analogy is clumsy in this case. We aren't talking about something fantastical here. Reasoning is not difficult to define. We can go down that road if you'd like. Rather, the problem is, once you do define it, you will quickly find that LLMs are capable of it. And that makes human exceptionalists a bit uncomfortable.

"It's just statistics!", said the statistical-search survival-selected statistically-driven-learning biological Rube Goldberg contraption, between methane emissions. Imagining that a gradient learning algorithm, optimizing its statistical performance, is the same thing as only learning statistical relationships.

Incontrovertibly demonstrating a dramatic failure in its, and many of its kind's, ability to reason.


Reasoning exists on a spectrum, not as a binary property. I'm not claiming that LLMs reason identically to humans in all contexts.

You act as if statistical processes can’t ever scale into reasoning, despite the fact that humans themselves are gradient-trained statistical learners over evolutionary and developmental timescales.


> cretins proclaiming that LLMs aren't truly capable of reasoning

> Reasoning is not difficult to define

> Reasoning exists on a spectrum

> statistical processes [can] scale into reasoning

It seems like quite a descent here, starting with the lofty heights of condemning skeptics as "cretins" and insisting the definition is easy... down to what sounds like the introduction to a flavor of panpsychism [0], where even water flowing downhill is a "statistical process" which at enough scale would be "reasoning".

I don't think that's a faithful match to what other people mean [1] when they argue LLMs don't "reason."

[0] https://en.wikipedia.org/wiki/Panpsychism

[1] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy


You think that because reasoning exists on a spectrum, everything is conscious? You sound a bit too desperate to prove me wrong. Also, you edited your comment post hoc to add sources. Who cares?

I think you might want to re-read what I wrote.

I very strongly agree with you. :)


Well, at least someone does.

> Rather, the problem is, once you do define it, you will quickly find that LLMs are capable of it.

That's really not what's happening, though. People who claim LLMs can do X and Y often don't even understand how LLMs work. The opposite is also true. They just open a prompt, get an output, and shout "Eureka". Of course not everyone is like this, but the majority are. It's similar to what we think about thinking itself. You read these comments and everyone is an expert on the human brain and how it works. It's fascinating.


Notice how nearly every comment is dancing around the issue and nitpicking instead of just owning up to whether or not they think LLMs are capable of reasoning.

I wish that were the case. I see that most people, who believe themselves to be experts, have already made up their minds and are pretty sure about it.

I personally believe that not being sure about something, especially on a topic as complicated as this one, remaining open to different possibilities, and keeping a bit of skepticism, is healthy.


There's skepticism and there's talking around the actual subject. The latter contributes nothing.

Hypothetically.

Imagine a simple but humongous hash table that maps every possible prompt of 20,000 letters to the most appropriate answer at the present time.

(How? Let's say it was outsourced to East Asia, or to aliens…)

Would you say such a mechanism is doing reasoning?
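To make the thought experiment concrete, such a machine would be nothing but a lookup. A toy sketch in Python, where the table contents are obviously a stand-in for the hypothetical exhaustive one:

    # Stand-in for the hypothetical exhaustive table: prompt -> "most appropriate answer".
    TABLE = {
        "what is 2 + 2?": "4",
        "is the moon made of cheese?": "no",
    }

    def answer(prompt):
        # Pure retrieval: nothing is computed from the prompt beyond
        # trimming, lowercasing, and truncating it to form the lookup key.
        key = prompt.strip().lower()[:20000]
        return TABLE.get(key, "(no entry)")

    print(answer("What is 2 + 2?"))  # -> "4", though no arithmetic was ever performed

The function itself only retrieves; all the work, whatever you want to call it, went into building the table.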


Let's outsource it to your mother's house instead.

Let's do that. Still, would this hash table be doing reasoning?

The premise of your question is incredibly dumb.

Humans are exceptional though.

Oh my sweet summer child



