Your god analogy is clumsy in this case. We aren't talking about something fantastical here. Reasoning is not difficult to define; we can go down that road if you'd like. Rather, the problem is that once you do define it, you will quickly find that LLMs are capable of it. And that makes human exceptionalists a bit uncomfortable.
"It's just statistics!", said the statistical-search survival-selected statistically-driven-learning biological Rube Goldberg contraption, between methane emissions. Imagining that a gradient learning algorithm, optimizing its statistical performance, is the same thing as only learning statistical relationships.
Incontrovertibly demonstrating a dramatic failure in its, and many of its kind's, ability to reason.
Reasoning exists on a spectrum, not as a binary property. I'm not claiming that LLMs reason identically to humans in all contexts.
You act as if statistical processes can’t ever scale into reasoning, despite the fact that humans themselves are gradient-trained statistical learners over evolutionary and developmental timescales.
> cretins proclaiming that LLMs aren't truly capable of reasoning
> Reasoning is not difficult to define
> Reasoning exists on a spectrum
> statistical processes [can] scale into reasoning
It seems like quite a descent here, starting with the lofty heights of condemning skeptics as "cretins" and insisting the definition is easy... down to what sounds like the introduction to a flavor of panpsychism [0], where even water flowing downhill is a "statistical process" which at enough scale would be "reasoning".
I don't think that's a faithful match to what other people mean [1] when they argue LLMs don't "reason."
You think that because reasoning exists on a spectrum, everything must be conscious? You sound a bit too desperate to prove me wrong. Also, you edited your comment post hoc to add sources. Who cares?
> Rather, the problem is, once you do define it, you will quickly find that LLMs are capable of it.
That’s really not what’s happening, though. People who claim LLMs can do X and Y often don’t even understand how LLMs work. The opposite is also true. They just open a prompt, get an output, and shout "Eureka!" Of course not everyone is like this, but the majority are. It’s similar to how we think about thinking itself: you read these comments and everyone is an expert on the human brain and how it works. It’s fascinating.
Notice how nearly every comment is dancing around the issue and nitpicking instead of owning up to whether or not they think LLMs are capable of reasoning.
I wish that were the case. I see that most people who believe themselves to be experts have already made up their minds and are quite sure about it.
I personally believe that not being sure about something, especially on a topic as complicated as this one, remaining open to different possibilities, and keeping a bit of skepticism, is healthy.