It's trivial to demonstrate that it takes only a tiny LLM plus a loop to have a Turing-complete system. It follows that it is utterly crazy to think that being "a model designed to predict sequences of tokens" puts much of a limitation on what an LLM can achieve: any Turing-complete system can by definition simulate any other. To the extent LLMs are limited, they are limited by training and compute.

But these endless claims that the fact they're "just" predicting tokens means something about their computational power are based on flawed assumptions.
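
A minimal sketch of the construction in Python, assuming a hypothetical llm() completion call (the prompt format and the parse() helper are illustrative, not any particular API): the transition rules live in the prompt, the conversation text is the tape, and the loop feeds the model's own output back in.

    # Sketch only: llm() is a stand-in for any deterministic
    # (temperature-zero) completion call.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in a model here")

    def parse(reply: str):
        # expects the model to answer with "STATE: x" and "TAPE: y" lines
        fields = dict(line.split(": ", 1) for line in reply.splitlines()
                      if ": " in line)
        return fields["STATE"], fields["TAPE"]

    RULES = ("You are simulating a Turing machine. Given STATE and TAPE "
             "with the head position marked [x], apply the transition "
             "table and reply with the new STATE and TAPE.\n"
             "<transition table goes here>")

    state, tape = "A", "[0]000"
    while state != "HALT":
        reply = llm(RULES + f"\nSTATE: {state}\nTAPE: {tape}")
        state, tape = parse(reply)  # the loop is what turns one-step
                                    # prediction into open-ended computation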


The fact they're Turing complete isn't really getting at the heart of the problem. Python is Turing complete, and calling Python "intelligent" would be a category error.

It is getting at the heart of the problem when the claim being made is that "no matter how advanced the model" they can't be 'much more than just "really good autocomplete."'

Given that they are Turing complete when you put a loop around them, that claim is objectively false.


I think it'd even be easier to coerce standard autocomplete into demonstrating Turing completeness. And without burning millions of dollars of GPU hours on training it.

Language models with a loop absolutely aren't Turing complete. Assuming the model can even follow your instructions, the output is probabilistic, so in the limit you can guarantee failure. In reality, though, there are lots of instructions LLMs fail to follow. You don't notice it as much when you're using them normally, but if you want to talk about computation you'll run into trivial failures all the time.

The last time I had this discussion with people, I pointed out how LLMs consistently and completely fail at applying grammar production rules (obviously you tell them to apply the rules to words and not single letters, so you don't fight with the embedding).
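
To make the test concrete, here is roughly what applying word-level production rules means, as reference code a model's one-iteration output can be checked against (the specific rules here are illustrative, not the ones from that discussion):

    # One left-to-right pass of word-level rewrite rules.
    RULES = {("the", "cat"): ("a", "dog"),   # illustrative productions
             ("sat",): ("stood",)}

    def step(words):
        out, i = [], 0
        while i < len(words):
            for lhs, rhs in RULES.items():
                if tuple(words[i:i + len(lhs)]) == lhs:
                    out.extend(rhs)
                    i += len(lhs)
                    break
            else:
                out.append(words[i])
                i += 1
        return out

    print(step("the cat sat down".split()))  # ['a', 'dog', 'stood', 'down']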

LLMs do some amazing stuff but at the end of the day:

1) They're just language models. While many things can be described with languages, there are some things that idea doesn't capture: namely, languages that aren't modeled, which is the whole point of a Turing machine.

2) They're not human, and the value is always going to come from human socialization.


> Language models with a loop absolutely aren't Turing complete.

They absolutely are. It's trivial to test and verify: you can tell one to act as a suitably small Turing machine and give it instructions for manipulating the conversation as "the tape".

Anything else would be absolutely astounding given how simple it is to implement a minimal 2-state 3-symbol Turing machine.
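
For reference, here is how small such a machine is in plain Python. The table below is the 2-state 2-symbol busy beaver, chosen because its six-step run is easy to check by hand; the 2-state 3-symbol machine mentioned above is the same shape with a different table.

    from collections import defaultdict

    # DELTA maps (state, symbol) -> (write, move, next_state).
    DELTA = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
             ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT")}

    tape, head, state = defaultdict(int), 0, "A"
    while state != "HALT":
        write, move, state = DELTA[(state, tape[head])]
        tape[head] = write
        head += move

    print(sorted(tape.items()))  # [(-2, 1), (-1, 1), (0, 1), (1, 1)]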

> Assuming the model can even follow your instructions, the output is probabilistic, so in the limit you can guarantee failure.

The output is deterministic if you set the temperature to zero, at which point it is absolutely trivial to verify the correct output for each of the possible states of a minimal Turing machine.
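
A sketch of that verification, assuming an OpenAI-style chat API (the client, model name, and prompt format here are placeholders for whatever model you're testing): at temperature 0 you can enumerate every (state, symbol) pair of the machine above and check the single-step answer against the reference table.

    from openai import OpenAI  # any temperature-zero completion API works

    client = OpenAI()

    def llm_step(state, symbol):
        prompt = (f"Transition table: {DELTA}\n"
                  f"Current state: {state}, symbol under head: {symbol}\n"
                  "Reply with exactly: write,move,next_state")
        r = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0)
        return r.choices[0].message.content.strip()

    for (state, symbol), expected in DELTA.items():
        print((state, symbol), "expected", expected,
              "got", llm_step(state, symbol))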


If you'd care to actually implement what you describe, I'm sure the resulting blog post would make a popular submission here.

It's not very interesting: it's basically showing it can run one step of a very trivial state machine, then adding a loop to let it keep running, with the conversation acting as the tape I/O.

It's pretty hard to make any kind of complex system that can't be coerced into being Turing complete once you add iteration.
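
Cyclic tag systems are the classic illustration: the per-step rule is almost nothing, yet iterated they are known to be Turing complete (they're the gadget in Cook's rule 110 universality proof). A sketch, with an arbitrary example word and productions:

    # Cyclic tag system: pop the leftmost bit; if it was 1, append the
    # current production; the production list cycles every step.
    productions = ["011", "10", "101"]   # arbitrary example
    word = "1"
    for step in range(8):
        print(step, word)
        head, word = word[0], word[1:]
        if head == "1":
            word += productions[step % len(productions)]
        if not word:
            break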


Seriously, get any instruct-tuned language model and try to do one iteration with grammar production rules. It's a coin flip at best whether they get it right.

I have tried that many times and had good results.
