True, and in the case of Solomonoff Induction, incompleteness manifests in the calculation of the Kolmogorov complexity used to order programs. But what incompleteness actually proves is that there is no single algorithm for truth; a collection of algorithms can compensate for each other's weaknesses in many ways. E.g., while no single algorithm can solve the halting problem, different algorithms can each cover cases for which the others fail to produce a definitive halting result (toy sketch below).
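To make that concrete, here's a minimal sketch (my own illustration, nothing from the linked article): two deliberately narrow halting checkers for Python source, each deciding only a sliver of programs, combined into a portfolio that decides strictly more than either alone. The checker names and AST heuristics are invented for illustration.

    import ast

    def no_loops_or_calls(src):
        # Halts if the source contains no loops and no calls at all:
        # straight-line code always terminates.
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, (ast.While, ast.For, ast.Call)):
                return None  # out of scope for this checker
        return True

    def obvious_infinite_loop(src):
        # Reports non-halting for a `while True:` containing no
        # break/return/raise anywhere in its body.
        for node in ast.walk(ast.parse(src)):
            if (isinstance(node, ast.While)
                    and isinstance(node.test, ast.Constant)
                    and node.test.value is True
                    and not any(isinstance(n, (ast.Break, ast.Return, ast.Raise))
                                for n in ast.walk(node))):
                return False
        return None

    def portfolio_halts(src):
        # Ask each partial analyzer in turn; None means "still undecided".
        for checker in (no_loops_or_calls, obvious_infinite_loop):
            verdict = checker(src)
            if verdict is not None:
                return verdict
        return None

    print(portfolio_halts("x = 1 + 2"))           # True: first checker
    print(portfolio_halts("while True: pass"))    # False: second checker
    print(portfolio_halts("while input(): pass")) # None: neither can decide

No finite portfolio like this decides every program, of course; undecidability says the "None" bucket can never be emptied, not that it can't be shrunk.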
I'm not convinced you can't produce a pretty robust system that yields a pretty darn good approximation of truth in the limit. Incompleteness also rears its head in type inference for programming languages, but the cases where it fails are typically not programs of any interest, or not programs a human would understand anyway. I think the relevance of incompleteness elsewhere is sometimes overblown in exactly this way.
The limitations of “truth knowing” using an autoregressive transformer are much more pressing than anything implied by Gödel’s theorem. This is like appealing to a result from quantum physics to explain why a car with no wheels isn’t going to drive anywhere.
I hate it when this theorem comes up as a "gotcha" in discussions of LLMs: "but there exist true statements without a proof! So LLMs can never be perfect! QED." You can apply identical logic to humans. It adds nothing to the discussion.
That Wikipedia article is annoyingly scant on what assumptions are needed for the philosophical conclusions of Solomonoff's method to hold. (For that matter, it's also scant on the actual mathematical statements.) As far as I can tell, it's something like: "If there exists some algorithm that always generates true predictions (or perhaps some sequence of algorithms whose predictions are within some epsilon of error?), then you can learn that algorithm in the limit, by enumerating all algorithms in order of length and filtering them down to the ones that predict your current set of observations."
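For intuition, here's a toy, computable caricature of that enumerate-and-filter scheme (my own sketch; real Solomonoff induction enumerates all programs for a universal Turing machine, weights them by 2^-length, and is uncomputable). The hypothesis list and the description lengths below are made up for illustration:

    HYPOTHESES = [
        # (description length in bits, generator: index -> bit)
        (3, lambda i: 0),            # "all zeros"
        (3, lambda i: 1),            # "all ones"
        (5, lambda i: i % 2),        # "alternate 0,1,0,1,..."
        (8, lambda i: (i * i) % 2),  # same sequence, longer description
    ]

    def predict_next(obs):
        # Keep only hypotheses consistent with every observed bit, weight
        # each survivor by 2^-length, and report the posterior P(next = 1).
        weights = {0: 0.0, 1: 0.0}
        for length, gen in HYPOTHESES:
            if all(gen(i) == bit for i, bit in enumerate(obs)):
                weights[gen(len(obs))] += 2.0 ** -length
        total = weights[0] + weights[1]
        return None if total == 0 else weights[1] / total

    print(predict_next([0]))        # ~0.22: shortest fit ("all zeros") dominates,
                                    # but the alternating hypotheses keep some mass
    print(predict_next([0, 1, 0]))  # 1.0: only the alternating hypotheses survive

The 2^-length weighting is the Occam prior doing the work: after one observation several hypotheses survive, but the shortest consistent one dominates the prediction until the data rules it out.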
But as mentioned, it's uncomputable, and the relative lack of success of AIXI-based approaches suggests it isn't even as approximable in practice as advertised. Also, if there is no single finite algorithm for Truth, Solomonoff's method will never get you all the way there.
> "computability and completeness are mutually exclusive: any complete theory must be uncomputable."
This seems to be baked into our reality/universe. So many duals like this. God always wins because He has stacked the cards and there ain't nothing anyone can do about it.
Isn't there?
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...