
Why?


The limitations of “truth knowing” using an autoregressive transformer are much more pressing than anything implied by Gödel’s theorem. This is like appealing to a result from quantum physics to explain why a car with no wheels isn’t going to drive anywhere.

I hate when this theorem comes up as this sort of "gotcha" in discussions of LLMs: "but there exist true statements without a proof! So LLMs can never be perfect! QED." You can apply identical logic to humans. It adds nothing to the discussion.


Ah, understood, yes, that is a bit ridiculous.



