That's a very fair and critical point. You're right that we can't change the fundamental, probabilistic nature of LLMs themselves.
But that makes me wonder if the goal should be reframed. Instead of trying to eliminate errors, what if we could change their nature?
The interesting hypothesis to explore, then, is whether a language's grammar can be designed to make an LLM's probabilistic errors fail loudly as obvious syntactic errors, rather than failing silently as subtle, hard-to-spot semantic bugs.
For instance, if a language demands extreme explicitness and has no default behaviors, an LLM's failure to generate the required explicit token becomes a simple compile-time error, not a runtime surprise.
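To make that concrete, here's a minimal sketch of my own, using Rust as a stand-in for that kind of maximally explicit, no-defaults grammar (the enum and function names are invented purely for illustration):

```rust
// Hypothetical example: an enum with no implicit fallback behavior.
enum PaymentState {
    Pending,
    Settled,
    Refunded,
}

fn describe(state: PaymentState) -> &'static str {
    // If an LLM drops the Refunded arm while generating this match, the
    // compiler rejects the program ("non-exhaustive patterns"), so the
    // probabilistic error surfaces loudly at compile time. In a grammar
    // with an implicit default branch, the same omission would quietly
    // return the wrong string at runtime instead.
    match state {
        PaymentState::Pending => "waiting for settlement",
        PaymentState::Settled => "settled",
        PaymentState::Refunded => "refunded",
    }
}

fn main() {
    println!("{}", describe(PaymentState::Settled));
}
```

The specific language doesn't matter here; the point is that exhaustiveness checks and mandatory explicitness turn a dropped token into a hard stop instead of a quiet wrong answer.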
So while we can't "fix" the LLM's core, maybe we can design a grammar that acts as a much safer "harness" for its output.
I would say we already have this language, too: machine code, or its cousin, assembler. The processor instructions that all software ultimately reduces to are completely explicit and have no default values.
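To give a feel for how explicit that level really is, here's a rough sketch using Rust's inline assembly purely as a host (x86-64 only, and the function name is just illustrative): even a single addition has to name its operands and register classes outright.

```rust
// x86-64 only: one addition, with every operand spelled out by hand.
#[cfg(target_arch = "x86_64")]
use std::arch::asm;

#[cfg(target_arch = "x86_64")]
fn add_explicit(a: u64, b: u64) -> u64 {
    let result: u64;
    unsafe {
        // "add {0}, {1}" adds the second register into the first. The
        // registers involved and the direction of data flow all have to
        // be stated; the instruction fills in nothing for you.
        asm!(
            "add {0}, {1}",
            inout(reg) a => result,
            in(reg) b,
        );
    }
    result
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("{}", add_explicit(2, 3)); // prints 5
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```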
The problem is that people don't like writing assembler, which is how we got Fortran in the first place.
The fundamental issue, then, is with the human language side of things, not the programming language side. The LLM is useful because it understands regular English, like "What is the difference between 'let' and 'const' in JS?", which is not something that can be expressed in a programming language.
To get the useful feature we want, natural language understanding, we have to accept the unreliable and predictive nature of the entire technique.