
The divergent-semantics risk between the interpreter and the compiler is a really big deal. It's genuinely difficult to get a language implementation to behave exactly as specified, even when the spec is effectively "do the same as some other implementation". Treating "compiled code" as the interpreter specialised with respect to the program is a great solution to that, since bugs in the optimiser/partial-evaluator (they're much the same thing) are unlikely to be of the same class as bugs from independent implementations.
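A minimal sketch of that idea (the first Futamura projection), using a hypothetical three-instruction stack language: "compiling" is just partially evaluating the interpreter's loop against a fixed program, so the emitted code agrees with the interpreter by construction.

```python
# A tiny program in a hypothetical stack language: push 2, push 3, add.
PROGRAM = [("push", 2), ("push", 3), ("add", None)]

def interpret(program):
    """The general interpreter: dispatches on instructions at run time."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def specialise(program):
    """Partially evaluate the interpreter against a fixed program,
    emitting straight-line Python source: the 'compiled' code.
    Each branch below mirrors one branch of interpret()."""
    lines = ["def compiled():", "    stack = []"]
    for op, arg in program:
        if op == "push":
            lines.append(f"    stack.append({arg})")
        elif op == "add":
            lines.append("    b, a = stack.pop(), stack.pop()")
            lines.append("    stack.append(a + b)")
    lines.append("    return stack[-1]")
    return "\n".join(lines)

ns = {}
exec(specialise(PROGRAM), ns)
# The specialised code and the interpreter agree on this program.
assert ns["compiled"]() == interpret(PROGRAM)
```

Bugs in `specialise` would show up as wrong code generation, a different failure class from two teams independently re-implementing `interpret` and drifting apart on edge cases.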

Wasm is a really solid target for heroic compiler optimisations. It's relatively precisely specified, user-facing diagnostics live in some language front end out of sight, aliasing is limited, and the syscalls are finite with known semantics. Largely because it was designed by compiler people. You've picked a good target for this technique.

One problem with Wasm is that in practice there's not as much optimization work to do as you might expect, as the higher-level compiler which produced it already did a lot of the work. Of course you still need to lower the stack machine + locals to actual spills/fills/etc., but a chunk of the work is already done for you.
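That remaining lowering step can be sketched roughly like this. The example below is a toy, not any real Wasm backend: it maps Wasm-style stack operations and `local.get`/`local.set` onto a deliberately tiny pool of pseudo-registers, spilling to stack slots when the pool runs dry. Register names, slot layout, and the pseudo-assembly syntax are all invented for illustration.

```python
REGS = ["r0", "r1"]  # deliberately tiny so a spill is forced early

def lower(ops):
    """Lower a list of (opcode, arg) Wasm-like stack ops to
    pseudo-assembly, tracking the operand stack as a stack of
    *locations* (registers or spill slots)."""
    asm, stack, free = [], [], list(REGS)
    next_slot = 0

    def alloc():
        nonlocal next_slot
        if free:
            return free.pop(0)
        loc = f"[sp+{next_slot * 4}]"  # out of registers: spill
        next_slot += 1
        return loc

    def release(loc):
        if loc in REGS:
            free.insert(0, loc)

    for op, arg in ops:
        if op == "local.get":
            dst = alloc()
            asm.append(f"mov {dst}, local{arg}")
            stack.append(dst)
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            asm.append(f"add {a}, {b}")  # result stays in a's location
            release(b)
            stack.append(a)
        elif op == "local.set":
            src = stack.pop()
            asm.append(f"mov local{arg}, {src}")
            release(src)
    return asm

# local2 = local0 + local1 -- fits in registers, no spill needed.
print(lower([("local.get", 0), ("local.get", 1),
             ("i32.add", None), ("local.set", 2)]))
```

Push a third operand before adding and the allocator runs out of registers, so the third `local.get` lands in a `[sp+0]` slot; a real backend does this with far better heuristics, but the *interesting* optimizations (inlining, constant folding, etc.) were usually already done by the producer.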
