Not only is it subjective but V8 does so much to optimize JavaScript code that I wouldn't be surprised if the benefits for most applications were negligible anyway.
Although JavaScript is still an interpreted language, it basically gets "compiled" when the browser parses the bundle. On the surface, the only thing WebAssembly automatically gets you is skipping that runtime parse-and-compile phase.
I might be talking out of my ass, so take this with a grain of salt, but I wouldn't be surprised if, once we start collecting real data on this stuff, SOME WebAssembly code actually runs slower than plain JS. My hypothesis: if you're starting with non-JavaScript code, you might be doing things in that language that would be slower done the same way in JavaScript. I'm thinking of things like Array.map(), .filter(), etc., which are hyper-optimized in V8. If you're taking an algorithm from C code or something, which then gets compiled to WebAssembly, it's not a given that it will compile to WebAssembly that is as optimized as what V8 does when it comes across those API calls. Again, this is just a hypothesis and I could be way off base.
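To be concrete about what I mean (a made-up sketch, not a benchmark): here's the same algorithm written in the "idiomatic JS" shape versus the shape you'd typically get porting a C-style loop. Both are correct; my hypothesis is only that the engine may treat these code paths very differently.

```javascript
const data = Array.from({ length: 1000 }, (_, i) => i);

// Idiomatic JS: built-ins like .filter()/.map() that V8 can fast-path.
const idiomatic = data.filter(x => x % 2 === 0).map(x => x * x);

// The shape you'd get porting a C-style algorithm: manual index loop.
const ported = [];
for (let i = 0; i < data.length; i += 1) {
  if (data[i] % 2 === 0) ported.push(data[i] * data[i]);
}
```

Same output either way; the open question is which form a given engine optimizes harder.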
In any case, what we need is real-world data. I have no doubt that for certain applications you can avoid land mines by hiring devs who are experienced building performance-critical things at a lower level than your average JS dev... and their experience in those languages may transfer very well to the browser. In this scenario, you're not getting huge perf wins from using WebAssembly per se... you're getting huge perf wins from not doing the typical stupid, lazy, ignorant things that most average JS devs do... like cloning large objects with the spread operator, over and over and over again, "because immutability."
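To spell out that anti-pattern (a hypothetical sketch, names made up): spreading a large object on every single update re-copies every key each time, so n updates on an m-key object cost O(n * m). You can keep the same immutability guarantee for callers with one clone per batch.

```javascript
// A large-ish object to make the cost visible.
const bigState = Object.fromEntries(
  Array.from({ length: 10_000 }, (_, i) => [`key${i}`, i])
);

// The anti-pattern: a fresh shallow clone of ALL 10,000 entries per update.
function updateOneAtATime(state, entries) {
  let next = state;
  for (const [k, v] of entries) {
    next = { ...next, [k]: v }; // full re-copy on every iteration
  }
  return next;
}

// Same immutability from the caller's point of view: one clone total.
function updateBatched(state, entries) {
  const next = { ...state };
  for (const [k, v] of entries) next[k] = v;
  return next;
}
```

Both leave the original object untouched; only the second does a sane amount of work.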
WebAssembly is still a flavour of assembly. It only reaches *near*-native performance, rather than truly native, because the interface to JavaScript has overhead. Every action in JavaScript incurs overhead from dynamic types and objects, as well as dynamic memory allocation and garbage collection. Wasm can theoretically ignore all of that and run as if it were compiled for the host system, except when it needs to interact with the JavaScript environment.
It's astonishing how fast JavaScript has become. But even if it were fully compiled, it would still be a language with higher overhead.
You can still write bad code, or compile a language with high overhead into WASM. It remains valuable for porting existing libraries into the browser and reducing bandwidth usage. But done properly with a fast compiled language like C or Rust, wasm can unlock some magical things in the web ecosystem.
> It's only nearly native performance to the real code because the interface to JavaScript has overhead.
That's not at all the only reason WASM is slower than native. WASM is bytecode. It still has to be JIT compiled, just like JavaScript. And WASM to begin with does not have a very complex instruction set, so the code generated by your language's LANG-to-WASM backend can't be optimized as heavily as its native backend.
As a rule of thumb (from my experience), you're almost never going to achieve significantly better performance in WASM than the equivalent algorithm written in optimized JS.
Eeh. Comparing a garbage collected jit language to bytecode jit parsing is... quite possibly the most insane argument you could make.
And what does instruction count have to do with optimization? Most languages optimize in architecture-invariant representations before emitting the bytecode. So the wasm binary is already optimized.
From searching the web to make sure: the language barrier between wasm and JS is the biggest performance bottleneck. So it's generally recommended not to bother for simple algorithms until it gets better.
> Eeh. Comparing a garbage collected jit language to bytecode jit parsing is... quite possibly the most insane argument you could make.
Not understanding that WASM still has to be optimized and compiled to machine code, and then calling me insane over it, is certainly an approach to discourse.
> And what does instruction count have to do with optimization?
Not going to bother with this one. Do some research into how compilers work, maybe.
> From searching the web to make sure; the language barrier between wasm and js is the highest performance bottleneck.
It certainly is. Not sure where I claimed it wasn't. What I'm saying is that there are also other reasons a program will run slower when compiled to WASM compared to when compiled to native.
> Not going to bother with this one. Do some research into how compilers work, maybe.
> so the code generated by your language's LANG-to-WASM backend can't be optimized as heavily as its native backend.
https://cs.lmu.edu/~ray/notes/ir/
Intermediate representations. Most modern compiled languages are optimised independently of the target architecture. So the code has been optimised way before it even became wasm text. The LANG-to-WASM backend has most, if not all, of the optimisations that LANG-to-arm64 would have done. The final step is nearly trivial in compute and complexity, making its implementation a pretty approachable intermediate programming exercise.
Comparing it to running a modern compiler's optimisation passes over a high-level language is apples and oranges. The only optimisation realistically remaining is the processor's speculative execution engine.
> Not sure where I claimed it wasn't
> Not only is it subjective but V8 does so much to optimize JavaScript code that I wouldn't be surprised if the benefits for most applications were negligible anyway.
This is kind of subjective, no? I wonder what they consider "near native speed"? I couldn't find any real numbers in their documentation.