From digging, it looks like the issue here (or one of the issues) is that the WebAssembly encoding for some Lisp use cases (multiple value return) is not very compact. Making the proposed change would presumably reduce the size of Lisp code once compiled to WebAssembly; it would not really affect the behavior of the program once compiled to native code.
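To make the compactness point concrete, here is a rough Python sketch of how a compiler might lower a two-value return when the target has no native multi-value support. Everything here is hypothetical illustration (the function names and the side-channel buffer are made up, not Wasm's actual mechanism): the primary value travels in the normal return slot, while extras are spilled through a side channel that every call site must read back, adding instructions per call.

```python
# Hypothetical lowering of a Lisp-style two-value function on a target
# without multi-value returns. MV_BUFFER stands in for a reserved
# global / linear-memory region the compiler would use as a side channel.

MV_BUFFER = []

def truncate_lowered(x, y):
    q, r = divmod(x, y)
    MV_BUFFER.clear()
    MV_BUFFER.append(r)   # secondary value goes through the side channel
    return q              # only the primary value is returned directly

def caller():
    q = truncate_lowered(7, 2)
    r = MV_BUFFER[0]      # extra load at every call site that wants both values
    return q, r
```

With native multi-value support, the side-channel stores and loads disappear and both values come back directly, which is where the size (and minor efficiency) win comes from.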
I can relate to the WebAssembly team's reluctance to add features which are only really wanted by a small subset of users, especially when they only affect the binary size. These features, if implemented, may suffer from poor test coverage. My own preference is that compact binaries are nice, but if you're going to use a high-level language, an increase in binary size is just expected (say, an order of magnitude) unless the encoding/VM and language were designed in concert (Java + JVM and C# + CIL are two examples). Heck, C++ binaries can be enormous.
Then again, I didn't dig deep enough to really understand the nuances of the argument. Perhaps someone could elaborate.
The target prevails over the language. The implicit intent is to convert LLVM bitcode.
"Please keep in mind that Wasm is not a user-facing language, it is a compilation target. To justify the extra complexity of this feature for its own sake, you would need to come up with convincing evidence that compilers would significantly benefit from it. I doubt that."
Why are you talking about SBCL here? This is not for the benefit of SBCL, which does not target WebAssembly, but about implementing multiple return values easily. Yes, not many languages have multiple return values today, but WebAssembly should not focus only on the languages that are popular today, but also on the languages of the future.
From what I understand, it really is just Lisp, not any language with multiple return values. The issue here is Lisp's semantics for multiple return values being slightly cumbersome to compile (but by no means difficult).
This is not only about code size, but about efficiency and, more fundamentally, about what multiple values are. Are MVR just syntactic sugar around lists, a second-class mechanism ("tuples don't nest") for when you have zero-or-one return values, or a first-class feature which can benefit from support by the runtime and compiler?
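The three interpretations above can be sketched in Python (a purely illustrative emulation; the function names are made up, and Python's own tuples naturally play the "composite value" role):

```python
# 1. Sugar around lists: every multi-value return allocates a container
#    that the caller must take apart.
def divmod_list(a, b):
    return [a // b, a % b]

# 2. Composite (first-class) value: the pair itself is a value, and
#    such values nest, e.g. (1, (2, 3)) is meaningful.
def divmod_tuple(a, b):
    return (a // b, a % b)

# 3. CL-style multiple values: a caller expecting one value silently
#    gets the primary and the rest are dropped; emulated here with an
#    explicit protocol where the primary value comes first.
def divmod_mv(a, b):
    return (a // b, a % b)

def primary(result):
    # a one-value caller takes the first value and discards the rest
    return result[0]
```

In a real first-class implementation the values would ideally be passed in registers with no allocation at all, which neither the list nor the tuple emulation captures.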
The last time you discussed this[0], you attributed to Common Lisp some problems that were actually present in Scheme's multiple values.
And here, as explained in this thread, Wasm is not a user-facing language but a compiler target. That means the usual complaints about the somewhat verbose binding constructs of CL's values (which I find acceptable as a user) are not a problem either, since the code ought to be generated from other languages (which might not even define multiple values themselves, but for which a compiler could generate code that uses MVR).
It seems that, for you, the only acceptable way to produce multiple values is to return a composite value. I think that having a dedicated type to represent multiple values is a useful tool to give to the programmer.
You can think that. I disagree. That's fine. Good on you.
And yes, I did mis-attribute some problems with Scheme's implementation of MVR to Common Lisp. However, I admitted to the mistake in that thread, and eventually got the idea.
So please, don't confuse the issue. Here, I'm not talking just about Lisp. I just don't like MVR in general. And yes, I think that a composite value is the only good way to produce multiple values. But I don't have an objection to WASM adding MVR, because it's compiler-facing, as you pointed out. I object to higher-level languages adding it.
Not really. Aside from smaller binary size, parsing a binary bytecode vs. parsing a JS pseudo-bytecode makes a huge difference in load time. Plus, they get to optimize it for efficient translation, instead of hacking JS constructs to get the desired semantics and then trying to recognize those in the JIT.
Asm.js is a hack made with duct tape and chewing gum; wasm is a solution designed/engineered to solve the general problem, not just a smaller asm.js.
ASM.js builds can be quite large; even tens of MB is not uncommon. Reducing the binary size isn't just "nice to have", it changes the viability of the platform.
I don't think anyone is arguing that binary size isn't relevant. It just has to be weighed against the other parameters we want to optimize, like implementation complexity.
Any way to reconcile power-of-two memory structure and boundary checks? I can't imagine all code should be constrained to power-of-two memory, but if you throw in multi-threading somehow, I think it would start making more sense to have the best of both worlds.