
The problem is that the alternatives to comptime for generics generally seem to have a hideous effect on compile times (see: C++ and Rust).

Is there a language that does generics in a way that doesn't send compile times to the moon?



Shouldn't comptime have the same compile time implications as templates? In both cases you're essentially recompiling the code for every distinct set of comptime args/template parameters.


Zig doesn't instantiate anything that doesn't get called. So it doesn't have to generate a whole bunch of templated functions and then optimize down to the ones that actually get used.

The upside is that if you only call a generic function with a u32, you don't instantiate an f32 version as well. The downside is that when you do decide to call that function with an f32, all the comptime stuff for f32 suddenly gets compiled and might have an error.

In practice, I feel that I gain way more from the fast compiles than I lose from having a path that accidentally never got compiled, as my unit tests almost always force those paths to be compiled at least once.
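A minimal C++ sketch of the same two-phase behavior (hypothetical names, not from the thread): a function template's body is only fully type-checked when it's instantiated, so a path nobody calls can hide an error until its first use.

    #include <cstdio>

    struct Point { int x, y; };

    template <typename T>
    T add_one(T v) {
        return v + 1;   // fine for int, nonsense for Point
    }

    int main() {
        std::printf("%d\n", add_one(41));  // instantiates add_one<int> only
        // add_one(Point{});               // first use would instantiate
                                           // add_one<Point> and fail: no
                                           // operator+ for Point
    }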


> it doesn't have to generate a whole bunch of templated functions and then optimize down to the ones that actually get used.

It's been a long time since I've dealt with templated C++, but I thought this was how C++ does it too.

C++ will only generate functions for template parameters that are actually used, because it compiles a version of the templated function for each unique set of template parameters.


C++ is at the very least less lazy than Zig. As an example, if you write a constexpr expression that evaluates a ternary and instantiates a function differently in the two prongs, both prongs will be instantiated, even the one that doesn't end up in the final program. Yes, there are workarounds, but I didn't end up using them. I just moved the offending assert from compile time to runtime because this particular code was not that important.
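A minimal sketch of that pitfall (hypothetical names): even when the condition is a compile-time constant that rules one prong out, both prongs of the ternary get instantiated.

    #include <cstddef>
    #include <type_traits>

    template <typename T>
    constexpr std::size_t checked_size() {
        static_assert(!std::is_void_v<T>, "no void payloads");
        return sizeof(T);
    }

    template <typename T>
    constexpr std::size_t slot_size() {
        // Both prongs are instantiated, so for T = void the untaken
        // checked_size<T>() prong still fires its static_assert.
        return std::is_void_v<T> ? 0 : checked_size<T>();
    }

    constexpr auto ok = slot_size<int>();      // fine
    // constexpr auto bad = slot_size<void>(); // fails to compile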


But the question is whether that's actually decisive in the slow-compilation problem. The solution in an eager language for evaluating too much stuff is basically "more if statements". Same thing in C++ metaprogramming: use more "if constexpr". If that's all it took to fix C++ compile times, it would have been done a decade ago. The actual problem is all the stuff that you do actually use, which has to get repeatedly inlined and optimized away for zero-cost abstraction to work.
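For contrast, the "more if constexpr" fix applied to the same hypothetical sketch from above: the discarded branch of a dependent if constexpr is never instantiated.

    #include <cstddef>
    #include <type_traits>

    template <typename T>
    constexpr std::size_t checked_size() {
        static_assert(!std::is_void_v<T>, "no void payloads");
        return sizeof(T);
    }

    template <typename T>
    constexpr std::size_t slot_size() {
        if constexpr (std::is_void_v<T>) {
            return 0;                  // taken for void
        } else {
            return checked_size<T>();  // only instantiated when taken
        }
    }

    static_assert(slot_size<void>() == 0);  // now compiles
    static_assert(slot_size<int>() == sizeof(int));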


No, I don't think it is a big deal for compilation speed.


C++ monomorphises generics on demand too. That's why it can have errors specific to specialization and why template error messages spam long causal chains.

C++ compile times are mostly due to headers, which in the case of templates result in a lot of redundant work that is then deduplicated by the linker.
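The classic mitigation for that redundant work is C++11's extern template, sketched here with a hypothetical header: every including TU skips the implicit instantiation, and exactly one .cpp provides it.

    // sum.h (hypothetical): included by many translation units. Without
    // help, each TU instantiates sum3<int> and the linker discards the
    // duplicates.
    template <typename T>
    T sum3(T a, T b, T c) { return a + b + c; }

    // Suppress implicit instantiation in every TU that includes this:
    extern template int sum3<int>(int, int, int);

    // sum.cpp (hypothetical): the one explicit instantiation.
    template int sum3<int>(int, int, int);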


Zig is lazy, and C++ is eager. I can define an infinite set of mutually recursive types in Zig, and only the ones I actually use will be instantiated (not an everyday need, but occasionally interesting -- I had fun building an autodiff package that way with no virtual function overhead, and the set of type descriptors being closed under VJP meant that you could support arbitrary (still only finite) derivative-like tensors, not just first and second order).


As C++ itself proves with modules and binary libraries, C++ compile times can be much better than they usually are.

Rust suffers because it compiles everything from source and the frontend sends piles of unprocessed LLVM IR to the traditional, slow backend.

This can be improved with better tooling: the Cranelift backend is one example, there could be an interpreter, and so on.

Examples of languages with similar polymorphic power that don't send compile times to the moon: Standard ML, OCaml, Haskell, D, Ada.


AFAIK part of the problem with Rust is also that it compiles crates individually before linking them, and because of that it cannot use upfront knowledge of what's going to be needed; as such, a generic function that crosses a crate boundary is going to be handled twice by the compiler.

This was initially done so that Rust could compile crates in parallel by spawning more rustc processes, which is obviously much easier than building a parallel compiler directly, but in the end it's suboptimal for performance.


comptime for generics is a superset of the things that C++ and Rust do for generics


OCaml


OCaml doesn't monomorphize functions. Instead, references to every type are the same size (either a tagged int or a pointer). This is a sweet spot for OCaml, but it doesn't really work for a language that doesn't allocate everything on the heap.
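A rough C++ rendition of that uniform-representation idea (hypothetical code, in the void*-passing style of C's qsort): one compiled body serves every element type because each element is a same-sized word, at the cost of boxing and indirection.

    #include <cstddef>
    #include <cstdio>

    // One compiled body handles any element type, the way OCaml's single
    // compiled function handles any tagged-int-or-pointer value.
    void print_each(void* const* items, std::size_t n,
                    void (*show)(const void*)) {
        for (std::size_t i = 0; i < n; ++i) show(items[i]);
    }

    void show_int(const void* p) {
        std::printf("%d\n", *static_cast<const int*>(p));
    }

    int main() {
        int a = 1, b = 2;
        void* items[] = { &a, &b };
        print_each(items, 2, show_int);  // no monomorphization involved
    }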


Indeed. OCaml is GC'd, and that makes the implementation different. However, the question was about compile times, and I dare say OCaml is one of the fastest ones out there, even though it has a rich and expressive type system. The conclusion then needs to be that type system expressiveness (complexity?) does not alone make for slow compile times.



