
I don't really get the obsession with compiler performance. At the end of the day you are trading performance optimizations (inlining, dead code elimination, partial evaluation, etc.) for compile times. If you want fast compile times, then use a compiler with no optimization passes, done. Compile times are then linear with respect to lines of code, and there is provably no further improvement you can make on that, because any further pass adds a linear or superlinear amount of overhead depending on the complexity of the optimization.


The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Time complexity may be O(lines), but two compilers with the same complexity can still differ enormously in their constant factors. And for incremental updates, compilers can do significantly better than O(lines).

In debug mode, Zig uses LLVM with no optimization passes. On Linux x86_64, it instead uses its own native backend, which can compile significantly faster (2x or more) than LLVM.

Zig's own native backend is designed for incremental compilation. This means, after the initial build, there will be very little work that has to be done for the next emit. It needs to rebuild the affected function, potentially rebuild other functions which depend on it, and then directly update the one part of the output binary that changed. This will be significantly faster than O(n) for edits.
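To make that concrete, here is a toy sketch (in Rust, and emphatically not Zig's actual implementation; the graph representation and function names are made up for illustration) of the dependency walk described above: given the functions that were edited, collect everything that transitively depends on them and must be re-emitted, leaving the rest of the binary untouched.

    use std::collections::{HashMap, HashSet};

    /// Given a reverse-dependency map (callee -> callers) and the set of
    /// edited functions, return every function that must be recompiled.
    fn dirty_set(
        reverse_deps: &HashMap<&str, Vec<&str>>,
        edited: &[&str],
    ) -> HashSet<String> {
        let mut dirty = HashSet::new();
        let mut stack: Vec<&str> = edited.to_vec();
        while let Some(f) = stack.pop() {
            // insert() returns false if we've already visited this function.
            if dirty.insert(f.to_string()) {
                if let Some(callers) = reverse_deps.get(f) {
                    stack.extend(callers);
                }
            }
        }
        dirty
    }

    fn main() {
        let mut rdeps: HashMap<&str, Vec<&str>> = HashMap::new();
        rdeps.insert("parse", vec!["compile"]);
        rdeps.insert("compile", vec!["main"]);
        // Editing `parse` dirties `parse`, `compile`, and `main`;
        // nothing else needs to be touched.
        println!("{:?}", dirty_set(&rdeps, &["parse"]));
    }

The work is proportional to the blast radius of the edit, not to the size of the program, which is where the better-than-O(n) behavior comes from.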


> The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Color me skeptical. I've only got 30 years of development under my belt, but even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.

Which leaves any compile-time improvements mattering only the very first time the project is cloned and built.

Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.


> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

I think the web frontend space is a really good case for fast compile times. It's gotten to the point that you can make a change, save the file, and the code recompiles, gets sent to the browser, and is hot-reloaded (no page refresh); your changes just show up.

The difference between this experience and my last time working with Ember, where we had long compile times and full page reloads, was incredibly stark.

As you mentioned, the hot build with caching definitely does a lot of heavy lifting here, but in some environments, such as a CI server, minutes-long builds can get annoying as well.

> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

Maybe, maybe not, but there's no denying that faster feels nicer.


> Maybe, maybe not, but there's no denying that faster feels nicer.

Given finite developer time, spending it on improved optimization and code generation would have a much larger effect on my development. Even if builds took twice as long.


Can't agree. Iteration speed is magical when really fast.

I'm much more productive when I can see the results within 1 or 2 seconds.


> I'm much more productive when I can see the results within 1 or 2 seconds.

That's my experience today with all my Rust projects, even though people decry the language for long compile times. As I said, hot builds (and every build while I'm hacking is a hot build) are exactly that fast already. Even on large projects. Even on my 4-year-old laptop.

On a hot build, build time is dominated by linking, not compilation. And even halving a 1s hot build will not result in any noticeable change for me.


Linking? There are two relatively simple improvements: using a faster linker and not using static helper libraries (dependency crates...) in debug builds. I understand that Rust has basically no useful support for deploying shared libraries, but they could still be useful to get faster debug builds. Well, I've never tried it, but it works well in C++. Dynamically linked binaries typically take a few milliseconds longer to start, but also take seconds less to link.
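I haven't verified this myself, but as I understand it the faster-linker part is a small per-project config change in Rust. A minimal sketch, assuming a Linux x86_64 target and that the mold linker is installed (lld works similarly):

    # .cargo/config.toml
    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]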


> I understand that Rust has basically no useful support for deploying shared libraries

Rust has excellent support for shared libraries. Historically this has meant dropping down to C types and the C ABI, but now there are more options, like:

https://lib.rs/crates/stabby

https://lib.rs/crates/abi_stable

and https://github.com/rust-lang/rfcs/pull/3470
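For the traditional C-ABI route, a minimal sketch looks like this (the function is a made-up example): build with crate-type = ["cdylib"] in Cargo.toml and the result is a shared library that any C-ABI consumer, including another Rust program, can load.

    // lib.rs
    // `no_mangle` keeps the symbol name stable so it can be found
    // via dlopen/GetProcAddress or an ordinary C declaration.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }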


Same here; Rust currently doesn't have the best tooling for UI or game development.


> Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.

So optimizing compile times isn’t worthwhile because we already do things to optimize compile times? Interesting take.

What about projects for which hot builds take significantly longer than a few seconds? That’s what I assumed everyone was already talking about. It’s certainly the kind of case that I most care about when it comes to iteration speed.


> So optimizing compile times isn’t worthwhile because we already do things to optimize compile times?

That seems strange to you? If build times constituted a significant portion of my development time I might think differently. They don't. Seems the compiler developers have done an excellent job. No complaints. The Pareto principle and the law of diminishing returns apply.

> What about projects for which hot builds take significantly longer than a few seconds?

A hot build of Servo, one of the larger Rust projects I can think of off the top of my head, takes just a couple seconds, mostly linking. You're thinking of something larger? Which can't be broken up into smaller compilation units? That'd be an unusual project. I can think of lots of things which are probably more important than optimizing for rare projects. Can't you?


The part that seems strange to me is your evidence that multi-minute compile times are acceptable being couple-second compile times. It seems like everyone actually agrees that couple-second iteration is important.


It seems like you might have missed a few words in this comment, I'm honestly having trouble parsing it to figure out what you're trying to say.

Just for fun, I kicked off a cold build of Bevy, the largest Rust project in my working folder at the moment, which has 830 dependencies, and that took 1m 23s. A second hot build took 0.22s. Since I only have to do the cold build once, right after cloning the repository which takes just as long, that seems pretty great to me.

Are you telling me that you need faster build times than 0.22s on projects with more than 800 dependencies?


This is the context:

> > The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

> Color me skeptical. I've only got 30 years of development under my belt, but even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

If your counterexample to 1-minute builds being disruptive is a 1-second hot build, I think we’re just talking past each other. Iteration implies hot builds. A 1-minute hot build is disruptive. To answer your earlier question, I don’t experience those in my current Rust projects (where I’m usually iterating on `cargo check` anyway), but I did in C++ projects (even trivial ones that used certain pathological libraries) as well as some particularly badly-written Node ones, and build times are a serious consideration when I’m making tech decisions. (The original context seemed language-agnostic to me.)


I see. Perhaps I wasn't clear. I've never encountered a 1-minute hot build in Rust, and given my experience with large Rust codebases like Bevy I'm not even sure such a thing exists in a real Rust codebase. I was pointing out that no matter how slow a cold build is, hot builds are fast, and are what matters most for iteration. It seems we agree on that.

I too have encountered slow builds in C++. I can't think of a language with a worse tooling story. Certainly good C++ tooling exists, but is not the default, and the ecosystem suffers from decades of that situation. Thankfully modern langs do not.


Yeah, I agree. Much like how the time you spend thinking about the code massively outweighs the time you spend writing the code, the time you spend writing the code massively outweighs the time you spend compiling the code. I think the fascination with compiler performance is focusing on by far the most insignificant part of development.


This underestimates the fact that running the code is part of the development process.

With fast compile times, running the test suite (which requires recompiling it) is fast too.

Also, if the language itself is designed to make it easy to write a fast compiler, that makes your IDE fast too.

And just in case you're wondering: yes, Go is my dope.


I've worked with Delphi where a recompile takes a few seconds, and I've worked with C++ where a similar recompile takes a long time, often 10 minutes or more.

I found I work very differently in the two cases. In Delphi I used the compiler as a spell checker. With the C++ code I spent much more time looking over the code before compiling.

Sometimes, though, you're forced to iterate over small changes. It might be bug hunting, where you add some debug code that lets you narrow things down a bit more, then add some more code, and so on. Or it might be some UI thing where you need to check how it looks in practice. In those cases the fast iteration really helps. I found those cases painful in C++.

For important code, where the details matter, then yeah, you're not going to iterate as fast. And sometimes forcing a slower pace might be beneficial, I found.


> even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually caused by hardware generation (FPGA HDL builds) or poor cross-compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the 1-hour ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture), but that's still pretty painful. One day I'll find a way to use the Zig toolchain for cross compiles, but it gets thrown off by some of the C macro and custom resource embedding nonsense.

Edit: missed some context on a lazy first read, so ignore the snark above.


> Edit: missed some context on a lazy first read, so ignore the snark above.

Yeah, 1 minute was the OP's number, not mine.

> FPGA HDL builds

These are another thing entirely from software compilation. Placing and routing is a Hard Problem(TM) which evolutionary algorithms only find OK solutions for in reasonable time. Improvements to the algorithms for such carry broad benefits. Not just because they could be faster, but because being faster allows you to find better solutions.


Any delay between writing code and seeing the result is empty space in my mind where distractions can enter.


When you are writing and thinking about code, you are making progress. You are building your mental model and increasing understanding. Compile time is often an excuse to check Hacker News while you wait for the artifact to confirm your assumptions. Of course faster compile times are important, even if they're dwarfed by the time spent writing the code. It's not the same kind of time.


But if those passes (or indeed any other step of the process) are inefficiently implemented, they could be improved. The compiler output would be the same, but now it would be produced more quickly, and the iteration time would be shorter.


I think it highly depends on what you're doing.

For some work I tend to take a pen and paper and think about a solution, before I write code. For these problems compile time isn't an issue.

For UI work, on the other hand, it's invaluable to have fast iteration cycles to nail the design, because it's such an artistic and creative activity.


> If you want fast compile times, then use a compiler with no optimization passes, done. Compile times are then linear with respect to lines of code, and there is provably no further improvement you can make on that, because any further pass adds a linear or superlinear amount of overhead depending on the complexity of the optimization.

Umm, this is completely wrong. Compiling involves a lot of stages, and language design as well as compiler design can make or break them. Parsing is relatively easy to make fast and linear, but the other stuff (semantic analysis) is not. That's why we see a huge range of compile times across programming languages that are (mostly) similar.
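A tiny Rust illustration of why semantic analysis is more than a single linear scan: type inference alone can force the checker to propagate information backwards across a function.

    fn main() {
        let mut v = Vec::new(); // element type is unknown at this point
        v.push("hello");        // only here does the compiler learn v: Vec<&str>
        println!("{:?}", v);
    }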


clang -O0 is significantly slower than tcc. So the problem is clearly not (just) the optimisation passes.


People are (reasonably) objecting to Clang's performance even at -O0. Still, it is a great platform for sophisticated optimizations and code analysis tools.


A faster compiler means developers can go faster.



