Hacker News | achatham's comments

We use C++ modules at Waymo, inside the Google monorepo. The Google toolchain team did all the hard work, but we applied it more aggressively than any team I know of. The results have been fantastic, with our largest compilation units getting a 30-40% speedup. It doesn't make a huge difference in a clean build, as that's massively distributed. But it makes an enormous difference for iterative compilation. It also has the benefit of avoiding recompilation entirely in some cases.

Every once in a while something breaks, usually around exotic use of templates. But on the whole we love them, and we'd have to do so much ongoing refactoring to keep things workable without them.

Update: I now recall those numbers are from a partial experiment, and the full deployment was even faster, but I can't recall the exact number. Maybe a 2/3 speedup?


How much of the speedup you're seeing is modules, versus the inherent speedup of splitting and organizing your codebase/includes in a cleaner way? It doesn't sound like your project is actually compiling faster than before, but rather it is REcompiling faster, suggesting that your real problem was that too much code was being recompiled on every change (which is commonly due to too-large translation units and too many transitive dependencies between headers).


This was in place of reorganizing the codebase, which would have been the alternative. I've done such work in the past, and I've found that optimizing compilation speed is a pretty rare skill set. There's just a lot less input for the compiler to look at, as the useless transitive text is dropped.

And to be clear, it also speeds up the original compilation, but that's not as noticeable because when you're compiling zillions of separate compilation units with massive parallelism, you don't notice how long any given file takes to compile.
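
(For context on the mechanism, a minimal sketch of Clang header modules, using a hypothetical mylib and the stock -fmodules flags rather than anything Waymo-specific: an unchanged #include gets satisfied from a cached, pre-parsed module instead of textually re-preprocessing the header and its transitive includes in every translation unit.)

    // --- module.modulemap (plain Clang module-map syntax; the library name is hypothetical) ---
    // module mylib {
    //   header "mylib.h"
    //   export *
    // }

    // --- mylib.h (an ordinary header, unchanged) ---
    #pragma once
    inline int answer() { return 42; }

    // --- main.cc (also unchanged) ---
    // Built with something like: clang++ -fmodules -fimplicit-module-maps -c main.cc
    // The #include below is resolved from the cached .pcm for mylib instead of
    // re-parsing the header text (and its transitive includes) in every TU.
    #include "mylib.h"

    int main() { return answer(); }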


Are those actually the C++20 modules or clang modules (-fmodules)?


Clang modules. Sorry, didn't realize the distinction!


Clang modules are nothing like what got standardized. Clang modules are basically a cleaned-up, systematized form of precompiled headers, and they absolutely speed up builds; in fact, that is primarily their function.
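
(For contrast, a minimal sketch of a standardized C++20 named module, the thing Clang header modules are being distinguished from; the module name is hypothetical and the build commands are just one way to drive Clang.)

    // --- mylib.cppm: a C++20 module interface unit (a new kind of source file) ---
    export module mylib;                 // declares the named module
    export int answer() { return 42; }   // only exported names are visible to importers

    // --- consumer.cc ---
    import mylib;                        // no textual inclusion; transitive headers don't leak through
    int main() { return answer(); }

    // Roughly, with Clang:
    //   clang++ -std=c++20 --precompile mylib.cppm -o mylib.pcm
    //   clang++ -std=c++20 -fmodule-file=mylib=mylib.pcm -c consumer.cc
    //   clang++ -std=c++20 -c mylib.pcm -o mylib.o
    //   clang++ consumer.o mylib.o -o consumer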


Were you using pre-compiled headers before?


At Waymo we use C++ modules via Clang and got the demanded 5x speedup.

As the article mentions, you need a close relationship between the compiler and build system, which Google already has. The Google build tooling team got modules to mostly work but only turned them on in limited situations. But we bit the bullet and turned them on everywhere, which has sped up compilation of individual files by more than 5x (I forget the exact number).

The remaining problem is that sometimes we get weird compilation errors and have to disable modules for that compilation unit. It's always around templates, and Eigen has been gnarly to get working.


> “While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Well, two ways to make that true!


And imagine a hypothetical project that's 75% excavation. It'd never be built today, but if excavation gets cheaper the project could be feasible.

And then an explosion of underground bunkers and volcano lairs.
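
(Rough arithmetic on that hypothetical, with an assumed cost drop: if excavation is 75% of the budget and excavation gets, say, 5x cheaper, the total falls to 0.25 + 0.75/5 = 0.40 of the original, so the whole project becomes about 2.5x cheaper. The 5x figure is only illustrative.)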


As a counter to your one example:

I've worked on autonomous vehicles for 16 years and my largest philanthropic effort is improving public transit. The common theme is being really interested in transportation and wanting it to work well for people.

Cruise was also the top funder of one of San Francisco's recent MUNI funding ballot propositions (which just barely failed). You can certainly have a cynical take on that, but they still did it.


The paper under discussion only considers human-driver accidents in environments similar to where Waymo operates. So it's only making a claim about like-for-like driving.

You could still say you care about snow driving and want to see that comparison, but it doesn't mean the claims in this paper are wrong.


What you're describing is L4. L4 is fully autonomous but with limitations on where/when it can operate. Level 5 is that but without restrictions.

Levels 2 and 3 are the mostly-automated versions, and they differ in how much notice they're supposed to provide and how much attention they require.


I think you may mean Level 4. The difference between 4 and 5 is that 5 doesn't have any territory/environmental constraints, but you said you don't mind those.


By "focus on" I mean that should be the goal.

If they require high-speed cellular service, then the system can't scale to Level 5 driving. Add a Starlink dish on top and the hardware could eventually scale to the entire continental US, etc.


Your link literally says the opposite. Sales dropped from 2023 to 2024.


But the Bakersfield/Merced HSR line is an actual project, estimated at over $30B (it makes no sense independently but is part of LA-to-SF).

You seem to be comparing an actual, private tech project to what you wished public transit looked like, not what it actually looks like.

Public transit is awesome, but construction costs in the anglophone world are bananas (https://transitcosts.com/).


The Sheppard line is real. It was constructed about 20 years ago. I have used it.

