
Hypothetically, if you could daemonize javac, would JIT eventually kick in over multiple recompiles of the same code? The obvious use case for this would be IDEs, but I imagine it could work in CI too if you had some kind of persistent "compiler as a service" setup that outlived the runners.

Not to detract from the cool work done here, just curious if this other approach has been tried too.
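For concreteness, here's a rough sketch of what I mean using the standard javax.tools API: one long-lived JVM that re-runs the compiler in-process, so HotSpot gets a chance to JIT the compiler's own hot paths. (The class name, source path, output dir, and loop count are all made-up placeholders.)

    import java.io.File;
    import java.util.List;
    import javax.tools.JavaCompiler;
    import javax.tools.JavaFileObject;
    import javax.tools.StandardJavaFileManager;
    import javax.tools.ToolProvider;

    // Hypothetical "javac daemon": compile the same sources over and over
    // in-process, giving HotSpot time to warm up the compiler itself.
    public class CompilerDaemon {
        public static void main(String[] args) throws Exception {
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
            try (StandardJavaFileManager fm =
                     javac.getStandardFileManager(null, null, null)) {
                Iterable<? extends JavaFileObject> units =
                    fm.getJavaFileObjects(new File("src/Hello.java"));
                for (int i = 0; i < 100; i++) {  // repeated recompiles of the same code
                    boolean ok = javac.getTask(null, fm, null,
                            List.of("-d", "out"), null, units).call();
                    System.out.println("build " + i + ": " + (ok ? "ok" : "failed"));
                }
            }
        }
    }

In a real setup you'd feed it fresh sources over a socket or similar, but the point is that the compiler's classes stay loaded and warm across builds.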




> Hypothetically, if you could daemonize javac, would JIT eventually kick in over multiple recompiles of the same code?

This is actually how Gradle works by default; in fact, even if you pass `--no-daemon`, it will start a daemon, and just kill it after the build (lol). It's daemon-first, daemon-only, because of this exact issue.

As I understand it, Gradle's daemon timeout is typically 10 minutes, but it can be extended. We're guessing builds in that style rarely hit the ~10k-class threshold where JIT performance converges with native, especially since Gradle also supports incremental compilation, which further slows progress toward a warm JIT. Ironically, Gradle's heavy focus on build caching and incremental compilation is conceptually in tension with ever reaching a warm JIT.

Native Image has JIT capabilities (just not as mature as HotSpot yet), and keeping the daemon alive would probably still yield wins. We haven't tried it yet; that's a good idea.


I'm aware of the Gradle daemon, but wasn't sure if it only handled dependency resolution and other build orchestration tasks or if it also ran the compiler in the same JVM instance. The last time I worked somewhere using Gradle, I think we forked the compiler anyway to ensure the project was built on exactly the same Java version regardless of what the Gradle environment itself was running on, so in that case there was definitely room for startup optimization.

I do recall the Gradle daemon living much longer than 10 minutes, though. The docs say 3 hours is the default, although if you're really trying to maximize the JIT advantage it would perhaps make sense to keep it alive as long as possible.


> or if it also ran the compiler in the same JVM instance

Honestly, Gradle's toolchain resolution mechanisms have changed so much that it's a little hard to keep track. Good point.

> The docs say 3 hours is default

Huh, I'm not sure where I got 10 minutes. I'll research these assumptions some more, but even so, having used Gradle professionally and personally for many years (like you), I wonder how often I would have hit that threshold. I was mostly working in smaller projects and companies, though, so my experience could be different from the norm. Even assuming engineers do hit that threshold, the benefit only arrives toward the latter end of that window, and only lasts until the daemon goes away. An experience optimized for cold start and for avoiding reflection balances out this approach, and is even build-cacheable with standard tools like sccache. There are still a lot of projects where a JIT-based javac would be optimal.

> perhaps would make sense to keep it alive as long as possible

It would for sure. I'm not sure why Gradle was never able to execute on Bazel's full vision for remote execution. Probably a hermeticity problem, considering the challenges they already seem to face with e.g. the configuration cache.

In any case, this is great feedback, thank you :)


I work professionally in Python these days, but I've just spent a bit of time digging into the contemporary situation because the JVM environment is much more interesting to me, and found this spec: https://build-server-protocol.github.io

Seems the Scala guys have doubled down on the "compiler as a service" approach, presumably because their compile-time story continues to be painful. But it also looks like the same solution is used for the VS Code Java/Gradle integration, so it seems like this might be the more conventional way to go for traditional JVM projects.

For processes where the JIT takes a while to kick in, but where you also don't want to waste memory keeping JVMs alive while they're not doing anything (and compilation could be a good example of that), I wondered if there was a way to snapshot and restore the JVM state, and it turns out some people are experimenting with that too: https://openjdk.org/projects/crac/
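For anyone curious, the CRaC programming model is pretty small. A minimal sketch, assuming a CRaC-enabled JDK build (the jdk.crac package and the checkpoint/restore commands in the comments come from that project; the class itself is a made-up example):

    import jdk.crac.Context;
    import jdk.crac.Core;
    import jdk.crac.Resource;

    // Sketch of a CRaC-aware service: register a Resource so the JVM can be
    // checkpointed to disk after warmup and restored later with its
    // JIT-compiled code already in place.
    public class WarmService implements Resource {
        @Override
        public void beforeCheckpoint(Context<? extends Resource> ctx) {
            // close sockets, files, etc. before the process image is written
        }

        @Override
        public void afterRestore(Context<? extends Resource> ctx) {
            // re-open them after the image is restored
        }

        public static void main(String[] args) throws Exception {
            Core.getGlobalContext().register(new WarmService());
            // ... warm up the workload, then checkpoint programmatically:
            Core.checkpointRestore();
            // or trigger externally: jcmd <pid> JDK.checkpoint
            // restore later with: java -XX:CRaCRestoreFrom=<checkpoint-dir>
        }
    }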

It's all neat stuff!


> I work professionally in Python these days

Elide can run Python using GraalPy, and we want to do some similar toolchain stuff in that universe. We are working on an integration with uv. Thanks also for linking this Build Server Protocol thing -- this looks very cool, very relevant, and I don't think I've seen this yet.

> Seems the Scala guys have doubled down on the "compiler as a service" approach, presumably because their compile time story continues to be painful

I wonder if we could run the Scala compiler. Reflection is a challenge in native mode, and IIRC Scala is reflection-heavy, but I have no idea; I've never used it myself. I'll look into this.

> snapshot and restore the JVM state, and it turns out some people are experimenting with that too

There are many efforts in this area: CRaC, Leyden, the new JDK 24 AOT stuff, and pre-warmup work before that. These all do make a difference, but Native Image takes AOT optimization to a whole other level and seems to perform better for quick-startup, quick-shutdown cases [1].

Elide also runs JVM code via Truffle/Espresso, and we're going to need a regular JVM integration. On that side, CRaC and friends will be useful for cutting down startup time.

[1]: https://github.com/simonis/LeydenVsGraalNative


For Scala 2 at least, the standard library is very heavy and the compiler of course relies on it. It has a god object, Predef, that cascades massive classloading if you touch anything in it. As a result, keeping a warm JVM up to avoid reloading it is essential for latency.

Incremental rebuilds can actually be not too bad in Scala when a warm JVM is used for compilation, as it is by sbt and other Scala build tools, but sbt also by default uses the warm JVM to run in-process tests. This too can be reasonably quick, but it leads to problems if there are any resource leaks in the user's test suite.

Thus with sbt, you must either exercise discipline and care to keep tests from leaking anything, or you will periodically have to restart sbt, which can be very painful because it is much heavier than the already-heavy Scala compiler.

It's a disappointing state of affairs, because while there is a lot to like about Scala, it is impossible to escape the bloat. Even if you opt out of sbt, the lack of modularity in the standard library forces workarounds such as native-image, which has its own issues (including horrific build times for making native images and weak reflection support). Even if you or your org avoids the direct pitfalls, it is likely that at some point you will end up debugging a dependency that doesn't.

I personally abandoned Scala due to this bloat, which was a shame because I find it to have the best combination of expressiveness and pragmatism of any established general-purpose language.



