GraalVM is an interesting technology. I've been playing a bit with native-image in a Kotlin project, and it lets me build native binaries from my Kotlin code, with support for much of the existing Java/Kotlin library ecosystem.
The binaries are a bit large (10MB for a Kotlin hello world), but they are fast. I'll be using this for some personal CLI tools, since Kotlin+Maven is my personal 10x platform.
Besides CLI projects, I've also done some experimentation with GUI and database stuff. Using the Gluon plugin, GraalVM can compile a native binary for a JavaFX app that talks to a SQLite database.
When using a library that relies on reflection, there might be some GraalVM config to fiddle with, but mostly it just works, and some libraries are already "native-image ready", with the necessary config shipped inside the published package.
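For anyone curious, that config is typically a `reflect-config.json` that libraries ship under `META-INF/native-image/` in their JARs. A minimal sketch of the format (the class name here is hypothetical):

```json
[
  {
    "name": "com.example.MyDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

native-image picks these files up automatically at build time, which is what makes those libraries "native-image ready" without any extra flags.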
Have you tried taking a look at UPX for the binary size? I found that it significantly decreased the size of the binary, to the point where the problem was more the size of my dependencies (large JARs that could not be optimized away), and I could get the binary down to a completely manageable size. They aren't C-small, but they were better than traditional Java apps by an order of magnitude.
I had not heard of UPX until just now and did a quick test. With the `-7` compression level it brings the 10MB binary down to 3.6MB, but execution time was roughly 10x worse.
I haven't touched this in a long while, but I believe startup will be impacted; post-startup, the impact should be smaller once the binary has been loaded. It depends on how your executable is used, of course, but a hello-world application will show the worst performance degradation, since the code is loaded and then executed only once.
UPX used to be great when reading from disk took way too long... and having small binaries to read from disk and unpacking in memory was faster... but now in the age of SSDs, I don't know how much sense it makes to compress binaries.
Back in the day, I used to use UPX on most apps, which would greatly improve launch times.
native-image takes a minute to compile the binary from the jar, but after that it's pretty smooth. I noticed my CSS styles were a bit different here and there, but that could be due to differences in the default stylesheet used in my JVM build (OpenJDK with OpenJFX) vs the native-image build (using Gluon and maybe a customized JavaFX version).
That's awesome to hear! If you could share any working hello-world repos or similar using Gluon and SQLite, I would massively appreciate it. About 5 years ago I developed a desktop GUI app with TornadoFX, Spring Boot, and SQLite, and it turned out to be incredibly solid; I've always wanted to go back and find a quick template repo with Gluon so I can do more small desktop apps again.
I'm working on it but have nothing to share yet. I think Gluon itself has a few starter/example projects. The Xerial SQLite library already ships native-image configuration files in its build, so no extra config is required. I'm still trying to get Jdbi to work fully with native-image, and once that is done I will create a PR there. But my time is limited over the next week or two.
Also, I'm using plain JavaFX with Kotlin instead of TornadoFX. Mostly because I'm not sure about the health of the TornadoFX project and I don't like some of its abstractions.
Do you have a link to any example repos, by chance? The need has slowly been growing for some CLI tools I want to write, but I haven't taken the plunge yet. I'm most curious about peeking at your Maven setup.
I don't have anything public right now, but the setup is pretty simple (although I use Gradle with the Kotlin DSL instead of Maven). It's your standard Kotlin or Java setup that builds a jar with dependencies included, plus one extra build step to invoke `native-image` from the Graal SDK.
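As a rough sketch of that extra step in a Gradle Kotlin DSL build (the task name, jar name, and output name are all made up; it assumes an uber-jar task and `native-image` on the PATH):

```kotlin
// build.gradle.kts fragment: call native-image after building the uber-jar
tasks.register<Exec>("nativeImage") {
    dependsOn("shadowJar") // hypothetical uber-jar task
    commandLine(
        "native-image",
        "-jar", "build/libs/myapp-all.jar", // jar with dependencies included
        "--no-fallback",                    // fail instead of bundling a JVM
        "myapp"                             // name of the output binary
    )
}
```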
I tried Graal/Truffle last year to make a toy language of my own. Unfortunately, the state of the documentation and tutorials was simply not good enough, and I had to give up. I think for someone who is a Java/Graal enthusiast, the Truffle documentation may be enough, but for someone like me, who knows just the basics of Java and its ecosystem, the Truffle language-implementation docs seemed woefully inadequate.
It's a shame, though, because what I did understand of Truffle seemed very interesting. I do plan on trying Truffle again in a few years, when hopefully the community size and the state of the documentation have improved.
Do you have any feedback on how we could improve the docs? If so, please let us know.
I believe the easiest way to start a new Truffle language implementation is to fork SimpleLanguage [1] and turn it into your language. Did you try to do that?
1. Starting post-AST generation was an abrupt start. An end-to-end tutorial, from parsing to a working compiler for a tiny language like Lua, Lox, or Wren, would be very helpful. You need not deep-dive into the parsing part; just give enough that we can follow along. Another option is to continue an existing tutorial series, like Lox from "Crafting Interpreters". That way you don't have to focus on parts that are not Truffle-specific, yet users can follow along.
2. Just going through the existing Java code of SimpleLanguage was extremely difficult for a newbie to Java like me. I would much prefer a readable tutorial which explains all the concepts in more detail.
3. More language examples, please. As I said before, if possible add a couple more languages like Lua. I believe Lua is already a Truffle language; just an accompanying tutorial is missing. I remember when I tried to read through the code of Ruby, Lua, and SimpleLanguage, they all started off very differently and I just got lost.
4. More tutorials. Outside of the main docs, I found only one comprehensive tutorial. I think it would be great if the key members made it a priority to add smallish tutorials on things like Forth or Brainfuck in other blogs and articles.
5. Tutorials in Kotlin! I am new to the JVM, but I am digging Kotlin as a saner alternative. I think having some tutorials in Kotlin would be a great help.
The docs, as they stand:
a. Point people to the papers to understand how the 'magic' happens.
b. Are really intended for people who already know how to build language VMs from scratch.
If you've never implemented a language interpreter at all, then you're going to struggle. Arguably not Truffle's fault, but hey, you can't drop the cost of making SOTA language runtimes by a couple orders of magnitude and then be surprised when a whole lot of newbies show up wanting to try their hand at it :)
I am sorry to hear you did not succeed in building your own language. I have to admit it is quite a steep learning curve. If you decide to retry do not hesitate to hit us up with questions on the community slack. We are a helpful bunch.
I was in a similar place. I tried implementing a toy language with it, but it felt like I had to be very comfortable in Java-land to be able to use it, and I just didn't have the desire to put in that much effort :P
But I'm sure it's a great technology and with time the docs will get better and better.
I did and to be fair the people there were very helpful. It's just that I needed more hand holding than the average person (as I was new to Java as well)
As mentioned by sibling poster, it really is not a setback, they just have different release schedules.
Graal is doing pretty well; it is heavily used by Twitter, for example. And the polyglot part is simply novel and very exciting: it can basically optimize across language boundaries. A Python call to a C function might get inlined and be much faster than native FFI. The Enterprise Edition also has a managed mode for LLVM bitcode, making, for example, existing C code use the managed heap for allocations. TruffleRuby, Graal's Ruby implementation, is the fastest Ruby runtime by a huge margin, and the JS runtime, developed by comparatively few people (compared to V8 and the like), can achieve performance similar to it.
> TruffleRuby, Graal’s ruby implementation is the fastest ruby runtime by a huge margin
This is interesting to me as someone currently working on a Rails app. I'm surprised it hasn't become the de facto Ruby implementation, given the benchmarks shown on the site. What are the drawbacks? Just Oracle ownership? Or does it require an enterprise subscription?
> This is interesting to me as someone working on a Rails app currently. I’m surprised it hasn’t become the defacto ruby implementation given the benchmarks shown on the site. What are the drawbacks?
It is not supported on all platforms and listed as “experimental” (defined as “features [which] are being considered for future versions of GraalVM and are not meant to be used in production”) on the platforms (Linux AMD64/ARM64, macOS [AMD64, presumably, since there is a separate listing for macOS ARM64 where it is not supported]) that it is supported on. [0] That’s probably the main thing keeping it back from general adoption. (It’s also ~2 years behind MRI on features, but that’s less important.)
There are also notes in the TruffleRuby repo and docs that it:
(1) is possibly not fast on Rails and large programs (which are probably the bulk of where people would be looking for performance gains),
(2) has some minor incompatibilities (individually no big deal, but the aggregate could be a factor), and
(3) implements Fibers (Ruby’s main lightweight concurrency construct) with full native threads, so they don’t have the lightweight performance they do in CRuby.
This is partly covered in TruffleRuby's GitHub README:
> TruffleRuby can run Rails and is compatible with many gems, including C extensions. However, TruffleRuby is not 100% compatible with MRI 3.0 yet. Please report any compatibility issues you might find. TruffleRuby passes around 97% of ruby/spec, more than any other alternative Ruby implementation.
> TruffleRuby might not be fast yet on Rails applications and large programs. Notably, large programs currently take a long time to warm up on TruffleRuby, and this is something the TruffleRuby team is currently working on. Large programs often involve more performance-critical code, so there is a higher chance of hitting an area of TruffleRuby which has not been optimized yet.
However, I think this is slightly outdated, because I vaguely remember a talk from one of the Ruby conferences about Rails being slightly faster with Truffle. I can't say for sure why it hasn't been widely picked up yet; maybe other players are looking forward to JITs within Ruby itself, or don't like the two-tier model of GraalVM (you can run it for free, but there is also an enterprise version with additional performance improvements).
20 years ago, I was a lurker on the JikesRVM (IBM's research project building a JVM in Java) mailing list. JikesRVM is still around, but not terribly active last I checked. Oracle is quite late to the game, and I'm just generally sad that Oracle ended up owning Java instead of IBM (or perhaps Google, but my trust in Google seems to be monotonically decreasing).
Last I checked, Oracle's AOT compiler threw away the metadata necessary for HotSpot to trace its way through the native code, so if your AOT-compiled code was part of a hot loop, HotSpot couldn't perform any inlining or other runtime optimizations on it. Fixing this seems like a first step toward a high-performance JVM written in Java with startup/warmup time comparable to a JVM written in C++.
While we're at it, Erlang/Elixir/BEAM's NIFs seem like the right way to implement native-code extensions to your VM. You write implementations in your language that get replaced with native versions if the native library loads successfully. Maybe your non-native implementation is just a stub that throws if it's called. With a few lines at the top of your module, you can attempt to load one or more native libraries and handle/ignore any errors. This makes it much easier to degrade gracefully if a particular native library isn't available on the local machine (or even available for the platform). It's a real pain to fall back to a Java implementation if the JNI implementation isn't available for any reason.
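A rough sketch of that pattern on the JVM, for illustration (the library name and checksum are hypothetical; the point is the load-or-fall-back shape that BEAM gives you almost for free):

```java
// Hypothetical NIF-style fallback: use the native implementation if the
// library loads, otherwise silently degrade to a pure-Java version.
public class Checksum {
    private static final boolean NATIVE_AVAILABLE = tryLoadNative();

    private static boolean tryLoadNative() {
        try {
            System.loadLibrary("fastchecksum"); // hypothetical native library
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false; // not available on this machine/platform
        }
    }

    // Bound via JNI when libfastchecksum is present.
    private static native int nativeCrc(byte[] data);

    // Pure-Java fallback with identical semantics.
    private static int javaCrc(byte[] data) {
        int acc = 0;
        for (byte b : data) acc = (acc * 31 + b) & 0x7fffffff;
        return acc;
    }

    public static int crc(byte[] data) {
        return NATIVE_AVAILABLE ? nativeCrc(data) : javaCrc(data);
    }
}
```

With BEAM you get this dispatch for free at module load time; on the JVM you have to wire it up by hand, and the JNI half still needs per-platform binaries.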
I worked on Maxine when the IBM acquisition of Sun rumors hit in 2009. I wasn't the only one excited. When it fell through and Oracle stepped in, there was a lot of gloom. I am pleasantly surprised that Oracle invested so much in the project afterwards.
Point of order, since native-image is a party piece for Graal: this AOT is more in the sense of JIT (but, well, ahead of time), rather than a self-standing binary a la native-image, right?
GraalVM has always been built and maintained outside of OpenJDK. Starting with Java 9, Oracle began including it with OpenJDK to provide AOT compilation, but as the article says, it wasn't widely adopted by OpenJDK users.
Most people who wanted AOT used GraalVM directly, so for most people nothing's changed. Those that did use the OpenJDK AOT will have to migrate to GraalVM's tooling post Java 15.
Yeah could be. I was just thinking that organizationally being part of OpenJDK sounds like a big thing and pulling back from that sounds like a reversal of a big thing. I'm sure the team wasn't extremely happy about it.
That move was much more reflective of the organizational politics inside Oracle than of the tech itself. Graal turned into a separate (some might say competing) implementation of the Java platform, with its own unique vision and entirely separate teams, rather than simply feeding code into OpenJDK via its own processes.
Swapping out a part of a JVM as critical as the JIT compiler with a from-scratch rewrite would be a tremendously difficult and risky project even if the rewrite was done by the exact same team. Doing it when the rewrite is in a different language, all the knowledge is in a different team, in a different part of the company organizationally, in different offices and parts of the world, and also their implementation strategy to eliminate performance drops involves a completely different JVM implementation (SVM-in-HotSpot) ... well, the maintenance, budgetary and training complexities of that alone make it a tough one to digest.
I'm very hopeful they'll eventually figure out a path forward. It doesn't make much sense for Oracle to maintain three different JIT compilers (C1, C2, Graal). The biggest sticking point technically is probably the need to use native-image to produce libgraal. It'd be kinda weird for OpenJDK/HotSpot to ship two different JVMs, one that's used to implement the other. If they can find a way to AOT compile libgraal with acceptable performance, without a reliance on native-image, that'd probably be a good next step.
If you love GraalVM and very small app sizes, and want a nice dependency-injection framework to break your code up into testable modules, check out https://quarkus.io . Be sure to scroll down to the memory and start-time benchmarks :)
Micronaut (from the Grails developers) was an early Graal adopter; it is very actively developed and has a growing ecosystem around it.
However, Quarkus has many distinguishing features beyond a nice dependency-injection framework. First of all, it is a MicroProfile implementation. MicroProfile is a collaborative effort among many large companies to adapt and evolve pieces of the enterprise Java stack for microservices. Large parts of your application depend on the MicroProfile API rather than APIs specific to a particular framework like Quarkus, and so are portable across MicroProfile implementations (like Helidon or OpenLiberty). For smaller individual services the benefits are minimal, but it is a major advantage for larger corporations making decade-long investments.

Also, Quarkus is built atop Vert.x, which provides a non-blocking event loop and an actor(-like) framework for the JVM. Developers can choose not to bother with Vert.x at all and use the MicroProfile APIs exclusively, but people who like the actor-oriented programming model can take advantage of it (perhaps for specific use cases). Lastly, as you'd expect from Red Hat, there is out-of-the-box integration with Hibernate.
Regarding the scene, Spring is also pushing into the Graal ecosystem with native features, but they are still a bit late to the party. They have a huge ecosystem to support and can't afford breaking changes, so it is understandable that good Spring support for Graal is an evolving multi-year effort. But it does mean that early adopters like Micronaut and Quarkus have an edge.
Also, irrespective of how well mainstream adoption of Graal pans out, I am quite happy about the growing initiative to move start-time work to build time, which improves start times for non-Graal deployments too.
My world revolves around Common Lisp (with Python for deep learning), but Clojure is also an important language to me because of both professional use and I wrote a Clojure AI book.
What is the GraalVM + Clojure situation? A quick web search shows some use cases. I find the idea of using highly programmer-efficient Lisp languages and then building small, fast native applications compelling. That said, LispWorks, SBCL, and Allegro CL are all good for building standalone apps.
SCI (https://github.com/babashka/sci) is a Clojure interpreter that allows you to evaluate Clojure code even inside the final native binary, and it is used in all of the above projects.
Feel free to bug me with questions in the graalvm channel on Clojurians Slack.
I want to slap a bunch of side projects on a small VPS. With Go, this is no problem. Scp the binary (which includes an embedded SQLite), configure it to run as a systemd service, add it to my reverse proxy rules.
I’d love to use Clojure for this, but it doesn’t seem well suited to running in a small, shared environment. Or is it now that there’s AOT?
Pretty similar with Clojure, though because it's not the default mode of operation, more stuff will break. Still, we do this all the time and it's mostly fine.
That said, it seems like you should be able to do something pretty similar fairly easily with jars and the JVM (though you may not want to run lots of JVMs in the manner you're describing; if you're calling these processes in a one-shot, CGI-style fashion, the JVM's startup time and memory requirements are going to be annoying).
GP also linked babashka, which may be closer to your interests, as it's an interpreter for small scripts like these, but it also comes with a mechanism (pods) for loading dependencies (like SQLite).
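For what it's worth, the systemd half of GP's workflow is identical for a native-image binary and a Go one; a minimal unit sketch (paths and names are made up):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My side project
After=network.target

[Service]
ExecStart=/opt/myapp/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` as usual.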
I'm running a side project in Clojure (a guess-the-number-of-covid-cases lottery, https://koronalotek.pl) on a small VPS (Scaleway DEV1-S), and it is no problem either. It runs happily in the 2 GB RAM that the server has; I just installed the JVM, configured a nginx reverse proxy, and am using the standard cli-tools in a screen session to run it.
If you have a clojure project that builds to a single .jar file at hand, you can give it a quick shot using the native-image command from the GraalVM. If you're not using reflection-heavy libraries chances are it will just work and spit out a binary.
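Concretely, the invocation can be as small as this (the jar and binary names are hypothetical; assumes a GraalVM install with the native-image tool added):

```shell
# Turn an uberjar into a self-contained native binary.
native-image -jar target/myapp-standalone.jar \
  --no-fallback \
  myapp

./myapp   # run the resulting binary
```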
It can compile Clojure to native executables, but do note that Graal's AOT part doesn't promise better performance, just faster startup time and reduced memory usage. A JIT compiler (either the usual OpenJDK HotSpot or Graal's JIT compiler) will very likely beat the AOT-compiled version for long-running processes.
"Using JIT compilers for long running processes (either the usual OpenJDK’s hotspot or Graal’s JIT compiler) will very likely beat the AOT compiled version for long running processes."
The JIT has runtime and profile guided data to determine what to do. It can observe behaviors about how the program is actually being used and better optimize or even remove code that isn’t going to execute.
I'm running Clojure code compiled with GraalVM native for AWS Lambda. Cold start times are low and performance is decent enough, even with CE edition. The whole process is reasonably painless through the use of Holy Lambda https://github.com/FieryCod/holy-lambda
Not quite what you were asking for, but I wanted to chip in as another happy Clojure + GraalVM native user.
I haven't touched GraalVM in a couple years, but the big-ish project that I did with it in Clojure was mostly painless, except that I had to use type hinting a lot more frequently, due to the reduced reflection capabilities of GraalVM.
I think Project Loom support will open up green thread support for more Truffle-based languages. I could see Go, Haskell, Erlang, Scheme, and Concurrent ML -style languages taking advantage of that infrastructure.
I'm also hoping for official support for static linking. Right now, it's an undocumented feature done through the Feature API.
Yes, Shopify runs on TruffleRuby. It is the fastest Ruby VM by far and can seamlessly call state-of-the-art Java libraries.
GraalVM is the VM to rule them all, unifying currently incompatible library ecosystems worth billions of dollars.
Despite being the computer-science breakthrough of the decade, people aren't getting it yet.
I believe so. Especially since optimizing across language barriers could mean that a C function gets inlined, resulting in possibly faster execution than native FFI.
The problem, to my knowledge, is that such a complex platform can come to depend on some buggy side effect down the line, so perfect interop is hard.
Sure, but unless I’m mistaken the Rails ones tend to give Truffle a major advantage compared to most real-world production use because they use the sqlite3 in-memory DB rather than an out-of-process DB—so the DB is fully embedded in a C extension, which is subject to the Truffle JIT—and the I/O of normal DB access is avoided.
I opened a ticket some time ago about performance with Jekyll and Liquid templates. At least in that case, YJIT was way faster. I'm happy to retest, though; anything that would make my Jekyll builds faster would help.
Just FYI: I don't know if your response was intended to deflect on an ETA, but some might perceive it that way, since you didn't indicate whether 2022 would be the year. Which might imply to some that it's not.