The Jane Street folks, who created this, also did an interesting episode[0] of their podcast where they discuss performance considerations when working with OCaml. What I was curious about was applying a GC language to a use case that must have extremely low latency. It seems like an important consideration, as a GC pause in the middle of high-frequency trading could be problematic.
I actually asked Ron Minsky about exactly this question on Twitter[0]:
Me: [W]hy not just use Rust for latency sensitive apps/where it may make sense? Is JS using any Rust?
Minsky: Rust is great, but we get a lot of value out of having the bulk of our code in a single language. We can share types, tools, libraries, idioms, and it makes it easier for folk to move from project to project.
And we're well on our way to getting the most important advantages that Rust brings to the table in OCaml in a cleanly integrated, pay as you go way, which seems to us like a better outcome.
There are also some things that we specifically don't love about Rust: the compile times are long, folk who know more about it than I do are pretty sad about how async/await works, the type discipline is quite complicated, etc.
But mostly, it's about wanting to have one wider-spectrum language at our disposal.
Well on their way... by having to write a ton of C for the interpreter. I think it’s really imprudent for them not to be using Rust yet for critical sections.
The knobs aren't necessarily on the GC; rather, they're language features.
With the Go compiler toolchain you have static allocation on the stack and in globals, compiler flags to track down when references escape to the heap, the option to allocate manually via OS bindings and wrap that memory in slices with the unsafe package, and an assembler that ships with the toolchain if you learn to use it. And regardless of the "CGO is not Go" memes, cgo is another tool to reach for if assembly isn't your thing.
Ohh right yes, now I get what you mean. My brain just immediately went for "GC knobs" when you mentioned "knobs", but in my defense I'm running a 40°C fever so I should probably not be commenting at all
GC compactions were indeed a problem for a number of systems. The trading systems in general had a policy of not allocating after startup. JS has a library called "Zero" that provides a host of non-allocating ways of doing things.
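I don't know Zero's internals or API, but the general "no allocation after startup" pattern looks roughly like this in plain OCaml — all the names and types here are made up for illustration; the point is just that every buffer is created once before the hot path starts and only mutated afterwards:

    (* Hypothetical sketch of the "no allocation after startup" pattern;
       not the actual Zero API. Buffers are created once at startup and
       only mutated in place from then on. *)

    type order_book = {
      prices : float array;   (* preallocated, fixed capacity *)
      sizes  : float array;
      mutable depth : int;
    }

    let create ~capacity =
      { prices = Array.make capacity 0.0;
        sizes  = Array.make capacity 0.0;
        depth  = 0 }

    (* Hot path: overwrite slots in place. Float arrays store their
       elements unboxed, so no new heap blocks are created here. *)
    let update book ~level ~price ~size =
      book.prices.(level) <- price;
      book.sizes.(level) <- size;
      if level >= book.depth then book.depth <- level + 1

    let () =
      let book = create ~capacity:1024 in   (* startup-time allocation *)
      update book ~level:0 ~price:101.25 ~size:500.0;
      Printf.printf "best bid %.2f x %.0f\n" book.prices.(0) book.sizes.(0)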
I was bit by the same spider that gave you web tunnel vision. In any case, I find OCaml too esoteric for my taste. F# is softer and feels more... modern, perhaps? But I don’t think GC can be avoided in dotnet.
> This is what I like to call a dialect of OCaml. We speak in it sometimes and sometimes we gently say it’s zero alloc OCaml. And the most notable thing about it, it tries to avoid touching the garbage collector ...
> What I was curious about was applying a GC language to a use case that must have extremely low latency. It seems like an important consideration, as a GC pause in the middle of high-frequency trading could be problematic.
Regarding a run-time environment using garbage collection in general, not OCaml specifically, GC pauses can be minimized with parallel collection algorithms such as those found in the JVM[0]. They do not provide hard guarantees, however, so over-provisioning system RAM may also be needed in order to achieve the required system performance.
Another, more complex approach is to over-provision the servers such that each can drop out of the available pool for a short time, thus allowing "offline GC." This involves collaboration between request routers and other servers, so it may not be worth the effort if a deployment can financially support over-provisioning servers such that there is always an idle CPU available for parallel GC on each.
You just let the garbage accumulate and collect it whenever markets are closed. In most cases where you need ultra-low latency in trading, you have very well-defined time constraints (market open/close).
Maybe it's different for markets that are always open (crypto?), but most HFT happens during regular market hours.
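For OCaml specifically, a rough sketch of that pattern with the stdlib Gc module could look like this — the numbers are purely illustrative, not tuned values:

    (* Illustrative sketch: bias the OCaml GC toward doing as little work
       as possible during trading hours, then pay the cost after the close.
       The parameter values are made up; real settings need measurement. *)

    let before_open () =
      Gc.set { (Gc.get ()) with
               Gc.minor_heap_size = 64 * 1024 * 1024; (* huge minor heap, in words *)
               space_overhead     = 10_000;           (* major GC runs very rarely *)
               max_overhead       = 1_000_000 }       (* never compact on its own *)

    let after_close () =
      (* Now a long pause is fine: run a full major cycle and compact. *)
      Gc.full_major ();
      Gc.compact ()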
No, compared to not doing so many allocations that freeing them is time-consuming or expensive. Having allocations slow a program down means that there are way too many, probably due to being too granular and sitting in a hot loop. On top of that, it means everything is a pointer, and that lack of locality will slow things down even further. Allocating many millions of objects and chasing their pointers, versus doing a single allocation of a vector and running through it, can easily be a 100x difference.
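A toy OCaml sketch of the two shapes I mean (obviously contrived; the actual speedup depends entirely on the workload):

    (* The same million floats either as a linked list of boxed values
       or as one flat, unboxed float array. *)

    (* Many small heap blocks, every element behind a pointer: *)
    let sum_list xs = List.fold_left (fun acc x -> acc +. x) 0.0 xs

    (* One contiguous allocation, cache-friendly sequential access: *)
    let sum_array a =
      let s = ref 0.0 in
      for i = 0 to Array.length a - 1 do
        s := !s +. a.(i)
      done;
      !s

    let () =
      let n = 1_000_000 in
      let a = Array.init n float_of_int in
      let xs = Array.to_list a in
      Printf.printf "%f %f\n" (sum_array a) (sum_list xs)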
Probably? Locality becomes fairly important at scale. That’s why there’s a strong preference for array-based data structures in high-performance code.
If I was them I’d be using OCaml to build up functional “kernels” which could be run in a way that requires zero allocation. Then you dispatch requests to these kernels and let the fast modern generational GC clean up the minor cost of dispatching: most of the work happens in the zero-allocation kernels.
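Something like this is the shape I have in mind — purely a hypothetical sketch, not anything Jane Street actually does: the kernel is a tight loop over preallocated Bigarray buffers, and the dispatch layer around it is free to allocate and let the generational GC deal with it:

    open Bigarray

    type buf = (float, float64_elt, c_layout) Array1.t

    (* Kernel: a tight loop over preallocated off-heap buffers. In native
       code the intermediate floats stay unboxed, so the loop itself
       should not allocate on the OCaml heap. *)
    let scale_kernel ~(src : buf) ~(dst : buf) ~factor ~n =
      for i = 0 to n - 1 do
        Array1.unsafe_set dst i (factor *. Array1.unsafe_get src i)
      done

    (* Dispatcher: allocation here is fine; short-lived values are cheap
       for the minor GC to reclaim. *)
    let () =
      let n = 8 in
      let src = Array1.create Float64 C_layout n in
      let dst = Array1.create Float64 C_layout n in
      for i = 0 to n - 1 do Array1.set src i (float_of_int i) done;
      scale_kernel ~src ~dst ~factor:1.5 ~n;
      Printf.printf "dst.(7) = %f\n" (Array1.get dst 7)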
I think it is, but to be clear, I think (from my very limited experience, just a couple of years before leaving finance, and from people with more experience that I've talked with) that C++ is still a lot more common than any GC language (typically Java, since OCaml is even rarer). So it is possible, and some firms seem to take that approach, but I'm not sure exactly how besides turning off the GC or very specific GC tuning.
Here is a JVM project I saw a few years back; I'm not sure how successful the creators are, but they seem to use it in actual production. It's super rare to get even a glimpse at HFT infra from the outside, so it's still useful.
Are you aware of how many allocations the average program performs in the span of a couple of minutes? Where do you propose all of that memory lives in a way that doesn’t prevent the application from running?
Haven't looked at the link, but I think for a scenario like trading, where there are market open and close times, you can just disable the GC and restart the program after market close.
[0] https://signalsandthreads.com/performance-engineering-on-har...