I wonder to what degree it’s actually necessary to explicitly manage memory spilling to disk like this. Want a unified interface over non-durable memory + disk? There already is one: memory with swap.
To get good performance from this strategy your memory layout already needs to be optimized to page boundaries etc. to have good sympathy for the underlying swap system, and you can explicitly force pages to stay in memory with a few syscalls.
I prefer this abstraction as it is more widely supported (I’ve had to deploy to hosts that intentionally kill processes when they swap) and it results in development assuming that access may be at disk speed. When you rely on swap, I often see developers assuming everything is accessible at memory speed and then being surprised when swap causes sudden degradation.
Yeah, I mean if you have a latency-sensitive workload, you don’t want page faults and swapping introducing hidden latency spikes - it kills your P99 latencies.
Yeah, but you can pin important pages in RAM with the mlock family of syscalls, plus friends like move_pages for explicit page placement. This is what Materialize does, as far as I understand it.
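For reference, pinning looks roughly like this - a minimal sketch using the libc crate; real code would check RLIMIT_MEMLOCK and handle errors properly:

    // Sketch: pin a buffer in RAM with mlock(2) so it can't be swapped out.
    fn main() {
        let buf = vec![0u8; 4 * 1024 * 1024]; // 4 MiB we want to keep resident

        // Safety: pointer and length describe memory owned by `buf`, which outlives the call.
        let rc = unsafe { libc::mlock(buf.as_ptr() as *const libc::c_void, buf.len()) };
        if rc != 0 {
            eprintln!("mlock failed: {}", std::io::Error::last_os_error());
        }

        // ... use buf; while locked it stays resident ...

        unsafe { libc::munlock(buf.as_ptr() as *const libc::c_void, buf.len()) };
    }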
Hi, foyer's author here. Page cache and swap are indeed good general-purpose strategies that are continuously evolving. There are several reasons why foyer manages memory and disk itself rather than directly relying on those mechanisms:
1. Leverage asynchronous capabilities: Foyer exposes async interfaces so that while waiting for IO and other operations, the worker can still perform other tasks, thereby increasing overall throughput (see the sketch after this list). If swap is used, a page fault causes a synchronous wait, blocking the worker thread and degrading performance.
2. Fine-grained control: As a dedicated cache system, foyer understands better than a general-purpose mechanism like the operating system's page cache which data should be cached and which should not. This is also why foyer has supported direct I/O since day one, to avoid duplicating the page cache's work. Foyer can use its own strategies to decide earlier when data should be cached or evicted.
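To make point 1 concrete, here is a tiny illustrative sketch (plain tokio, not foyer's actual API): while one task awaits disk IO, the runtime keeps driving other tasks on the same worker thread, whereas a page fault on swapped memory would stall that thread outright.

    // Requires the tokio crate with fs, time, rt and macros features enabled.
    use std::time::Duration;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let slow_read = async {
            // Stand-in for an awaited disk/cache IO (e.g. reading a spilled entry).
            tokio::fs::read("some-cached-entry.bin").await
        };
        let other_work = async {
            // Other tasks keep making progress on the same worker while IO is pending.
            tokio::time::sleep(Duration::from_millis(1)).await;
            42u64
        };
        // Both futures are driven concurrently; awaiting IO does not block the thread.
        let (bytes, answer) = tokio::join!(slow_read, other_work);
        println!("read {} bytes, computed {}", bytes.map(|b| b.len()).unwrap_or(0), answer);
        Ok(())
    }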
I use Foyer in-memory (not hybrid) in ZeroFS [0] and had a great experience with it.
The only quirk I’ve experienced is that in-memory and hybrid modes don’t share the same invalidation behavior. In hybrid mode, there’s no way to await a value being actually discarded after deletion, while in-memory mode shows immediate deletion.
This is interesting. I would be curious to try a setup where I keep a local hybrid cache and transition blocks to deep storage for long-term archival via S3 lifecycle rules.
Some napkin math suggests this could be a few dollars a month to keep a few TB of precious data nearline.
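Rough figures, assuming S3 Glacier Deep Archive at roughly $1 per TB-month: keeping 3-4 TB nearline works out to around $3-4/month for storage, before request and retrieval fees.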
Restore costs are pricey, but hopefully this is something that's only hit in case of true disaster. Are there any techniques for reducing egress on restore?
No idea; if I see a Medium link I just ignore it. Substack is heading the same way for me too; it seems to be self-promotion, shallow takes, and spam more than anything real.
The page pops up a "subscribe to author" modal shortly after it loads. You may have partially blocked it, so you won't see the modal, but it still prevents scrolling.
Firefox has a lot of weird little pop-up ads these days. It seems like a very recent phenomenon. Is this actually Firefox doing this, or some kind of plug-in that was accidentally installed?
Same. Hit Escape shortly after the page loads to stop whatever modal is blocking scroll. I don't see the modal, so it's likely blocked by uBlock, but it still stops scrolling.
Foyer is a great open-source contribution from RisingWave.
We built an S3 read-through cache service for s2.dev so that multiple clients could share a Foyer hybrid cache with key affinity: https://github.com/s2-streamstore/cachey
Yes, currently it has its own /fetch endpoint that then makes S3 GET(s) internally. One potential gotcha, depending on how you are using it: an exact byte "Range" header is always required so that the request can be mapped to page-aligned byte-range requests on the S3 object. But with that constraint, it is feasible to add an S3 shim.
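To illustrate the page-alignment idea (a sketch of the general technique, not cachey's actual code; the 64 KiB page size is an assumption):

    // Round a requested byte range out to fixed-size page boundaries so each
    // page can be cached and fetched from S3 independently.
    const PAGE_SIZE: u64 = 64 * 1024; // assumed page size for the example

    fn page_aligned(start: u64, end_inclusive: u64) -> (u64, u64) {
        let aligned_start = (start / PAGE_SIZE) * PAGE_SIZE;
        // Round the inclusive end up to the last byte of its page.
        let aligned_end = ((end_inclusive / PAGE_SIZE) + 1) * PAGE_SIZE - 1;
        (aligned_start, aligned_end)
    }

    fn main() {
        // "Range: bytes=100000-200000" would be served from pages 1..=3 (64 KiB pages).
        let (s, e) = page_aligned(100_000, 200_000);
        assert_eq!((s, e), (65_536, 262_143));
        println!("fetch S3 range bytes={s}-{e}");
    }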
It is also possible to stop requiring the header, but I think it would complicate the design around coalescing reads – the layer above foyer would have to track concurrent requests to the same object.
Yes, definitely. S3 has a time to first byte of 50-150ms (depending on how lucky you are). If you're serving from memory that goes to ~0, and if you're serving from disk, that goes to 0.2-1ms.
It will depend on your needs, though, since some use cases won't want to trade away S3's ability to serve arbitrary amounts of throughput.
In that case you run the proxy service load-balanced to get the desired throughput, or run a sidecar/process in each compute instance where the data is needed.
You are limited anyway by the network capacity of the instance you are fetching the data from.
These are, effectively, different use cases. You want to use (and pay for) Express One Zone in situations in which you need the same object reused from multiple instances repeatedly, while it looks like this on-disk or in-memory cache is for when you may want the same file repeatedly used from the same instance.
Is it the same instance? RisingWave (and similar tools) are designed to run in production on a lot of distributed compute nodes for processing data, serving/streaming queries, and running control planes.
Even a single query will likely run on multiple nodes, with distributed workers gathering and processing data from the storage layer; that is the whole idea behind MapReduce, after all.
"Zero-Copy In-Memory Cache Abstraction: Leveraging Rust's robust type system, the in-memory cache in foyer achieves a better performance with zero-copy abstraction." - what does this actually mean in practice?
Hi, foyer's author here. The "zero-copy in-memory abstraction" is compared to Facebook's CacheLib.
CacheLib requires entries to be copied into CacheLib-managed memory when they are inserted. That simplifies some design trade-offs, but it may affect overall throughput when the in-memory cache is involved more than the NVM cache. FYI: https://cachelib.org/docs/Cache_Library_User_Guides/Write_da...
Foyer only requires entries to be serialized/deserialized when writing to / reading from disk. The in-memory cache doesn't force a deep memory copy.
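Conceptually it looks something like this (a sketch of the idea, not foyer's actual API): a get hands back a reference-counted handle to the stored value, so nothing is deep-copied into cache-managed memory on insert or read.

    use std::collections::HashMap;
    use std::sync::Arc;

    struct Cache<K, V> {
        map: HashMap<K, Arc<V>>,
    }

    impl<K: std::hash::Hash + Eq, V> Cache<K, V> {
        fn new() -> Self {
            Self { map: HashMap::new() }
        }
        fn insert(&mut self, key: K, value: V) {
            // The value is moved in once; no further deep copies happen.
            self.map.insert(key, Arc::new(value));
        }
        fn get(&self, key: &K) -> Option<Arc<V>> {
            // Cloning the Arc bumps a refcount; the payload itself is shared, not copied.
            self.map.get(key).cloned()
        }
    }

    fn main() {
        let mut cache = Cache::new();
        cache.insert("blob", vec![0u8; 1 << 20]); // 1 MiB payload
        let handle = cache.get(&"blob").unwrap();
        println!("got {} bytes without copying them", handle.len());
    }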
I see, thanks! I don't have much experience in Rust, aside from some pet projects. Which features of Rust's type system are needed to implement such behavior? (It's unclear to me why I wouldn't be able to do the same in, for example, C++.)
I think the article could use more on the cache invalidation and write-through (?) behavior. Are updates to the same file batched or written back to S3 immediately? Do you do anything with write conflicts, which one wins?
The article hints that cache invalidation is driven by the layers higher up the stack, relying on domain knowledge.
For example, the application may decide that all files are read-only until they expire a few days later.
Not clear about the write cache. My guess is that you will want some sort of redundancy when caching writes, so this goes beyond a library and becomes a service, unless the domain level can absolve you of this concern by having redundancy elsewhere in the system (e.g. feed data from a durable store and replay if you lost some S3 writes).
Storage Gateway is an appliance that you connect multiple instances to; this appears to be a library that you use in your program to coordinate caching for that process.
S3 Mountpoint exposes a POSIX-like file system abstraction for you to use with your file-based applications. Foyer appears to be a library that helps your application coordinate access to S3 (with a cache), for applications that don't need files and whose code you can change.
> foyer draws inspiration from Facebook/CacheLib, a highly-regarded hybrid cache library written in C++, and ben-manes/caffeine, a popular Java caching library, among other projects.
> Cost is reduced because far fewer requests hit S3
I wonder. Given how cheap S3 GET requests are, you need a massive number of requests before provisioning and maintaining a cache server becomes cheaper than the alternative.
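For scale (rough figures, assuming standard S3 GET pricing of about $0.0004 per 1,000 requests): a million GETs is roughly $0.40, so it takes on the order of a billion GETs per month before request charges alone rival the cost of a dedicated cache node. In practice the win is usually latency and throughput rather than the GET bill.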