I think the memory safety aspects of capabilities kind of missed the boat (often intrusive and breaking for memory unsafe languages whilst superfluous in practice for memory safe languages). The memory safety stuff is better dealt with by lots of small programs that don't share memory.
It's the higher-level, logical capabilities like 'can perform this kind of access to this specific file for the duration of this call' that are much more interesting.
Lots of modern operating systems do have some kind of capability system - even the intents in modern mobile phones are a capability system - but it's something you could imagine benefitting from machine support, e.g. securely passing capabilities in syscalls in a microkernel and to peers in IPC.
> often intrusive and breaking for memory unsafe languages whilst superfluous in practice for memory safe languages
The Fil-C project (discussed in [0],[1]) has ported many programs to an emulated CHERI-like capability runtime, and shown that capabilities aren't actually that breaking in practice [0].
Another way of using CHERI would be in "Hybrid mode" with most of a program under a single capability, and using other capabilities for compartmentalisation.
In a system where you have only memory-safe languages, you'd still sometimes need to run older code from other languages, or code from external sources: and you can't always validate them 100%.
A couple of operating system projects based on Rust (such as Theseus [1]) solve this by running them in WASM instances. A CHERI capability would be fast hardware support for bounds-checking access to such a compartment.
Also, there is the problem of fast inter-process communication: Copying bytes vs. modifying PTEs — each method with different trade-offs.
With CHERI, you could potentially instead pass capabilities, and even share them by writing them in shared memory without involving the kernel.
> In a system where you have only memory-safe languages, you'd still sometimes need to run older code from other languages, or code from external sources: and you can't always validate them 100%.
One interesting angle on legacy code comes from Firefox. An image decoding library written in C couldn't be trusted, so it was built for WASM and then AoT-translated to machine code. It's effectively in a WASM sandbox, without a full WASM runtime.
But, frankly, I'm on team full speed ahead with memory safe languages. It's not just safer: the developer experience with Rust is so much better than with C/C++.
For capabilities, a special instruction only makes sense in the context of CPU memory access, and this is all about out-of-bounds C bugs. OS capabilities do not need new instructions.
And then, out-of-bounds memory access may be better solved with better programming languages, though of course we need to live with the legacy code.
I think what the parent means is we should be able to create syscall sandboxes within the same process (like a library not being able to do IO). Maybe I'm wrong but I think this could sort of be implemented with CHERI, by restricting syscalls to the official libc entry points (like OpenBSD) and requiring a capability pointer to access the functions.
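For a feel of the idea, here is a minimal sketch in plain Rust (not CHERI, and not a real syscall filter): it models the same discipline in software with an unforgeable token type whose private field means code outside the module cannot conjure one up, so a library that is never handed the token has no path to the "official entry points". All names here (`IoToken`, `write_line`) are hypothetical.

```rust
mod io_cap {
    // The field is private, so code outside this module cannot
    // construct an IoToken -- it must be handed one explicitly.
    pub struct IoToken(());

    impl IoToken {
        // Only the program's entry point should mint the root token.
        pub fn new_root() -> IoToken {
            IoToken(())
        }
    }

    // An "official entry point": unreachable without the token.
    pub fn write_line(_cap: &IoToken, msg: &str) {
        println!("{msg}");
    }
}

// A library function that was not given the token: it cannot do I/O
// through io_cap, no matter what it tries at the type level.
fn pure_library_code(x: i32) -> i32 {
    x * 2
}

fn main() {
    let root = io_cap::IoToken::new_root();
    let y = pure_library_code(21);
    io_cap::write_line(&root, &format!("result = {y}"));
}
```

The difference with CHERI is that hardware would make the token genuinely unforgeable even for unsafe or foreign code, rather than relying on the type system.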
I've only ever seen three reasons for Midori to shut down:
1) they were hitting C# limitations (and started working on custom compilers etc) (and people involved in Midori say Rust has already shipped things they failed to do)
2) there was a bit too much academic overeagerness, e.g. software transactional memory will kill any project that attempts it
Midori is certainly an interesting project, but no; I meant the old "code access security" model that .NET Framework had.[0][1] Administrators (and other code) could restrict you from doing certain operations, and the runtime would enforce it. It was removed in .NET Core.[2]
Okay, that looks really funky. Like, libraries explicitly state what access they have ambient authority to use, and then callers can be constrained by an access control list, or something like that. Really weird design.
I'd love to see someone put genuine thought into what it would take to say that e.g. a Rust crate has no ambient authority. No unsafe, applied transitively. For example, no calling std::fs::open, must pass in a "filesystem abstraction" for that to work.
I think the end of that road could be a way to have libraries that can only affect the outside world by values you pass in (=capabilities), busy looping, deadlocking, or running out of memory (and no_std might be a mechanism to force explicit use of an allocator, too).
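As a rough sketch of what "no ambient authority" could look like in today's Rust (all names here are made up for illustration): instead of calling into `std::fs`, the crate is written against a filesystem trait, and the caller decides exactly how much of the world to grant.

```rust
use std::collections::HashMap;

// Hypothetical "filesystem capability" trait: a crate written against
// this has no ambient authority -- it can only reach whatever
// implementation the caller passes in.
trait Filesystem {
    fn read(&self, path: &str) -> Option<String>;
}

// The application chooses the grant. Here: an in-memory map, so the
// constrained code can only ever see these entries.
struct InMemoryFs {
    files: HashMap<String, String>,
}

impl Filesystem for InMemoryFs {
    fn read(&self, path: &str) -> Option<String> {
        self.files.get(path).cloned()
    }
}

// A capability-style library function: its only access to "the world"
// is the `fs` value it was handed.
fn count_lines(fs: &dyn Filesystem, path: &str) -> Option<usize> {
    Some(fs.read(path)?.lines().count())
}

fn main() {
    let mut files = HashMap::new();
    files.insert("notes.txt".to_string(), "a\nb\nc".to_string());
    let fs = InMemoryFs { files };

    assert_eq!(count_lines(&fs, "notes.txt"), Some(3));
    assert_eq!(count_lines(&fs, "/etc/passwd"), None); // never granted
}
```

The hard part, of course, is the "applied transitively" bit: nothing today stops a dependency of the crate from calling `std::fs` directly, which is exactly what a no-ambient-authority guarantee would have to rule out.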
(Whether that work is worth doing in a world with WASM+WASI is a different question.)
Language safety helps get you 80% of the way there, but you are still running software on top of fundamentally unsafe hardware. Companies and agencies are pouring, and increasingly will pour, money into hardware that gives certain safety and security guarantees.
I think that any design firm that works on a RISC-V server CPU should seriously consider integrating CHERI support.
If only because it could be a selling point, potentially making RISC-V more competitive in the server marketplace.
Curious if this is similar to the capabilities FreeBSD has had for ages?
But I wish they had chosen pledge(2)/unveil(2) from OpenBSD instead. Adding that to your programs is so easy even I can do it.
I know someone tried to add that to Linux, but IIRC it was in user space and harder to use. I think pledge/unveil really should be in the Linux kernel.