While safe Rust may be relatively simple to write, and certainly easier to write than safe C, this article has somewhat added to my belief that unsafe Rust is far too difficult to write. Perhaps some of this is deliberate, as a kind of defence mechanism against people using it willy-nilly, but it still seems over-designed.
It's a pretty normal experience when looking at standard libraries (I mean, look at CPython, where you need to know C, CPython interpreter internals, and the C standard library, in addition to all the complicated metaclass magic you normally don't need to care about).
Like in Vec we have the following things most normal Rust code wouldn't do:
- `Unique<u8>` instead of `Unique<T>`, to micro-optimize code size
- `RawVec` and `RawVecInner`, due to the previous point, the internal structure of libstd, its relation to libcore/liballoc, a cleaner separation between unsafe and safe code, etc., which a "just my own vec" impl might not have
- the internal `Unique` helper type, to reuse code between various data structures in std
- the `NonNull` part, another micro-optimization which lets `Option<Vec<T>>` be the same size as `Vec<T>` (saving a discriminant word, though I think this optimization isn't guaranteed in this context)
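The `NonNull` point is easy to see concretely. A minimal sketch: `Option<NonNull<u8>>` is guaranteed to be pointer-sized (the forbidden null value encodes `None`), while `Option<*mut u8>` needs a separate discriminant. Whether the niche propagates through a struct like `Vec` that merely contains a `NonNull` is, as said above, not a documented guarantee, so that last line only prints the observed size.

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // NonNull<T> can never be null, so the compiler reuses the null
    // value to represent `None` ("niche optimization"): no extra tag.
    assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<*mut u8>());

    // A plain raw pointer has no forbidden value to reuse, so Option
    // needs a separate discriminant (plus alignment padding).
    assert!(size_of::<Option<*mut u8>>() > size_of::<*mut u8>());

    // Vec stores a NonNull internally; in practice the niche propagates,
    // but this is observed behavior, not a documented guarantee.
    println!("Option<Vec<u8>>: {} bytes", size_of::<Option<Vec<u8>>>());
}
```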
Like it could just be `Vec<T> { ptr: *const T, cap: usize, len: usize, _marker: PhantomData<T> }`, but that wouldn't have the maximized (and micro-optimized) fine tuning which tends to be in std libraries but not in random other code. (E.g. the code-size micro-optimization is irrelevant for most Rust code, but std goes into _all_ (std-using) Rust code, so there it's needed.)
But anyway, yes, unsafe Rust can be hard, especially if you go beyond straightforward C bindings and write clever micro-optimized data structures. It's also normally not needed.
But also straightforward C bindings can be very okay complexity-wise _as long as you don't try to be clever_ (which in part isn't even Rust-specific, it just tends to become visible in Rust).
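As a sketch of what "non-clever" looks like: declare the C function, then wrap it in one safe function whose argument type carries the invariant. `strlen` is used here only because every libc provides it; for a real binding you'd generate the declarations with bindgen.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Plain declaration of a C function from libc.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: the &CStr type already guarantees a valid,
// NUL-terminated string, so the unsafe block is one line with
// an obvious justification.
fn c_string_len(s: &CStr) -> usize {
    // SAFETY: CStr guarantees a valid, NUL-terminated pointer.
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(c_string_len(s), 5);
}
```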
I think most of the complexity here comes from trying to write as little unsafe code as possible (and not repeating any unsafe operation in multiple places, hence the many layered abstractions).
If you were to implement Vec without layering, it would be no more complicated than writing a dynamic array in C (though more verbose, due to all the unsafe blocks and unabbreviated names).
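For illustration, here is what such an unlayered Vec might look like (the name `MiniVec` is made up; this sketch aborts on allocation failure and skips zero-sized types, which real `Vec` must handle):

```rust
use std::alloc::{alloc, dealloc, realloc, Layout};
use std::marker::PhantomData;
use std::ptr;

pub struct MiniVec<T> {
    ptr: *mut T,
    cap: usize,
    len: usize,
    _marker: PhantomData<T>, // we logically own Ts
}

impl<T> MiniVec<T> {
    pub fn new() -> Self {
        assert!(std::mem::size_of::<T>() != 0, "ZSTs not supported in this sketch");
        MiniVec { ptr: ptr::null_mut(), cap: 0, len: 0, _marker: PhantomData }
    }

    pub fn push(&mut self, value: T) {
        if self.len == self.cap {
            self.grow();
        }
        // SAFETY: len < cap, so this slot is allocated and uninitialized.
        unsafe { ptr::write(self.ptr.add(self.len), value) };
        self.len += 1;
    }

    fn grow(&mut self) {
        let new_cap = if self.cap == 0 { 4 } else { self.cap * 2 };
        let new_layout = Layout::array::<T>(new_cap).unwrap();
        let new_ptr = if self.cap == 0 {
            unsafe { alloc(new_layout) }
        } else {
            let old_layout = Layout::array::<T>(self.cap).unwrap();
            unsafe { realloc(self.ptr as *mut u8, old_layout, new_layout.size()) }
        };
        assert!(!new_ptr.is_null(), "allocation failed");
        self.ptr = new_ptr as *mut T;
        self.cap = new_cap;
    }

    pub fn len(&self) -> usize { self.len }

    pub fn get(&self, i: usize) -> Option<&T> {
        if i < self.len {
            // SAFETY: i is within the initialized prefix.
            Some(unsafe { &*self.ptr.add(i) })
        } else {
            None
        }
    }
}

impl<T> Drop for MiniVec<T> {
    fn drop(&mut self) {
        if self.cap != 0 {
            // Drop the initialized elements, then free the buffer.
            for i in 0..self.len {
                unsafe { ptr::drop_in_place(self.ptr.add(i)) };
            }
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe { dealloc(self.ptr as *mut u8, layout) };
        }
    }
}

fn main() {
    let mut v = MiniVec::new();
    for i in 0..100u32 {
        v.push(i);
    }
    assert_eq!(v.len(), 100);
    assert_eq!(*v.get(42).unwrap(), 42);
}
```

Structurally this is the same malloc/realloc/free dance as in C; the extra noise is the `Layout` bookkeeping and the SAFETY comments.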
Notably, some of the abstraction is actually there to prevent the compiler from generating a lot of redundant code. The process of monomorphization (turning polymorphic generic code into usable machine code for particular types) can seriously bloat the size of binaries.
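The usual shape of that fix is "outlining": the generic function is a thin wrapper, and the bulk of the work lives in a non-generic function compiled exactly once rather than once per `T`. The function names below are invented for illustration; libstd's `RawVec` does something similar by delegating to code that works on `u8`.

```rust
use std::alloc::{alloc, handle_alloc_error, Layout};

// Non-generic: compiled once, shared by every instantiation.
fn allocate_raw(layout: Layout) -> *mut u8 {
    let ptr = unsafe { alloc(layout) };
    if ptr.is_null() {
        handle_alloc_error(layout);
    }
    ptr
}

// Monomorphized per T, but tiny: only the Layout computation
// depends on T, so the per-type machine code is a few instructions.
fn allocate_array<T>(n: usize) -> *mut T {
    let layout = Layout::array::<T>(n).expect("capacity overflow");
    allocate_raw(layout) as *mut T
}

fn main() {
    let p = allocate_array::<u64>(8);
    assert!(!p.is_null());
    // Free it again; the Layout must match the allocation.
    unsafe { std::alloc::dealloc(p as *mut u8, Layout::array::<u64>(8).unwrap()) };
}
```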
Would you consider this to be indicative of the quality of the C++ language? I think the standard library should usually be an example of optimal code with respect to performance, and when it is very complicated, that indicates to me that the language makes good code complicated.
If you look at an advanced C library, you'll see perhaps some odd tricks or nifty algorithms, but you probably won't see something that leaves you scratching your head about what the code is even asking the computer to do.
Unsafe Rust does have tooling to help you not make some kinds of mistakes, but by its nature you have to be able to press on after the tools can't help you.
For example, Miri can understand pointer twiddling under strict provenance: it synthesizes provenance for the not-really-pointers it is working with and checks that what you wrote works when executed. But suppose your algorithm can't work under strict provenance (and thus wouldn't work on CHERI hardware, for example), while your target is maybe an x86-64 Windows PC, so you don't care. Then you can write exposed-provenance or even just arbitrary "You Gotta Believe Me" pointer twiddling, but Miri can't check it any more when you do that, so you'd better hope you never make mistakes.
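The two styles look like this for low-bit pointer tagging (a sketch; `addr`/`map_addr` were stabilized in Rust 1.84, and on older compilers the same idea is available via the `sptr` crate). The strict-provenance version keeps the pointer's provenance attached, so Miri can verify every access; the usize round-trip depends on the provenance having been "exposed", which Miri can only approximate.

```rust
// Strict provenance: the tag is applied with map_addr, so the result
// is still "the same pointer" with a different address, and Miri can
// track it. Works on CHERI.
fn tag_strict(p: *mut u64, tag: usize) -> *mut u64 {
    // Low 3 bits of an 8-aligned pointer are free to hold a tag.
    p.map_addr(|a| a | (tag & 0b111))
}

fn untag_strict(p: *mut u64) -> (*mut u64, usize) {
    (p.map_addr(|a| a & !0b111), p.addr() & 0b111)
}

// The "You Gotta Believe Me" version: a usize has no provenance, so the
// cast back to a pointer only works because the original pointer's
// provenance was exposed by `as usize`. Fine on mainstream targets,
// unverifiable by Miri under strict provenance, broken on CHERI.
fn tag_exposed(p: *mut u64, tag: usize) -> *mut u64 {
    ((p as usize) | (tag & 0b111)) as *mut u64
}

fn main() {
    let mut x = 42u64;

    let tagged = tag_strict(&mut x, 0b101);
    let (clean, tag) = untag_strict(tagged);
    assert_eq!(tag, 0b101);
    // SAFETY: `clean` is the original pointer with the tag bits cleared.
    assert_eq!(unsafe { *clean }, 42);

    // A local u64 is 8-aligned, so the exposed version round-trips too.
    let tagged2 = tag_exposed(&mut x, 0b001);
    assert_eq!(tagged2.addr() & 0b111, 0b001);
}
```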