> Is rust going to synchronize shared memory access for me?
Much better than that. (Safe) Rust is going to complain that you can't write the unsynchronized nonsense you were probably going to write, shortcutting the step where everything gets corrupted in production and you spend six months trying to reproduce and debug your mistake...
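For a concrete (made-up) example of the kind of thing it refuses to compile, and the fix it pushes you toward:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

fn main() {
    // The unsynchronized version: two threads mutating a plain integer.
    // let mut counter = 0;
    // thread::scope(|s| {
    //     s.spawn(|| counter += 1); // error: both closures need unique
    //     s.spawn(|| counter += 1); //        (mutable) access to `counter`
    // });

    // The version the compiler accepts: reach for an atomic (or a Mutex).
    let counter = AtomicI32::new(0);
    thread::scope(|s| {
        s.spawn(|| { counter.fetch_add(1, Ordering::Relaxed); });
        s.spawn(|| { counter.fetch_add(1, Ordering::Relaxed); });
    });
    assert_eq!(counter.load(Ordering::Relaxed), 2);
}
```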
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Spatial memory safety is easy, just check the bounds before indexing an array. Temporal memory safety is easy, just free memory only after you've finished using it, and not too early or too late. As you say, thread safety is easy.
Except we have loads of empirical evidence--from widespread failures of software--that it's not easy in practice. Especially in large codebases, remembering the remote conditions you need to uphold to maintain memory safety and thread safety can be difficult. I've written loads of code that created issues like "oops, I forgot to account for the possibility that someone might use this notification to immediately tell me to shut down."
What these annotations provide is a way to have the compiler bop you in the head when you accidentally screw something up, in the same way the compiler bops you in the head if you fucked up a type or the name of something. And my experience is that many people do go through a phase with the borrow checker where they complain about it being incorrect, only to later discover that it was correct, and the pattern they thought was safe wasn't.
Proper use of lock ordering is reasonably difficult in a large, deeply connected codebase like a kernel.
Rust has real improvements here, like this example from the Fuchsia team of enforcing lock ordering at compile time [0]. This is technically possible in C++ as well (see Alon Wolf's metaprogramming work), but it's truly dark magic to do so.
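For anyone curious what that looks like, here's a rough sketch of the general technique (hypothetical types, not the Fuchsia crate's actual API): encode each lock's level as a type and the permitted orderings as trait impls, so taking locks out of order simply doesn't type-check.

```rust
use std::marker::PhantomData;
use std::sync::{Mutex, MutexGuard};

// Lock levels, declared as empty marker types.
struct A;
struct B;

// "A lock at level `Next` may be acquired while holding a lock at level `Self`."
trait LockAfter<Next> {}
impl LockAfter<B> for A {} // allowed order: A then B (no impl for B then A)

// A mutex tagged with its level.
struct OrderedMutex<L, T> {
    inner: Mutex<T>,
    _level: PhantomData<L>,
}

// Tokens recording what (if anything) the current thread already holds.
struct Unlocked;
struct Locked<L>(PhantomData<L>);

impl<L, T> OrderedMutex<L, T> {
    fn new(value: T) -> Self {
        Self { inner: Mutex::new(value), _level: PhantomData }
    }

    // With no locks held, any lock may be taken.
    fn lock_first(&self, _: &Unlocked) -> (MutexGuard<'_, T>, Locked<L>) {
        (self.inner.lock().unwrap(), Locked(PhantomData))
    }

    // With a lock held, only a level declared as coming "after" it may be taken.
    fn lock_next<Prev: LockAfter<L>>(
        &self,
        _: &Locked<Prev>,
    ) -> (MutexGuard<'_, T>, Locked<L>) {
        (self.inner.lock().unwrap(), Locked(PhantomData))
    }
}

fn main() {
    let a = OrderedMutex::<A, i32>::new(1);
    let b = OrderedMutex::<B, i32>::new(2);

    let (ga, held_a) = a.lock_first(&Unlocked);
    let (gb, _held_b) = b.lock_next(&held_a); // OK: A -> B is declared
    println!("{} {}", *ga, *gb);

    // Taking them in the other order doesn't compile:
    // let (gb, held_b) = b.lock_first(&Unlocked);
    // let (ga, _) = a.lock_next(&held_b); // error: `A: LockAfter<A>`... i.e.
    //                                     // `B: LockAfter<A>` is not satisfied
}
```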
The lifetimes it implements are the now-unused lexical lifetimes of early Rust. Modern Rust uses non-lexical lifetimes, which accept a larger set of valid programs, and the work on Polonius will allow still more legal programs that neither lexical nor non-lexical lifetimes can accept. Additionally, the “borrow checker” it implements is RefCell, which isn't the Rust borrow checker at all but an escape hatch for doing limited single-threaded borrow checking at runtime (which the library won't notice if you use it across multiple threads, but Rust won't let you).
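A small, self-contained example of the difference (my own, not from the library): this compiles under non-lexical lifetimes but was rejected by the old lexical borrow checker.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0];   // shared borrow of `v`...
    println!("{first}"); // ...whose last use is here

    // Under lexical lifetimes the borrow lasted until the end of the block,
    // so this mutation was an error. Under NLL the borrow ends at its last
    // use above, so this is accepted.
    v.push(4);
}
```

RefCell, by contrast, moves those same single-threaded checks to runtime: a conflicting `borrow_mut()` panics instead of failing to compile.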
Given how the committee works and the direction they insist on taking, C++ will never ever become a safe language.
Oh, and to add on: in C++ there's no borrow checker and no language-level aliasing guarantees that the compiler can exploit as UB the way Rust does with ownership. What does it matter if two parts of a single-threaded program hold simultaneous mutable references to something? It's not a safety or correctness issue, as there's no risk of triggering UB and no ill-formed program can be generated that way. IMHO a RefCell equivalent in C++ is utterly pointless.
Bit of a fun fact, but as one of the linked articles states, the C++ committee doesn't seem to be a fan of stateful metaprogramming, so its status is somewhat unclear. From Core Working Group issue 2118:
> Defining a friend function in a template, then referencing that function later provides a means of capturing and retrieving metaprogramming state. This technique is arcane and should be made ill-formed.
> Notes from the May, 2015 meeting:
> CWG agreed that such techniques should be ill-formed, although the mechanism for prohibiting them is as yet undetermined.
"Just" annotations... that are automatically added (in the vast majority of cases) and enforced by the compiler.
> proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Yes, like how avoiding type confusion/OOB/use-after-free/etc. "just require[s] a little bit of discipline and consistency"?
The point of offloading these kinds of things onto the compiler/language is precisely so that you have something watching your back if/when your discipline and consistency slips, especially when dealing with larger/more complex systems/teams. Most of us are only human, after all.
> how well does it all hold up when you have teamwork and everything isn't strictly adherent to one specific philosophy.
Again, part of the point is that Send/Sync are virtually always handled by the compiler, so teamwork and philosophy generally aren't in the picture in the first place. Consider it an extension of your "regular" strong static type system checks (e.g., can't pass object of type A to a function that expects an unrelated object of type B) to cross-thread concerns.
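As a sketch of what that looks like in practice (my example, not the parent's): handing a non-thread-safe type like `Rc` to another thread is rejected just like any other type error.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let not_thread_safe = Rc::new(String::from("hello"));
    println!("{not_thread_safe}");

    // This would not compile: `Rc<String>` cannot be sent between threads
    // safely, because Rc is not `Send` (its reference count isn't atomic).
    // thread::spawn(move || println!("{not_thread_safe}"));

    // Arc is the thread-safe counterpart, so the same code compiles.
    let thread_safe = Arc::new(String::from("hello"));
    thread::spawn(move || println!("{thread_safe}")).join().unwrap();
}
```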
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
No, they are not. You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
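Roughly, the point being made is this (a minimal sketch):

```rust
use std::sync::Mutex;

fn main() {
    // The data lives *inside* the mutex; there is no separate "remember to
    // take the lock before touching this" convention to uphold.
    let data = Mutex::new(vec![1, 2, 3]);

    {
        let mut guard = data.lock().unwrap(); // the only way to reach the Vec
        guard.push(4);
    } // lock released when `guard` goes out of scope

    // data.push(5); // error: no method `push` on `Mutex<Vec<i32>>`
    println!("{:?}", data.lock().unwrap());
}
```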
> You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
Mutex as a container has no bearing on lock ordering problems (deadlock).
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Rust’s strict ownership model enforces more correct handling of data that is shared or sent across threads.
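A minimal sketch of the "sent" half of that: once a value is sent to another thread (here through a channel), ownership moves with it, so the sender can't keep touching it.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let message = String::from("hello");

    tx.send(message).unwrap(); // ownership of `message` moves into the channel
    // println!("{message}");  // error[E0382]: borrow of moved value: `message`

    let handle = thread::spawn(move || rx.recv().unwrap());
    println!("{}", handle.join().unwrap());
}
```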
> Speaking seriously, they surely meant data races, right? If so, what's preventing me from using C++ atomics to achieve the same thing?
C++ is not used in the Linux kernel.
You can write safe code in C++ or C if everything is attended to carefully and no mistakes are made by you or by future maintainers who modify the code. The benefit of Rust is that the compiler enforces this at the language level, so you don't have to rely on everyone touching the code to avoid mistakes or the disallowed behavior.
Rust's design eliminates data races completely. It also makes it much easier to write thread safe code from the start. Race conditions are possible but generally less of a thing compared to C++ (at least that's what I think).
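To make that distinction concrete, here's a sketch (my own example): no data race, because every access goes through the mutex, but still a race condition, because the lock is released between the check and the update.

```rust
use std::sync::Mutex;

// No data race: `balance` is only ever touched while holding the lock.
// Still a race condition: another thread can withdraw in the gap between
// the check and the debit, so the balance can go negative.
fn withdraw(balance: &Mutex<i64>, amount: i64) {
    let enough = *balance.lock().unwrap() >= amount; // lock, check, unlock
    if enough {
        *balance.lock().unwrap() -= amount;          // lock again, debit, unlock
    }
}

fn main() {
    let balance = Mutex::new(100);
    withdraw(&balance, 60);
    println!("{}", balance.lock().unwrap());
}
```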
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful (in terms of possible programs) than C++. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic.
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Well, pretty close to that, actually! Rust will statically prevent you from accessing the same data from different threads concurrently without using a lock or atomic.
> what's preventing me from using C++ atomics to achieve the same thing
Say you have a C++ member function, `frobFoo`. Is it okay to call `frobFoo` from multiple threads at once? Maybe, maybe not -- if it's not documented (or if you don't trust the documentation), you will have to read the entire implementation to answer that.
Now take the Rust version, where `frobFoo` takes `&mut self`. Is it okay to call from multiple threads at once? No, and the language will automatically make it impossible to do so.
If we had `&self` instead of `&mut self`, then it might be okay; you can discover whether it's okay by pure local reasoning (looking at the traits implemented by `Foo`, not the implementation), and if it's not, the language will again automatically prevent you from doing so (and also prevent the function from doing anything that would make it unsafe).
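A sketch of that comparison on the Rust side (the `Foo`/`frob_foo` names stand in for the comment's `frobFoo`; the field is made up):

```rust
use std::thread;

struct Foo {
    count: u64,
}

impl Foo {
    // Takes `&mut self`: callers need exclusive access, so two threads can
    // never be inside this method for the same `Foo` at the same time.
    fn frob_foo(&mut self) {
        self.count += 1;
    }

    // Takes `&self`: concurrent calls are allowed *if* `Foo: Sync`,
    // which you can check from the type, not the implementation.
    fn read_foo(&self) -> u64 {
        self.count
    }
}

fn main() {
    let mut foo = Foo { count: 0 };

    thread::scope(|s| {
        // s.spawn(|| foo.frob_foo()); // error: two closures can't both hold
        // s.spawn(|| foo.frob_foo()); //        `&mut foo` at the same time

        s.spawn(|| println!("{}", foo.read_foo())); // fine: shared borrows,
        s.spawn(|| println!("{}", foo.read_foo())); // and u64 makes Foo: Sync
    });

    foo.frob_foo(); // exclusive access again once the scope ends
    println!("{}", foo.count);
}
```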