That's what C++ does, because it has no way to ensure that you use the atomic reference counts in multi-threaded code. But, as the author writes in the blog post, Rust can in fact ensure this. So it lets you use the more efficient non-atomic reference count for single-threaded use, saving the unnecessary cost of atomic operations and memory barriers.
Just because a language is designed for concurrent programming doesn't mean it should make full single-threaded performance impossible, as long as safety isn't compromised.
if you have only 1 thread, you don't need atomics, so using a non-atomic reference count is fine
but if you have more than 1 thread, you can't use a non-atomic refcount, so you can't use Rc and must use Arc.
"but that's such a simple change, just change the decl with a 1 char addition! Plus, Rust won't let you do bad stuff if you've forgotten to change the type".
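The "1 char addition" above can be sketched as follows. This is a minimal illustrative example, not from the blog post: with `std::rc::Rc` the program below would be rejected at compile time, and swapping in `Arc` is the entire fix.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // With std::rc::Rc this would not compile: Rc is !Send, so the
    // compiler refuses to move it into the spawned thread. Changing
    // `Rc` to `Arc` is the whole change.
    let shared = Arc::new(vec![1, 2, 3]);
    let for_thread = Arc::clone(&shared);
    let handle = thread::spawn(move || for_thread.iter().sum::<i32>());
    let total = handle.join().unwrap();
    println!("{}", total); // 6
}
```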
I guess I'm just old. Old enough that I've already implemented all the data structures and methods I need in C++, including safely passing around shared_ptr<T>.
And indeed you don't need to care about this, because Rust's type system is looking after the problem. If I use Jim's acrobatics crate, and Jim in turn used Sarah's tightrope crate, which happens to rely on Rc for an internal type that ends up wrapped inside Jim's type, then my type knows it can't be sent across threads.
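That transitive property can be demonstrated in a few lines. The `Tightrope` and `Acrobatics` types below are hypothetical stand-ins for Sarah's and Jim's crates; `assert_send` is a common compile-time probe, not a standard-library function:

```rust
use std::rc::Rc;

// Hypothetical stand-ins: Tightrope holds an Rc internally,
// and Acrobatics wraps Tightrope.
struct Tightrope { balance: Rc<f64> }
struct Acrobatics { rope: Tightrope }

// Compile-time probe: only accepts types that are Send.
fn assert_send<T: Send>() {}

fn main() {
    assert_send::<String>();        // fine: String is Send
    // assert_send::<Acrobatics>(); // rejected: the Rc buried inside
    //                              // makes the whole type !Send
    let a = Acrobatics { rope: Tightrope { balance: Rc::new(1.5) } };
    println!("{}", a.rope.balance);
}
```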
In Rust this roadblock is highlighted to Stephan. Aha, we cannot do this. Perhaps Stephan should ask the maintainer of the software they're using for a version which has the properties they desire for threaded use.
In C++ equivalent roadblocks are not signposted. You may not even realise you're in trouble until some very strange errors begin to happen.
So for the first paragraph, this seems severely problematic for any software whose maintainer is no longer available, however much you knew and trusted them. How important this is will obviously vary, but predicating some important benefit of the language on the claim that it won't cause issues in the future because you can "ask the maintainer" is pretty unrealistic for proprietary software.
For the second paragraph, that depends a great deal on (a) what the mechanism used to "send a (shared, ref-counted reference thing) to another thread actually means and (b) what objects are used to accomplish this. Certainly simply writing the address of a shared_ptr<T> in C++ will work out as you indicate. But that's not the only way to do it. Rust's benefit comes from you being "unable" to do it an unsafe way; C++'s benefit comes from the fact that somebody has probably implemented the safe way in C++ already :)
> C++'s benefit comes from the fact that somebody has probably implemented the safe way in C++ already :)
You're an experienced C++ programmer, so you already know what the "safe way" will be in C++: "just don't make any mistakes". There's no possible way to benefit from multi-threading while magically using arbitrary non-thread-safe features without problems; the "genius" of C++ is finding a way to blame you for things you can't do anything about.
And then hit annoying roadblocks when you do want to pass those objects between threads?
You should write code to minimize the reference count bumps; they are a waste of time whether atomic or not.
If the code spends 0.5% of its time bumping references, and you magically reduce that to zero using alien optimization technology, that only gives you a 0.5% improvement.
If the code spends 10% of its time bumping references up and down, something is wrong.
Yes, Rust also makes it quite easy to minimize reference count bumps. Rc values are moved by default, which introduces no traffic, and increments are explicit calls to `clone`. You can have both optimizations together!
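A small sketch of that point: moving an `Rc` transfers ownership without touching the count, and the count only changes at an explicit `clone` (and at the matching drop). The `consume` helper is made up for illustration:

```rust
use std::rc::Rc;

// Hypothetical helper that takes ownership of the Rc it receives.
fn consume(v: Rc<Vec<i32>>) -> usize { v.len() }

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&data), 1);

    // Explicit clone: the only place the count is bumped.
    let second = Rc::clone(&data);
    assert_eq!(Rc::strong_count(&data), 2);

    // Moving `second` into consume introduces no count traffic;
    // the count drops back to 1 only when consume drops its argument.
    let n = consume(second);
    assert_eq!(n, 3);
    assert_eq!(Rc::strong_count(&data), 1);
}
```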
It's even possible to share an Rc-managed value across threads without switching to Arc, as long as the other thread(s) never needs to change the reference count and can be "scoped" (https://doc.rust-lang.org/stable/std/thread/fn.scope.html) to some lifetime that some particular Rc outlives.
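One way that works, sketched below: hand the scoped thread a plain reference to the `Rc`'s contents, so the `Rc` itself (and its non-atomic count) never crosses the thread boundary. This compiles because `&Vec<i32>` is `Send` while `Rc` is not:

```rust
use std::rc::Rc;
use std::thread;

fn main() {
    let numbers = Rc::new(vec![1, 2, 3, 4]);

    // A reference to the inner value, not to the Rc. The scope
    // guarantees the thread ends before `numbers` is dropped.
    let view: &Vec<i32> = &numbers;
    let total = thread::scope(|s| {
        s.spawn(|| view.iter().sum::<i32>()).join().unwrap()
    });
    println!("{}", total); // 10
    assert_eq!(Rc::strong_count(&numbers), 1); // count never changed
}
```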
It would be very odd for such a transformation to be so difficult that it would impose a roadblock; after all, I'd think it would usually be the application deciding it only has a single thread, rather than something 10 dependencies up the line.
Not every program that's written will even be multithreaded, and there's a significant cost to using atomic operations when you don't need them. What is the disadvantage of having non-atomic Rc be available?
Because then you need to complicate the compiler with a diagnostic against misuse, which has to work 100% right in all situations and be maintained forever.
Because Rust has a safety culture, and provides threading, it is crucial that the compiler reject types you cannot safely send to another thread. So it does.
So the "diagnostic against misuse" you're concerned about is a necessary part of the compiler anyway.
Indeed, although Rc has this line:
impl<T: ?Sized, A: Allocator> !Send for Rc<T, A> {}
(which means roughly "You can't send this type to another thread")
It also has these lines:
// Note that this negative impl isn't strictly necessary for correctness,
// as `Rc` transitively contains a `Cell`, which is itself `!Sync`.
It's not just Arc<T> vs. Rc<T> that's relevant for thread safety, though. Pretty much any kind of shared mutability requires extra protection (locks or atomicity) to work safely across threads, so there has to be some way to indicate whether or not that extra protection is present. Not to mention objects that interact with FFI, such as mutex locks, which must be unlocked from the same thread. It would be a huge performance drain to demand that "every value everywhere must be usable from every thread".
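To make the trade-off concrete: `Cell` gives unsynchronized shared mutability and is therefore `!Sync`, so it can't be shared across threads at all, while the thread-safe route pays for its protection with a lock. A minimal sketch of the protected version:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Cross-thread shared mutation: the Mutex is the "extra
    // protection" the comment above refers to. A Cell<i32> in its
    // place would not compile, because Cell is !Sync.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles { h.join().unwrap(); }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```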
So why even have such a thing in a language designed for concurrent programming from the ground up?
Arc should be called Rc, and that's it.