This "color" thing is a terrible miscommunication, because the original article is about two things that don't even make sense in Rust's context:
• an architectural limitation of JavaScript which Rust doesn't have (in Rust you can wait for an async result in a sync function, or run CPU-heavy code from an async function).
• a wish that languages had implicit magic that made sync and async calls look the same, which Rust intentionally doesn't want, because implicit magic is a terrible footgun in low-level code. It can be hidden in a high-level VM language that is in charge of all I/O, syscalls, and locks. But that is counter-productive for a low-level systems language for implementing I/O drivers, kernel syscalls, and custom locking primitives.
So Rust is "purple".
In reality Rust's async is awesome for what it is: syntactic sugar for state machines, able to flatten an entire call tree into a single fixed-size struct.
Most other async architectures need at least one allocation per async call or per await, but Rust needs only one allocation for the entire call graph, no matter how many async calls and awaits it contains.
SSE supports long polling. You can make the server close the connection whenever you want. SSE supports automatic reconnection, and will even include the last ID seen to let the server continue seamlessly.
It's important to remember that SSE won't automatically reconnect after quite a few HTTP status codes (e.g., upstream proxy outages surfacing as 50x errors).
We've been trying the "just don't write bugs" strategy for over 40 years now, and it's not working. Framing this as a problem with people being too stupid is a completely unproductive mix of hubris and elitism.
In the disciplines where real safety is required (like engineering, aviation, medicine), it's accepted that people will make mistakes. When a system can fail catastrophically due to a simple human error, it's the system that is broken, and needs to be made more robust.
I didn't say people are too stupid. On the contrary, we onboard fresh grads onto our large C++ code base every once in a while and they all seem to grasp the concepts just fine.
I'm wondering if most folks throwing shade at C++ have only used pre-C++11 toolsets and just have bad memories of the experience.
FWIW, I find modern C++ genuinely great to read / easy to parse by humans (same for similar languages like C#, Java, JavaScript/Typescript) whereas reading Rust (in particular function definitions) is painful.
No, I mean humans in general, not software developers. Seriously, show a non-developer a printout of some average modern C++ code and some average rust code and see which one they think is easier to read and more visually pleasing.
See the log4j vulnerability: you still need "just don't write bugs" with safe languages. What was really tried was "use C the hard way", which fails regularly, as one would expect. Projects that use C the easy way have much better safety.
See ShellShock. Having memory safety vulnerabilities doesn't prevent other bugs. C and C++ projects still have logic errors, broken auth, XSS, SQL injection, and do dangerously dumb stuff, and that's on top of buffer overflows, use-after-frees, data races, and UB footguns.
Nobody promises that memory safety will fix all bugs, but it can prevent or significantly reduce a class of vulnerabilities, and reduce the total number of serious defects. And then time and effort saved on dealing with memory corruption bugs can be redirected towards dealing with all the other higher-level issues.
> Projects that use C the easy way have much better safety.
That's just another way of blaming programmers for not writing C without the bugs.
Every language can be perfectly safe if used correctly — even hand-written machine code. The problem is that it's easy to say "use C the easy way" (whatever that means), but actual real-world uses don't live up to such standard, and even the best programmers can make mistakes. Language safety is about making programs safer even when programmers write less-than-ideal code.
> I find it surprising that the writers of those government documents seem oblivious of the strengths of contemporary C++ and the efforts to provide strong safety guarantees
No, Bjarne needs to realize that RAII and smart pointers are an old concept now, and they have been shown to be insufficient for decades.
The bar for safety has been raised, and Modern C++ is so far behind that its proponents don't even understand the issue, and still talk like it's about C's sins, or programmers not using the language properly, or "but what about other bugs?".
Contemporary C++ is still a safety-third design where safety always loses to performance and backwards compatibility. std::span was standardized without .at(), and they're only just getting around to adding it. Bounds checking by default on `operator[]` is one area where C++ could be on par with Rust right now, but it's going to be relegated to an optional profile.
Anecdotally, I feel like I wrote safer C code because at least I fully understood the behavior of the few standard library functions I used - both the good and the bad. C++ always had that question mark of if you were fully understanding all of the invariants the library placed on you to uphold.
That being said, I've fallen in love with Rust and have no intention of ever going back to C or C++.
I have a big gripe with rust and other modern languages like zig, and it’s difficult to reconcile it.
The gripe is that after using them for a while, anything else like java, python, c++ or C is underwhelming, boring and tedious, an exercise in how long you can uphold standards until you fall back to language defaults.
What these languages show is something anyone used to compiled BASIC, Pascal dialects like Object Pascal, Modula-2 and such already knew back in the day.
We had languages that were a pleasure to use for high-level programming abstractions, while at the same time providing the necessary features to go all the way down, even inline assembly if need be.
Without the culture that tooling must be hard, rather developers are users as well.
Thankfully a new generation of developers is bringing this culture back.
> Without the culture that tooling must be hard, rather developers are users as well.
I don't follow this. Could you elaborate?
I think the current culture is that tooling must be smart, accessible, and very well written, e.g. Rust (cargo, rustc, clippy, rust-analyzer), Go coming with everything, same for Gleam. Even Zig is a better build system for C programs than most C build systems.
I am talking about the C tooling culture, especially when comparing how compilers, linkers and makefiles are exposed, versus the Xerox PARC programming model that was largely adopted by other ecosystems.
The problem is that he is tilting at windmills, because the community at large doesn't care about all the features and tooling he keeps mentioning. Also, some of that tooling is still quite buggy, e.g. the VC++ and Clang lifetime checkers.
The same applies to Herb Sutter's efforts, as in his recent blog post.
It doesn't matter how much tooling has been available in the C and C++ ecosystem; a large majority will keep programming as they always did unless forced by external factors to change their ways.
Lint was created in 1979, and to this day many developers refuse to adopt static analysis on their C and C++ projects, let alone more modern tooling.
That's an easy statement to make but in practice it's no longer acceptable; the Internet has become far too hostile and code that's "never going to be connected to the Internet" constantly does.
The DNT (Do-Not-Track) HTTP header died the moment Microsoft made it too easy to enable during setup (IE 10 turned it on by default in its express settings), making most of their users default to do-not-track.
Adtech will absolutely freak out and destroy any such setting as soon as there's a risk of it working.
P3P was even implemented in Internet Explorer back when IE had 90%+ market share.
And then Google intentionally sent a malformed P3P header to bypass user preferences in IE.
When Safari added a heuristic rejecting Google's third-party cookies, Google found a technical workaround to bypass it (and was fined for doing so).
When IE and P3P were totally dead, browsers tried to give adtech the simplest-to-implement, bare-minimum setting: the DNT header. Adtech completely ignored it.
There are trillion-dollar businesses relying on tracking, and they will do whatever they can to undermine any technology and lobby against any law that would harm their business.
The market is working perfectly here, if you remember that users are not the customers. Users are the product sold to adtech, data brokers, law enforcement, etc.
For data-harvesting companies users are like livestock, and nobody cares about livestock's opinion. It only matters how much value can be extracted from users, even if it's annoying, misleading, and relies on dark patterns.
Yes, but that "livestock" can vote with their feet. If people cared about this, it would be a good opportunity for new entrants to take market share from incumbents by not using tracking cookies and thus not needing to have the cookie banners. But (to the parent comment's point) that isn't happening because this is not a compelling feature to offer, because people do not care about this, no matter how much we want them to.
I used to think this was just an education issue, that people just didn't understand the implications of privacy concerns on the web. But I no longer think this is the case. I think people do mostly understand, and just do not consider this a priority.
Apart from the incremental compilation cache, a large chunk of it is debug information. Lowering debug info precision helps a lot (although it's still suspiciously large).
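Assuming this is about a Cargo project's target/ directory (the comment doesn't say explicitly), the knob for this is the profile's `debug` setting:

```toml
# Cargo.toml -- shrink target/ by emitting less debug info in dev builds.
# debug = 2 (the default) is full debug info; 1 keeps enough for backtraces
# and line numbers while being considerably smaller; 0 drops it entirely.
[profile.dev]
debug = 1
```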