I always enjoy reading articles like this. But the truth is, having written several hundreds of KLOC in C++ (i.e., not an enormous amount but certainly my fair share), I just almost never have problems with this sort of accidental conversion in practice. Perhaps it might trip me up occasionally, but it will be noticed by literally just running the code once. Yes, that is an extra hurdle to trip over and resolve, but that is trivial compared to the alternative of creating and using wrapper types - regardless of whether I'm using Rust or C++. And the cost of visual noise of wrapper types, already paid at the writing stage, then continues to be a cost every time you read the code. It's just not worth it for the very minor benefit it brings.
(Named parameters would definitely be great, though. I use little structs of parameters where I think that's useful, and set their members one line at a time.)
I know that this is an extremist view, but: I feel the same way about Rust's borrow checker. I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers. Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs. Why would I want to totally reshape the way that I code in order to fix one of the least significant problems I encounter? I actually wish there were a variant of Rust with all its nice clean improvements over C++ except for lifetime annotations and the borrow checker.
Perhaps this is a symptom of the code I tend to write: code that has a lot of tricky mathematical algorithms in it, rather than just "plumbing" data between different sources. But actually I doubt it, so I'm surprised this isn't a more common view.
> I just almost never have problems with this sort of accidental conversion in practice.
95% of C++ programmers claim this, but C++ programs continue to be full of bugs, and they're usually exactly this kind of dumb bug.
> will be noticed by literally just running the code once.
Maybe. If what you're doing is "tricky mathematical algorithms", how would you even know if you were making these mistakes and not noticing them?
> the cost of visual noise of wrapper types, already higher just at the writing stage, then continues to be a cost every time you read the code. It's just not worth it for the very minor benefit it brings.
I find wrapper types are not a cost but a benefit for readability. They make it so much easier to see what's going on. Often you can read a function's prototype and immediately know what it does.
> Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs.
Everyone claims this, but the best available research shows exactly the opposite, at least when it comes to security bugs (which in most domains - perhaps not yours - are vastly more costly): the most common bugs are still the really dumb ones, null pointer dereferences, array out of bounds, and double frees.
The guild of software developers has no real standards, no certification, and no proven practices outside <book> and <what $company is doing>, while continuing to depend on the whims of project managers, POs, and so-called technical leaders and others who can't tell quality code from their own ass.
There’s usually no money in writing high-quality software and almost everything in a software development project conspires against quality. Languages like Rust are a desperate attempt at fixing that with technology.
I guess it works, in a way, but this kind of blog post just shows us how inept most programmers are and why the Rust band-aid was needed in the first place.
Maybe. But I wouldn't diss better languages, linters, and other tool improvements. These systematically increase quality at very low cost. It boggles my mind that the whole industry is not falling over itself to continuously embrace better tools and technology.
I don’t think that’s true. C++ just has a lot of baggage to deal with and people are doing the best they can with some ridiculous constraints. The sanitizers and things like clang-tidy, and better analysis in compilers seem to be really well received.
This. The industry is a hotpot of gut feelings and seat-of-the-pants work mixed with true engineering and mathematical rigor.
It is all hit or miss. Everyone claims in public that they build high-quality, critical software, while in private they claim the opposite: that they move fast and break things, and that programming is an art, not math.
And then you have venture capital firms now pushing "vibe coding."
Software development is likely the highest variance engineering space, sometimes and in some companies, not even being engineering, but "vibes."
It is interesting to see how this is going to progress. Are we going to have a situation like the Quebec Bridge [https://colterreed.com/the-failed-bridge-that-inspired-a-sim...]? The CrowdStrike incident taking down the whole airspace proved that what we have is not enough. Market hacks in "decentralized exchanges", the same. Not sure where we are heading.
I guess we are waiting for some catastrophe that will make some venture capital firm liable for the vibe coding, and then we will have worldwide regulation pushed on us.
Software developers, no, but Software Engineering does: it is a professional title in many countries, where universities and engineers are only legally allowed to use such titles after being validated.
I really don't think this matters. It's the processes beyond physical engineering that make it better. I have seen the most highly accredited software devs write the worst code. I have seen self-taught devs deeply consider the implications of the things they are writing and solve the problem simply. And it's not even the case that I can perceive a trend - well, except for PhDs: I have met only one PhD that I know can write actually good code.
I think if PhDs are bad at code it is because they aren't coding in a team 7.6 hours a day. It is not the PhD-ness but rather not being a coder. (Some PhDs might code a lot as a hobby and be good.)
You can also write a medical PhD without knowing how to do first aid on a snake bite.
My current project is a huge C++ physics sim written over 15+ years. The most common and difficult-to-diagnose bugs I've found are unit conversion mistakes. We likely wouldn't even find them if we didn't have concrete data to compare against.
I wrote such a type library myself, and it worked great. However, we eventually realized it was the wrong answer, because you so commonly want to display these quantities, and nobody wanted to give each widget a different API for each of the thousands of different types in my library.
The current system is a runtime system that has one type, and you set what the unit system is in the constructor. However, it means adding a meter to a gallon is a runtime error.
Er... why would you need a different API for this? Why not just convert to unitless at the point where the number flows into the UI?
Or better yet, parametrize the UI library so that it can display any unit automatically. It could even pull the metadata (like the string for the unit) from the template, so that e.g. binding a kg<float> value to a label would automatically show as "42 kg" etc.
At the point where you convert to unitless you lose all the benefit of strong types. That is a trade-off maybe you can accept. One part of that trade-off is that by adding the ability to convert to unitless, you make it easy to use unitless values where you shouldn't.
There are many different ways to implement unit systems. I know of three other attempts someone made on just our project to make units work before we settled on this one. There are trade-offs, and I don't mean to imply I have presented the correct answer for your problem - only that, because of many other reasons (not stated because of NDA), this is the best compromise for us. Your problem is different and you will need to find your own solution. There are lots of possible answers and each has a set of pros and cons. If your problems are simple then you can find an off-the-shelf answer, but if you need to do complex things you will need to consider what you really want.
I understand that there may be other requirements that dictate the design. It's just that the specific one that you gave originally - that of displaying the values - strikes me as something that shouldn't be difficult even when you have units reflected statically in the type system.
> At the point where you convert to unitless you lose all the benefit of strong types.
Right, but if you do that right before it actually gets displayed, then it shouldn't be a problem since all actual computations have been performed already? I mean, you also need to convert numbers to text to render them, so all the same arguments apply.
So e.g. in the "ideal" desktop app using MVVM, that would happen on the boundary between the model and the view-model.
> One part of that trade-off is that you are adding the ability to convert to unitless and thus making it easy to use unitless where you shouldn't.
But any typed unit-of-measure system inherently has such an ability if it tracks units properly for arithmetic operations - you just divide by 1 of the same unit to get a unitless value. E.g. in F#:
[<Measure>] type m
[<Measure>] type s

let v = 10<m/s>    // velocity in m/s
let n = v / 1<m/s> // unitless
> but will be noticed by literally just running the code once.
I assure you that's not the case. Maybe you didn't make that mistake, but if you did I'm sure it sometimes went unnoticed. I've found those issues in my code and in other projects. Sometimes they even temporarily don't matter, because someone did a func(CONST, 0) instead of func(0, CONST) and it turns out CONST is 0 - however the next person gets a crash because they change 0 to 1. A lot of similar issues come from the last line effect https://medium.com/@Code_Analysis/the-last-line-effect-7b1cb... and can last for years without being noticed.
I had a friend who noticed that people were often mixing up the arguments to some std constructor (I think it was string, with a char and another integer argument getting swapped). He searched across Google's codebase and found many (I don't remember the exact number) cases of this, many of which he could confirm to be real bugs. He spent months fixing them and I think eventually got some check added to prevent this in the future.
So this definitely isn't some theoretical problem. I wouldn't even be surprised if you had made this mistake and just hadn't noticed.
I understand this concern, but at the same time it's not hard to write clang-query statements for the ones you care about. Sometimes it is even a regex! And it's not too expensive to upstream universally relevant checks to clang-tidy.
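For instance, a hypothetical (untested) clang-query matcher flagging std::string constructions whose first argument is a character literal - the swapped-argument bug discussed elsewhere in this thread - might look roughly like:

```
m cxxConstructExpr(
    hasDeclaration(cxxConstructorDecl(ofClass(hasName("::std::basic_string")))),
    hasArgument(0, ignoringImpCasts(characterLiteral())))
```

The exact matcher composition would need tuning against real code, but the point stands: these checks are a page of matcher DSL, not a compiler fork.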
The main problem is that too many C++ engineers don't do any of that. They have some sort of learned helplessness when it comes to tooling. Rust for now seems to have core engineers in place who will do this sort of thing on behalf of everyone else. Language design aside, if it can find a way to sustain that kind of solid engineering, it will be hard to argue against.
>code that has a lot of tricky mathematical algorithms in it, rather than just "plumbing" data between different sources
Your hierarchy is backwards. Borrowing for algorithmic code is easy; it's writing libraries that can be used by others where it's hard. Rust lets you - makes you - encode it in the API in a way C++ can't yet express.
> I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers
If these are sparing you C++ bugs but causing you to struggle with the borrow checker, it's because you're writing code that depends on constraints that you can't force other contributors (or future you) to stick to. For example, objects are thread-unsafe by default. You can use expensive locks, or you can pray that nobody uses it wrong, but you can't design it so it can only be used correctly and efficiently.
This article presents something I’d expect competent C++ programmers with a few years of experience to know.
Unfortunately, many programmers are not competent. And the typical modern company will do anything in its power to outsource, often to the lowest bidder, mismanage projects, and generally reduce quality to the minimum acceptable to make money. That's why one needs tools like Rust, Java, TypeScript, etc.
Unfortunately, Rust is still too hard for the average programmer, but at least it will hit them over the hands with a stick when they do something stupid. Another funny thing about Rust is that it’s attracting the functional programming/metaprogramming astronauts in droves, which is at odds with it being the people’s programming language.
I still don’t think it’s a valuable skill. Before it was lack of jobs and projects, which is still a problem. Now it’s the concern that it’s as fun as <activity>, except in a straitjacket.