Despite mounting evidence, this perspective remains shockingly common. There seems to be some effect where one convinces oneself that while others are constantly introducing high-severity bugs related to memory unsafety, I always follow best practices and use good tooling, so I don't have that problem. Of course, evidence continues to build that no tooling or coding practice eliminates the risk here. I think what's going on is that as a programmer I can't truly be aware of how often I write bugs, because if I were, I wouldn't write them in the first place.
I sort of have this perspective myself, though it's slowly changing… I think it comes from a fallacy: take a small 20-line function in C; it can be made bug-free and fully tested. A program is made of small functions, so why can't the whole thing be bug-free? But somehow it doesn't work like that in the real world.
> why can't the whole thing be bug-free? But somehow it doesn't work like that in the real world.
It can be, if the composition is itself sound. That's a key part of Rust's value proposition: individually safe abstractions in Rust also compose safely.
The problem in C isn't that you can't write safe C but that the composition of individually safe C components is much harder to make safe.
And the reason for that is that a C API cannot express lots of things that are needed to make composing code easy. Ownership is one of them; another is “when I'm done with the thing you gave me, how do I dispose of it?”
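To make that concrete, here's a minimal sketch (the `Message` type and both functions are invented for illustration): a C prototype like `void send(message_t *msg)` can't tell you whether the callee frees `msg`, while a Rust signature encodes ownership and disposal directly:

```rust
// Hypothetical example: the type and functions are made up for illustration.

struct Message {
    payload: Vec<u8>,
}

// Taking `Message` by value means the callee now owns it. The caller can no
// longer use it, and the buffer is freed automatically, exactly once, when
// `msg` is dropped at the end of this function.
fn send(msg: Message) {
    // ... transmit msg.payload ...
    let _ = msg.payload.len();
}

// Taking `&Message` means the callee only borrows it; the caller keeps
// ownership and remains responsible for its lifetime.
fn inspect(msg: &Message) -> usize {
    msg.payload.len()
}

fn main() {
    let msg = Message { payload: vec![1, 2, 3] };
    let len = inspect(&msg); // borrow: msg is still usable afterwards
    println!("payload length: {len}");
    send(msg);               // move: msg is consumed
    // send(msg);            // would not compile: use after move
}
```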
There would be far fewer bugs if people actually stuck to writing code like that.
I once had to reverse engineer (extract the spec from the code) a C++ function that was hundreds of lines long. I have had to fix a Python function over a thousand lines long.
I am sure the people who wrote that code will find ways to make life difficult with Rust too, but I cannot regret leaving one fewer sharp knife in their hands.
On the other hand, breaking a large function into smaller ones can create indirection that is even harder to follow and harder to verify as bug-free.
Maybe, but it usually has the opposite effect. If you have well-designed and named functions the indirection is a feature since it reduces the amount of context you need to remember and gives you a visible contract instead of having to reason about a large block of code.
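As a toy sketch of what I mean (the helper and its name are invented): the signature and name form a contract, so a reader of the call site doesn't have to hold the loop body in their head.

```rust
// Invented example: `sanitize_username` is a hypothetical helper.

// The contract is visible in the name and signature: given any string,
// return a trimmed, lowercase, ASCII-alphanumeric-only username.
fn sanitize_username(raw: &str) -> String {
    raw.trim()
        .chars()
        .filter(|c| c.is_ascii_alphanumeric())
        .collect::<String>()
        .to_lowercase()
}

fn main() {
    // At the call site, the name carries the meaning; no need to re-read
    // the filtering logic to follow the surrounding code.
    let user = sanitize_username("  Alice_42!  ");
    println!("{user}"); // alice42
}
```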
This works if you know what each callee does, which you frequently don't when you're visiting a new codebase (a callee may be unfamiliar and force you to go read it, it may contain functionality irrelevant to your problem, or you may have to analyze two or more functions together to understand a bug).
> I think it comes from a fallacy: take a small 20-line function in C; it can be made bug-free and fully tested
It can't. You can make yourself 99.9...% confident that it's correct. Then you compose it with some other functions that you're 99.9...% confident about, and you now have a new function that you are slightly less confident in. And you compose that function with other functions to get a function that you're slightly less confident in. And you compose them all together to get a complete program that you wouldn't trust to take a dollar to the store to buy gum.
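To put rough numbers on that intuition (the 99.9% figure and the independence assumption are purely illustrative): confidence decays exponentially with composition.

```rust
fn main() {
    // Assumption for illustration: each function is independently 99.9%
    // likely to be correct.
    let per_function: f64 = 0.999;
    for n in [10, 100, 1000] {
        let whole_program = per_function.powi(n);
        println!("{n:>5} functions -> {:.1}% confidence", whole_program * 100.0);
    }
    // Prints roughly: 10 -> 99.0%, 100 -> 90.5%, 1000 -> 36.8%
}
```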
There's also a sort of Dead Sea effect at work. People who worry about introducing safety bugs use safe languages to protect themselves from that. Which means that the only people left using C are people who don't worry about introducing safety bugs.
> Of course evidence continues to build that no tooling or coding practice eliminates the risk here
Considering Rust is just tooling and coding practice in front of LLVM IR, does this statement not also include Rust? There are in fact formally verified C and C++ programs; does that formal verification also count as tooling and coding practice, and therefore not apply?
If either of the above is true, why does it matter at all?
I am specifically calling out your blanket statement and want to open up discussion about it, because at present your implied point is that it is impossible to write safe code in C/C++ and only possible in Rust; however, the very point you made would also apply to Rust.
There are also non-safety issues that can affect the integrity of a program. I recently looked into Rust again (haven't given up just yet), but the amount of incidental complexity in just instantiating a WGPU project is mind-boggling. I haven't explored OpenGL, but the fact that the unofficial webgpu guide for Rust [1] recommends using an older version of winit, because the current version would require significant rewrites due to API changes, is not encouraging. Never mind the massive incidental complexity of needing an async runtime for webgpu itself; is this a pattern I am going to see in other parts of Rust? Rust already has enough complexity without injecting coroutines in places where blocking functions are reasonable.
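For what it's worth, you don't need a full async runtime like tokio just to get a device: a minimal executor such as pollster can block on the setup futures. A sketch only, with the caveat that wgpu's API shifts between releases, so the exact signatures here are an assumption (they match roughly the 0.20-era API):

```rust
// Cargo.toml (versions are assumptions): wgpu = "0.20", pollster = "0.3"
// `pollster::block_on` drives a future to completion on the current
// thread, without pulling in a full async runtime.

fn main() {
    let instance = wgpu::Instance::default();

    // request_adapter/request_device are async, but blocking on them
    // during one-time startup is reasonable.
    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no suitable GPU adapter found");

    let (device, _queue) = pollster::block_on(
        adapter.request_device(&wgpu::DeviceDescriptor::default(), None),
    )
    .expect("failed to create device");

    println!("adapter: {:?}", adapter.get_info().name);
    let _ = device;
}
```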