Building safe abstractions around unsafe code works because it reduces the scope of the code that has to be reviewed for memory safety issues.
Instead of the whole codebase being suspect, and hunting for unsafety being like a million-line "Where's Waldo?", it reduces the problem to just verifying the `unsafe` blocks against the safety of their public interface: "is this a Waldo?". This can still be tricky, but it has proven to be a more tractable problem.
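For example, a minimal sketch of the pattern (the type and method names are my own invention): the only constructor checks an invariant once, so the single `unsafe` block can rely on it, and safe callers can't break it:

```rust
/// Bytes guaranteed to be ASCII. The invariant is established in `new`
/// and can't be violated through the safe public API.
pub struct Ascii(Vec<u8>);

impl Ascii {
    /// The only way to construct an `Ascii`, so the "all bytes are ASCII"
    /// invariant is checked exactly once, here.
    pub fn new(bytes: Vec<u8>) -> Option<Ascii> {
        if bytes.iter().all(u8::is_ascii) {
            Some(Ascii(bytes))
        } else {
            None
        }
    }

    pub fn as_str(&self) -> &str {
        // SAFETY: `new` guarantees every byte is ASCII, and ASCII is valid
        // UTF-8. Reviewing this block against that invariant is the whole
        // job; the rest of the codebase can't invalidate it.
        unsafe { std::str::from_utf8_unchecked(&self.0) }
    }
}

fn main() {
    let a = Ascii::new(b"hello".to_vec()).unwrap();
    assert_eq!(a.as_str(), "hello");
    assert!(Ascii::new(vec![0xFF]).is_none()); // non-ASCII rejected up front
}
```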
Cargo.lock is not ideal for this. It needs to be portable, and cover all kinds of builds and test runs, so it contains a superset of all dependencies for all platforms (recursively), as well as development-only dependencies, and everything needed for all optional features.
Running `cargo tree -e normal` gives a more realistic subset of what is actually used, and `cargo tree -e normal --no-default-features` gives you the "bare necessities" subset.
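For example (these are standard `cargo tree` flags; output omitted):

```sh
# Default view: normal + build-time + dev-only dependencies,
# resolved for the current platform
cargo tree

# Only what a normal build of the crate actually pulls in
cargo tree -e normal

# The bare minimum, with optional default features turned off
cargo tree -e normal --no-default-features
```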
Another thing to keep in mind is that Rust projects are very often split into many small packages (from the same authors, published as part of the same project). That isn't more code or more dependencies; it's the same code delivered not as one monolith, but as modular components.
Unfortunately, the major versions have a dual role: they dictate the release cadence and LTS support. They're not purely SemVer, but also milestones and marketing versions.
This is unfortunate, because scheduled version bumps that happen to be SemVer-major bumps also give them an excuse to regularly make actual breaking changes. When this affects npm packages, it creates churn and bitrot across the ecosystem.
Yes, but still, if you have somebody looking over your shoulder, it can leak whatever you've been looking at.
There's a way to block that entirely for "secure" apps, but iOS could be smarter about this, and cache some stripped-down view or expire that cache quicker.
Programmers make stupid mistakes in the safest languages too, even more so today when software is a career and not a hobby. What does it matter if the memory allocation is safe when the programmer exposes all user sessions to the internet because reading Docker's documentation is too much work? Even GitHub did a variant of this with all their resources.
Because memory vulnerabilities don't make programs immune to other dumb mistakes. You get these vulnerabilities on top of everything else that can go wrong in a program.
Manual checking of memory-management correctness takes extra time and effort to review, debug, instrument, and fuzz things that the compiler could be checking automatically and reliably. This misplaced effort wastes resources and takes focus away from dealing with all the other problems.
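To make that concrete, here's a minimal example of my own (not from the thread): an aliasing bug that in C would need a review pass or an ASan/fuzzing run to surface, caught by rustc before the program ever runs. This program intentionally does not compile:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into the vec's heap buffer
    v.push(4);         // may reallocate and invalidate `first`
    // error[E0502]: cannot borrow `v` as mutable because it is also
    // borrowed as immutable
    println!("{first}");
}
```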
There's also a common line of thinking that because working in C is hard, C programmers must be smarter and more diligent, so they wouldn't make dumb mistakes like the easy-language programmers do. I don't like such an elitist view, but even if true, the better programmers can allocate their smarts to something more productive than expertise in programs corrupting themselves.
The issue is that these great new tools don't just fix the old vulnerabilities, they also provide a lot of new, powerful footguns for people to play with. They're shipping 2000 feet of rope with every language when all we need is 6 feet to hang ourselves.
There has been a bunch of failed C killers, and C++ has massively shat the bed, so I understand that people are jaded.
However, this pessimistic tradeoff is just not true in the case of Rust: it has been focused from the start on preventing footguns, and it actually does a great job of it. You don't trade one kind of failure for another, you replace them with compilation errors, and they've even invested a lot of effort into making these errors clear and useful.
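For instance (my own sketch, again intentionally non-compiling): a shared-mutation footgun that C and C++ compile without complaint becomes a compile error instead of a latent data race:

```rust
use std::thread;

fn main() {
    let mut counter = 0;

    let t = thread::spawn(|| {
        // error[E0373]: closure may outlive the current function,
        // but it borrows `counter`, which is owned by the current function
        counter += 1;
    });
    counter += 1; // an unsynchronized racy write in C; rejected here
    t.join().unwrap();
}
```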
C99 is still new! Microsoft tried to kill C by refusing to implement anything that wasn't also in C++. MSVC was 16 years late implementing C99, and implemented only the bare minimum. Their C11 implementation is only 11 years late.
I suspect that decades of C being effectively frozen have caused the userbase to self-select to people who like C exactly the way it is (was), and don't mind supporting ancient junk compilers.
Everyone who lost patience, or wanted a 21st century language, has left for C++/Rust/Zig or something else.
Most of us who like a good language just did not use MSVC. I do not think many people who appreciate C's simplicity and stability would be happy with C++ or Rust. Zig is beautiful, but still limited in many ways, and I would not use it outside of fun projects.
I don't even use Windows, but I need to write portable libraries. Unfortunately, MSVC does strongly influence the baseline, and it's not my decision if I want to be interoperable with other projects.
In my experience, Windows devs don't like being told to use a different toolchain. They may have projects tied to Visual Studio, dependencies that are MSVC-only, code written for the quirks of MSVC's libc/CRT, or they may want MSVC-specific build features.
I found it hard to convince people that C isn't just C (probably because C89 has been around forever, and many serious projects still target it). I look like an asshole when I demand they switch to a whole different toolchain, instead of me adding a few #ifdefs and macro hacks for some rare nice thing in C.
Honestly, it's paradoxically been easier to tell people to build Rust code instead (it produces MSVC-compatible output with almost zero setup needed).
You're talking about the dotcom bubble! A time in history when enormous amounts of money were thrown down an enormous number of overhyped wells, and almost all of them turned out to be stupidly bad businesses.
People mocking that waste of money were 99% right, and Google is one of the very few exceptions.
They're certainly useful, and the goalposts for what counts as "intelligent" keep shifting, but I think we're only seeing progress towards bigger, more polished LLMs. I don't think that progress implies there's a path from LLMs to something substantially better.
LLMs are still prone to hallucinations, and we're only adding more data and workarounds to make it happen less often. Prompt injections limit their usefulness on untrusted inputs. They can't do precise logic and reasoning, and are too likely to follow memorized patterns instead.
We've got something amazing, way better than what we've had before, but the current architecture is still based on a fuzzy translator. It's hard to say whether this is it, and it's going to plateau at this level for a while, or whether there are more breakthroughs around the corner.