Apple already has excellent x86 emulation. But Apple has a locked-down GPU with a proprietary API, which adds another unnecessary translation layer where it hurts more.
Partly it's due to lack of better ideas for effective inter-procedural analysis and specialization, but it could also be a symptom of working around the cost of ABIs.
The point of interfaces is to decouple caller implementation details from callee implementation details, which almost by definition prevents optimization opportunities that rely on those details. There is no free lunch, so to speak. Whole-program optimization affords more optimizations, but it also makes the generated code less tractable and weakens its relation to the source code, including the modularization present in the source.
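To make that concrete, here's a rough Rust sketch (not tied to any particular ABI, and all the names are made up): a call through an opaque interface, modeled here as a trait object, can't be specialized to the caller's concrete type, while the generic version couples both sides to implementation details but lets the compiler inline and specialize.

```rust
trait Summer {
    fn sum(&self, xs: &[u64]) -> u64;
}

struct Simple;

impl Summer for Simple {
    fn sum(&self, xs: &[u64]) -> u64 {
        xs.iter().sum()
    }
}

// "Interface" version: the callee sits behind a vtable, like a stable ABI
// boundary. The compiler emits an indirect call and can't inline or
// vectorize across it without whole-program knowledge.
fn total_dyn(s: &dyn Summer, xs: &[u64]) -> u64 {
    s.sum(xs)
}

// Specialized version: the concrete type is visible, so the call can be
// inlined and optimized -- at the cost of coupling this code to the
// callee's implementation details.
fn total_generic<S: Summer>(s: &S, xs: &[u64]) -> u64 {
    s.sum(xs)
}

fn main() {
    let xs: Vec<u64> = (0..1000).collect();
    println!("{}", total_dyn(&Simple, &xs));
    println!("{}", total_generic(&Simple, &xs));
}
```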
In the current software landscape, I don’t see these additional optimizations as a priority.
The original DVD was way, way less green than later releases, which were changed to match the more extreme greens used in the sequels. IDK if it was as subtle as in the theater (I did see it there, but most of my watches were the first-run DVD), but it was far subtler than later DVD printings and all the later releases except, IIRC, one fairly recent Blu-ray that finally dialed it back to something less eye-searing and at least close-ish to the original.
The Matrix is an interesting one because it really caught on with the DVD release. So that was most people's first exposure to it, not the theatrical release. Even if incorrect, if that was the first way you saw it, it is likely how you consider it "should" look.
It's a bit disingenuous to imply The Matrix did not catch on until the DVD release. The Matrix broke several (minor) box office records, was critically hailed, and was an awards darling for the below-the-line technical awards.
Having said all that, one of the most interesting aspects of conversations around the true version of films and such is that, just because of the way time works, the vast majority of people's first experience with any film will definitely NOT be in a theater.
I didn't mean to say no one saw it theatrically, but I probably did undersell it there.
The DVD was such a huge seller and coincided with the format really catching on. The Matrix was the "must have" DVD to show off the format and for many was likely one of the first DVDs they ever purchased.
It was also the go-to movie to show off DivX rips.
The popularity of The Matrix is closely linked with a surge in DVD popularity. IIRC DVD player prices became more affordable right around 2000 which opened it up to more people.
The original has a green tint to the Matrix scenes, it's just relatively subtle and blends into a general coolness in the color temp. The heightened green from later home printings is really in-your-face green, to the point you don't really notice the coolness, just greeeeen.
2. That's a language feature too. Writing non-trivial multi-core programs in C or C++ takes a lot of effort and diligence. It's risky, and subtle mistakes can make programs chronically unstable, so we've had decades of programmers finding excuses for why a single thread is just fine, and people can find other uses for the remaining cores. OTOH Rust has enough safety guarantees and high-level abstractions that people can slap .par_iter() on their weekend project, and it will work.
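For example, a minimal sketch using the rayon crate (assuming `rayon = "1"` in Cargo.toml; the workload is just a stand-in): the only change from the serial version is swapping `iter()` for `par_iter()`, and the compiler's Send/Sync checks rule out data races on the shared input at compile time.

```rust
use rayon::prelude::*;

// Stand-in for real per-item work.
fn expensive(n: u64) -> u64 {
    (0..n).map(|i| i.wrapping_mul(i)).sum()
}

fn main() {
    let inputs: Vec<u64> = (0..10_000).collect();

    // Serial version.
    let serial: u64 = inputs.iter().map(|&n| expensive(n)).sum();

    // Parallel version: iter() -> par_iter() is the only change.
    let parallel: u64 = inputs.par_iter().map(|&n| expensive(n)).sum();

    assert_eq!(serial, parallel);
    println!("{parallel}");
}
```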
If you're a junior now, and think your code is worth stealing, it's probably only a matter of time before you gain more experience, and instead feel sorry for everyone who copied your earlier code (please don't take it personally, this is not a diss. It's typical for programmers to grow, try more approaches, and see better solutions in hindsight).
The lazy cheaters only cheat themselves out of getting experience and learning by writing the code themselves. It doesn't even matter whether you publish your code or not, because they'll just steal from someone else, or more likely mindlessly copypaste AI slop instead. If someone can't write non-trivial code themselves to begin with, they won't be able to properly extend and maintain it either, so their ripoff project won't be successful.
Additionally, you'll find that most programmers don't want to even look at your code. It feels harder and less fun to understand someone else's code than to write one's own. Everyone thinks their own solution is the best: it's more clever and has more features than the primitive toys other people wrote, while at the same time it's simpler and more focused than the overcomplicated bloat other people wrote.
Fil-C will crash on memory corruption too. In fact, its main advantage is crashing sooner.
All the quick fixes for C that don't require code rewrites boil down to crashing. They don't make your C code less reliable, they just make the unreliability more visible.
To me, Fil-C is most suited to be used during development and testing. In production you can use other sandboxing/hardening solutions that have lower overhead, after hopefully shaking out most of the bugs with Fil-C.
The great thing about such crashes is that, if you have coredumps enabled, you can just load the crashed binary into GDB, type 'where', and most likely figure out immediately from the call stack what the actual problem is. This was/is my go-to method for finding really hard-to-reproduce bugs.
I think the issue with this approach is it’s perfectly reasonable in Fil-C to never call `free` because the GC will GC. So if you develop on Fil-C, you may be leaking memory if you run in production with Yolo-C.
Fil-C uses `free()` to mark memory as no longer valid, so it is important to keep using manual memory management to let Fil-C catch UAF bugs (which are likely symptoms of logic bugs, so you'd want to catch them anyway).
The whole point of Fil-C is having C compatibility. If you're going to treat it as a deployment target in its own right, it's a waste: you get the overhead of a GC language, but with the clunkiness and tedium of C, instead of the nicer language features that ground-up GC languages have.
It depends how much the C software is "done" vs being updated and extended. Some legacy projects need a rewrite/rearchitecting anyway (even well-written battle-tested code may stop meeting requirements simply due to the world changing around it).
It also doesn't have to be a complete all-at-once rewrite. Plain C can easily co-exist with other languages, and you can gradually replace it by only writing new code in another language.
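As a hypothetical sketch of what that gradual replacement can look like (the checksum function and its signature are invented for the example): new logic written in Rust, exported with a C ABI and built as a staticlib/cdylib, so the existing C code calls it through an ordinary header declaration.

```rust
// Hypothetical example (names invented): a Rust function exported with a
// C-compatible ABI. Build this crate as a `staticlib` or `cdylib` and declare
// it in a C header as:
//     uint32_t checksum(const uint8_t *data, size_t len);
// (Rust 2021 edition; newer editions spell the attribute #[unsafe(no_mangle)].)

/// Compute a simple checksum over a byte buffer handed in from C.
///
/// # Safety
/// `data` must point to `len` valid bytes (it is ignored when `len` is 0).
#[no_mangle]
pub unsafe extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    if data.is_null() || len == 0 {
        return 0;
    }
    let bytes = std::slice::from_raw_parts(data, len);
    bytes
        .iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
}
```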
Lack of DC fast charging makes the range even more limiting. It takes 2.7 hours to add another 150 miles. Modern EVs can add 150 miles of range in 10-15 minutes.
It's a recreational vehicle for booting around to and from the country club, and out to the fancy places European gentlemen go on Sunday afternoon drives to impress their mistresses.
Oh that reminds me, I should go check my lottery ticket.
Debian's tooling for packaging Cargo probably got better, so this isn't as daunting as it used to be.
Another likely thing is understanding that the unit of a "package" is different in Rust/Cargo than it traditionally is in C and Debian, so 130 crates aren't as much code as 130 Debian packages would have been.
The same amount of code, from the same number of authors, will end up split into more, smaller packages (crates) in Rust. Where a C project would split itself into components internally in a way that's invisible from the outside (multiple `.h` files, in subdirectories or sub-makefiles), Rust/Cargo projects split themselves into crates (in a Cargo workspace and/or a monorepo), which happen to be externally visible as individual packages. These typically aren't full-size dependencies, just separate compilation units. It's like cutting a pizza into 4 or 16 slices. You get more slices, but that doesn't make the pizza bigger.
From a security perspective, I've found that splitting large projects into smaller packages actually helps with reviewing the code. Each sub-package is more focused on one goal, with a smaller public API, so it's easier to see whether it's doing what it claims than if it were part of a monolith with a larger internal API and more state.