There's more nuance to "don't use lifetimes". Novices typically don't understand the relationship between owning and borrowing, and try to use temporary references where they semantically don't belong. In Rust, ownership can't be left as ambiguous as in GC languages, and references can't be made out of thin air — they borrow from a value that must already be owned and stored somewhere.
So often the advice is not "don't use lifetimes, because the borrow checker can't handle them", but rather "don't use lifetimes where an owned value is required" or "don't use lifetimes until you understand how ownership works".
`cargo check` is relatively fast compared to `cargo build`, so at least currently, borrow checking is a small task relative to code generation, optimization, and linking.
There have been many similar changes in the compiler's lifetime, and so far additional type checking work hasn't caused noticeable slowdowns, apart from a few accidentally-quadratic bugs.
• Devs and their managers don’t use the 8GB version.
• Electron is used because it’s cheap to develop for. You can hate its RAM-eating guts all you want, but for cross-platform UIs there’s no competition. Nobody wants to pay for maintenance of 5 separate front ends in 3 languages.
For a very long time there has been no viable alternative in the strictly-no-GC space. Every new systems-adjacent language concluded "GC is fine for 99% of programs" (which is true), and then excluded themselves from the no-GC niche. Rust almost did the same thing — early design assumed per-thread GC.
It's also why I'm doubly sceptical of fusion, and really annoyed when it's presented as a solution we'll have "soon". Even if a breakthrough was invented tomorrow that made fusion actually work, it would still require building a massive number of fusion plants, and building them cheaper than renewables+batteries.
I'm calling that bluff. 64KB is small enough that it's not just yet another target. It requires programming in a completely different way, one that is unacceptably ugly and contorted for 32-bit and larger targets.
If you're writing code and not specifically targeting 64KB systems, your code will be completely unusable on such systems anyway. Most programs and libraries written for larger platforms will have more code than a 16-bit target can even address.
Even if you use theoretically correct sizes, they'd still be inappropriate. 16-bit lengths are bloat if the data could fit in 8 bits, and you'd try hard to make it fit. There's hardly any room for the stack, so even function arguments and local variables are a luxury to be avoided. Real 16-bit programs are often just one huge function and all global variables.
There are platforms with a 16-bit int and an address space larger than 16 bits, such as AVR and Amiga.
> Real 16-bit programs are often just one huge function and all global variables.
I've written software for extremely resource-constrained microcontrollers (7KB flash, 256 bytes of RAM) in "normal C" with many functions, code organized in multiple files, etc. In one case it was to replace firmware written by someone with your mindset, and the resulting code was smaller while having more features.
Just to nitpick: on Linux it depends on your distribution (or you) having enabled zram. Many modern distributions enable it by default on recent releases, but yours might not have.