I don't disagree that memory access is nowadays critical for speed, but I haven't found Rust standing in the way of optimizing it.
As I've pointed out in the article, Rust does give you precise control over memory layout. Heap allocations are explicit and optional, even in safe code. You don't have to give up the nice features either (e.g. closures and iterators can live entirely on the stack, no allocations needed).
Move semantics enables `memcpy`ing objects anywhere, so they don't have a permanent address, and don't need to be allocated individually.
In this regard Rust is different from e.g. Swift and Go, which claim to have C-like speed, but will autobox objects for you.
Bulk operations are not really about layout. They are about whether you mentally consider each little data structure to be an individual entity with its own lifetime, or not, because that mental model determines what the code looks like, which in turn determines how fast it is. (Though layout does help with regard to cache hits and so forth.)
I don't know what you're implying Rust does, but I'll reiterate that Rust lifetimes don't exist at code-generation time. They're not a runtime construct and have zero influence over what code does at run time (e.g. the mrustc compiler doesn't implement lifetimes, yet bootstraps the whole Rust compiler just fine).
If you create a `Vec<Object>` in Rust, all of the objects will be allocated and laid out together as one contiguous chunk of memory, same as `malloc(sizeof(struct object) * n)` in C. You can also use `[Object; N]` or `ArrayVec`, which is identical to `struct object arr[N]`. It's also possible to use memory pools/arenas.
And where possible, LLVM will autovectorize operations on these too. Even if you use an iterator that in source code looks like it's operating on individual elements.
Knowing your other work I guess you mean SoA vs AoS? Rust doesn't have built-in syntax for these, but neither does C that we're talking about here.
> They're not a runtime construct, they have zero influence over what code does at run time (e.g. mrustc compiler doesn't implement lifetimes, but bootstraps the whole Rust compiler just fine).
This kind of reasoning seems like it makes sense, but actually it is false. ("Modern C++" people make the same arguments when arguing that you should use "zero-cost abstractions" all over the place). Abstractions determine how people write code, and the way they write the code determines the performance of the code.
When you conceptualize a bunch of stuff as different objects with different lifetimes, you are going to write code treating stuff as different objects with different lifetimes. That is slow.
> If you create `Vec<Object>` in Rust, then all objects will be allocated and laid out together as one contiguous chunk of memory
Sure, and that covers a small percentage of the use cases I am talking about, but not most of them.
> When you conceptualize a bunch of stuff as different objects with different lifetimes, you are going to write code treating stuff as different objects with different lifetimes. That is slow.
This is not how lifetimes work at all. In fact this sounds like the sort of thing someone who has never read or written anything using lifetimes would say: even the most basic applications of lifetimes go beyond this.
Fundamentally, any particular lifetime variable (the `'a` syntax) erases the distinctions between individual objects. Rust doesn't even have syntax for the lifetime of any individual object. Research in this area tends to use the term "region" rather than "lifetime" for this reason.
Lifetimes actually fit in quite nicely with the sorts of things programs do to optimize memory locality and allocations.
> Sure, and that covers a small percentage of the use cases I am talking about, but not most of them.
Fortunately the other stuff you are talking about works just fine in Rust as well.
Rust's flavor of RAII is different from C++'s, because Rust doesn't have constructors, operator new, implicit copy constructors, and doesn't expose moved-out-of state.
Rust also has "Copy" types which by definition can be trivially created and can't have destructors. Collections take advantage of that (e.g. dropping an array doesn't run any code).
So I don't really get what you mean. Rust's RAII can be compiled to plain C code (in fact, mrustc does exactly that). It's just `struct Foo foo = {}` followed by an optional user-defined `bye_bye(&foo)` after its last use (note: that's not free/delete; the memory allocator doesn't have to be involved at all).
I suspect you're talking about some wider programming patterns and best practices, but I don't see how that relates to C. If you don't need per-object init()/deinit(), then for the same you wouldn't use RAII in Rust either. RAII is an opt-in pattern.
RAII is completely orthogonal to lifetimes, for one thing. You can have either without the other.
But, I am familiar with the kind of thing you're complaining about here, and frankly the mere existence of RAII is not its cause. Working with a large dataset, managing allocation/layout/traversal in a holistic way, you just... don't write destructors for every tiny piece. It works fine, I do it all the time (in both Rust and C++).
You haven't really explained in any detail what is slow about "treating stuff as objects with different lifetimes", and specifically how Rust differs there from C. Can you give an example?
Maybe you'd be interested to hear that Rust's borrow checker is very friendly to the ECS pattern, and works with ECS much better than with the classic OOP "Player extends Entity" approach.