> The main reason I use Linux is because it's the only platform that actually respects me as a user.
Exactly this. And one thing proprietary software is really bad at nowadays is being economical with my attention and not wasting it. Imagine this: one day I arrived at work, logged in to the Windows box, opened the web browser to read new mail about the thing we were furiously and deeply working on, and what I got instead was some in-browser news advertisement from MSN about sexually abused teenagers. You can imagine how hard I cursed. Try placing some yellow-press front page over your boss's keyboard while he/she is having their morning coffee, and see what happens.
> While some Linux purists dislike containerized application installation programs such as Flatpak, Snap, and AppImage, developers love them. Why? They make it simple to write applications for Linux that don't need to be tuned just right for all the numerous Linux distributions.
The good thing is that for end users, Guix and Nix (as package managers) cover exactly the same set of features - but both are much friendlier to developers than containerized apps. And of course they are truly FLOSS and "open source" in the sense that everything is built from source and the sources are readily available to the users. This matters, because it makes the software friendlier: it is user-friendly because it is written by the users, unlike software from a party that has other things as its top priority.
> Genuinely interested: what do people think of their tons of third-party dependencies in Rust?
Here are three things I think, and in fact they have nothing to do with Rust:
1. The easier it is to add dependencies, the more dependencies will be added on average - unless you work purposefully against that.
2. The effect of a rising average number of dependencies in libraries is that the libraries depending on them accumulate more dependencies as well, and so do their dependencies' dependencies, and so on - up to dependency graphs of several hundred nodes, much like exponential growth. An example would be the dependency graph of jquery.
3. I observe that this "exponential" growth can have chain-reaction-like effects, like a mass of U235 that reaches criticality. Below the critical mass, some stray neutrons might trigger a few fissions, but these die out. Above it, fissions produce neutrons which produce more fissions, and so on. The same can happen with complexity in multi-component software: at some point, complexity goes through the roof.
And the latter is especially true if backwards compatibility is not strictly observed, since backwards-incompatible changes tend to be infectious: they often make their client components (the parents in the dependency graph) backwards-incompatible as well. In other words, there is breakage that propagates up the dependency graph. That breakage might die out and be containable by local fixes, or it might propagate further. And once your dependency graph becomes large enough, it is almost guaranteed that you have breakage.
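To make the propagation point concrete, here is a tiny toy sketch in Rust (the package names and the graph are entirely made up): it counts how many packages a single application pulls in transitively, and which packages are put at risk when one leaf dependency ships a breaking change.

```rust
use std::collections::{HashMap, HashSet};

// Toy dependency graph: package -> direct dependencies (names are invented).
fn toy_graph() -> HashMap<&'static str, Vec<&'static str>> {
    HashMap::from([
        ("app", vec!["http", "json"]),
        ("http", vec!["tls", "json"]),
        ("tls", vec!["bignum"]),
        ("json", vec!["unicode"]),
        ("bignum", vec![]),
        ("unicode", vec![]),
    ])
}

// All transitive dependencies of `root` (iterative depth-first walk).
fn transitive_deps<'a>(
    graph: &HashMap<&'a str, Vec<&'a str>>,
    root: &'a str,
) -> HashSet<&'a str> {
    let mut seen = HashSet::new();
    let mut stack = vec![root];
    while let Some(pkg) = stack.pop() {
        for dep in graph.get(pkg).into_iter().flatten() {
            if seen.insert(*dep) {
                stack.push(*dep);
            }
        }
    }
    seen
}

// Breakage propagates *up*: every package that transitively depends on the
// broken one is itself at risk.
fn affected_by<'a>(
    graph: &HashMap<&'a str, Vec<&'a str>>,
    broken: &'a str,
) -> Vec<&'a str> {
    graph
        .keys()
        .copied()
        .filter(|&pkg| transitive_deps(graph, pkg).contains(broken))
        .collect()
}

fn main() {
    let graph = toy_graph();
    let pulled_in = transitive_deps(&graph, "app");
    println!("app pulls in {} packages: {:?}", pulled_in.len(), pulled_in);
    println!("a breaking change in bignum affects {:?}", affected_by(&graph, "bignum"));
}
```

In this six-package toy graph the application already pulls in five packages, and a breaking change in the single leaf package ripples up to three of them; scale the same mechanism to a graph of several hundred nodes and the chain-reaction picture above follows.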
All these things together are why I believe that systems like NixOS or Guix are the future (but of course there might be other developments in that space).
> All these things together are why I believe that systems like NixOS or Guix are the future (but of course there might be other developments in that space).
That, or actually keeping control over the dependencies?
I think it is always wise to constrain unneeded dependencies. And this matters even more in embedded systems.
But on the other hand, programming artefacts, languages, and their library ecosystems compete in terms of features, which usually leads to an ever-growing number of dependencies. At least that is what we observe. Even if not all of it is really necessary, it would probably be hard to reverse in general.
> Which tbh is a bad thing. Just because change a doesn't textually touch change b doesn't mean they don't interact.
A good example of this is code that takes several locks: different functions have to acquire them in the same order, or a deadlock will result. That is a lot of interaction, even though the changes might happen on completely different lines.
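A minimal sketch of that situation, with `std::sync::Mutex` standing in for whatever locking primitive the real code uses (the names are invented): each function looks fine on its own, and a diff of one never textually touches the other, yet running them on two threads can deadlock because they take the same two locks in opposite order.

```rust
use std::sync::Mutex;

// Two shared resources, each behind its own lock (illustrative only).
static ACCOUNTS: Mutex<Vec<u64>> = Mutex::new(Vec::new());
static AUDIT_LOG: Mutex<Vec<String>> = Mutex::new(Vec::new());

// Change A: locks ACCOUNTS first, then AUDIT_LOG.
fn transfer(amount: u64) {
    let mut accounts = ACCOUNTS.lock().unwrap();
    let mut log = AUDIT_LOG.lock().unwrap(); // second lock
    accounts.push(amount);
    log.push(format!("transferred {amount}"));
}

// Change B: locks AUDIT_LOG first, then ACCOUNTS - the opposite order.
// If transfer() and reconcile() run on different threads, each can hold its
// first lock while waiting forever for the other's: a classic deadlock.
fn reconcile() {
    let mut log = AUDIT_LOG.lock().unwrap();
    let mut accounts = ACCOUNTS.lock().unwrap(); // second lock
    log.push(format!("reconciled {} entries", accounts.len()));
    accounts.clear();
}

fn main() {
    // Running both concurrently *may* hang, depending on timing.
    let t1 = std::thread::spawn(|| transfer(42));
    let t2 = std::thread::spawn(reconcile);
    t1.join().unwrap();
    t2.join().unwrap();
}
```

The fix - agreeing on one global lock order - is not visible in any diff of either function alone; it is exactly the kind of abstract invariant that no version control system can check for you.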
And I think that's generally true for complex software. Of course it is great if the compiler can prove that there are no data races, but there will always be abstract invariants which have to be met by the changed code. In very complex code it is essential to be able to bisect, and I think that only works if you have a defined linear order of changes in your artefact. Looking at a graph of changes can only help you understand why some breakage happened; it cannot prevent it.
> 1) git will not accept the push because it's not on top of current master branch, person B needs to fetch and merge/rebase before pushing again.
But is this not the right thing to do? A kernel is a complex piece of software. Changes in one place can have very non-obvious consequences in other places (think of changes that cause a deadlock because locks are acquired in the wrong order). Of course, it would be nice in theory to know that a change to, say, documentation or a typo fix in a comment does not affect the Ethernet driver or the virtual file system layer, but that comes down to the architecture of the project - it is not something a version control system can prove.
Given that, it seems desirable to me that the source tree has as few different variations - permutations of how to get there, and so on - as possible, since this makes testing, and things like bisecting for a broken lock or some other violated invariant, much easier.
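To spell out why a linear order helps so much here: bisecting is essentially a binary search over the history, so a regression in a single ordered sequence of n changes is found in roughly log2(n) test runs. A minimal, hypothetical sketch (the revision numbers and the `is_good` predicate are invented; `is_good` stands in for "check out this revision and run the tests"):

```rust
/// Given a *linear* sequence of revisions and a test that is good up to some
/// point and bad from there on, binary search finds the first bad revision.
fn bisect(revisions: &[u32], is_good: impl Fn(u32) -> bool) -> Option<u32> {
    let (mut lo, mut hi) = (0usize, revisions.len());
    // Invariant: everything before `lo` is good, everything from `hi` on is bad.
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        if is_good(revisions[mid]) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    revisions.get(lo).copied()
}

fn main() {
    // Ten revisions; pretend the lock-ordering bug slipped in at revision 107.
    let revisions: Vec<u32> = (100..110).collect();
    let first_bad = bisect(&revisions, |rev| rev < 107);
    println!("first bad revision: {first_bad:?}"); // Some(107)
}
```

With many equally valid permutations of how the tree came to be, there is no single ordered sequence to search over, and this cheap logarithmic procedure stops being straightforward.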
I worked for some time at an industrial/embedded company where, in order to build all the software, you selected "build all" in a menu and it built everything - more than four million lines of code.
It was a build system that was a pure pleasure to work with, not least, I think, because it did not try to solve problems which turn out to be intractable in the general case.
They are not going to have these supply-chain issues.