
Why is this still a discussion?

> was no room for a change in plan

yes, pretty much

at least the concerns about it breaking unofficial distros, mostly related to some long-discontinued architectures, should never dictate how a distro focused on current desktop and server usage develops.

if you have worries/problems beyond unsupported things breaking, then it should be obvious that you can discuss them; that is what the mailing list is for, and that is why you announce intent beforehand instead of just putting things in the changelog

> complained that Klode's wording was unpleasant and that the approach was confrontational

it's mostly just very direct communication, which in a professional setting is preferable IMHO; I have seen too much time wasted on misunderstandings caused by people not saying things directly for fear of offending someone

though he still could have done better

> also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:

given the focus on Sequoia in the mail, my interpretation was that this is less about writing unit tests and more about using some AFAIK very well tested dependencies. But even when it comes to writing code, in my experience the ease with which you can write tests hugely affects how much testing actually gets done, and Rust makes it very easy and convenient to unit test everything all the time. That is, if we speak about unit tests; other kinds of tests are still nice but not quite at the same level of convenience.
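
To illustrate what that convenience looks like: tests can live in the same file as the code they cover, behind #[cfg(test)], and run with a plain `cargo test`. A minimal sketch (the function and names are just illustrative, not from the article):

    // Hypothetical example function; the point is how close the tests live.
    pub fn parse_version(s: &str) -> Option<(u32, u32)> {
        let (major, minor) = s.split_once('.')?;
        Some((major.parse().ok()?, minor.parse().ok()?))
    }

    // Compiled only for `cargo test`, so it costs nothing in release builds.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn parses_simple_version() {
            assert_eq!(parse_version("2.1"), Some((2, 1)));
        }

        #[test]
        fn rejects_garbage() {
            assert_eq!(parse_version("not a version"), None);
        }
    }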

> "currently has problems with rebuilding packages of types that systematically use static linking"

that seems like a _huge_ issue even outside of Rust; no serious Linux distro should have problems reliably rebuilding things after security fixes, no matter how they're linked

if I were to guess, this might be related to how the lower levels of dependency management on Linux are quite a mess, due to requirements from the '90s that are no longer relevant today, but which some people still obsess over.

To elaborate (sorry for the wall of text): you can _roughly_ fit all dependencies of an application (app) into 3 categories (a small Rust sketch contrasting them follows the list):

1. programs the system provides, (optionally) called by the app (e.g. over IPC, or by spawning a subprocess), communicating over well-defined, non-language-specific protocols. E.g. most cmd-line tools, or your system's file picker/explorer, should be invoked like that (that it often isn't is a huge annoyance).

2. libraries the system needs to provide, called using a programming-language ABI (Application Binary Interface, i.e. mostly the C ABI, which can have platform-dependent layout/encoding)

3. code reused so you don't rewrite everything all the time, e.g. hash maps, algorithms, etc.
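
Here is a minimal, hypothetical Rust sketch of what the three categories look like from one program. It assumes Linux and a pre-2024 Rust edition (newer editions want `unsafe extern` on the block); names are illustrative:

    use std::collections::HashMap;
    use std::process::Command;

    extern "C" {
        // Category 2: a system-provided library reached through the C ABI;
        // this symbol resolves at load time against the distro's libc.
        fn getpid() -> i32;
    }

    fn main() {
        // Category 1: a system-provided *program*, invoked as a subprocess
        // over a language-agnostic interface (argv in, bytes out).
        let uname = Command::new("uname").arg("-r").output();

        // Category 3: reusable code (here std's HashMap), compiled and
        // statically linked into *this* binary; its version follows the
        // application's toolchain/lockfile, not the distro's package set.
        let mut counts: HashMap<&str, u32> = HashMap::new();
        counts.insert("example", 1);

        println!("pid={}, uname ok={}, counts={:?}",
                 unsafe { getpid() }, uname.is_ok(), counts);
    }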

The messy part in Linux is that, for historic reasons, the latter two categories were not treated differently, even though they have _very_ different properties wrt. the software life cycle. Dependencies in the last category exist for your code and your specific use case only! The versions usable with your program are often far more limited; breaking changes are far more common; LTO is often desirable or even needed; other programs needing different, incompatible versions is the norm; even versions with security vulnerabilities can be fine _iff_ the vulnerabilities are on code paths not used by your application; etc. The fact that Linux has a long history of treating them the same is IMHO a huge fuck-up.

It made sense in the '90s. It hasn't for ~20 years now.

It's just completely in conflict with how software development works in practice, and it has put a huge amount of strain on OSS maintainers, due to stuff like distros shipping incompatible versions, potentially by (even incorrectly) patching your code... and end users blaming you for it.

IMHO Linux should have a way to handle such application-specific dependencies in all cases, from scripting dependencies (e.g. Python), through shared objects, to static linking (which doesn't need any special handling outside of the build tooling).

People have estimated the storage-size difference of linking everything statically, and AFAIK it's negligible relative to storage availability and pricing on modern systems.

And the argument that you might want to use a patched version of a dependency for "security" reasons fails if we consider that this has led to security incidents more than once. Most software isn't developed to support this at all, and the resulting bugs can be subtle and bad, up to the point of RCE.

And yes, there are special cases and gray areas in between these categories.

E.g. dependencies in the 3rd category that you want to be able to update independently, or dependencies from the 2nd that are often handled like the 3rd for various practical reasons, etc.

Anyway, coming back to the article: Rust can handle dynamic linking just fine, but only through the C ABI as of now. And while Rust might get some form of RustABI to make dynamic linking better, it will _never_ handle it for arbitrary libraries, as that is neither desirable nor technically possible.
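
Concretely, the C-ABI route looks something like this today: a minimal sketch, assuming a pre-2024 Rust edition and `crate-type = ["cdylib"]` in Cargo.toml:

    // lib.rs: exports an unmangled C-ABI symbol that any language, or the
    // dynamic linker, can resolve -- just like a C shared library.
    #[no_mangle]
    pub extern "C" fn add_saturating(a: i32, b: i32) -> i32 {
        a.saturating_add(b)
    }

    // By contrast, a generic function like `pub fn max<T: Ord>(a: T, b: T) -> T`
    // has no single compiled symbol to export, which is one reason an ABI
    // covering arbitrary Rust libraries isn't really possible.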

---

EDIT: Just for context, in the case of C you also have to rebuild everything that uses header-only libraries or pre-processor macros; not doing so is risky, as you would be mixing different versions of the same software in one build. Same (somewhat) for C++ with anything using template libraries. The way you speed this up is by caching intermediate build artifacts, and that works for Rust, too.
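
For Rust, one known way to do that is a compiler wrapper like sccache; this is just one option, and it assumes sccache is installed:

    # .cargo/config.toml
    # Route rustc invocations through sccache so unchanged compilation units
    # are served from a (local or remote) cache instead of being recompiled.
    [build]
    rustc-wrapper = "sccache"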




