
> Get rid of the requirement that there is only one stable (minor) version of a package in the distribution at one time.

I think this requirement made sense when disk space was scarce.

I think this requirement makes sense if you trust that your distro is always better at choosing the 'best' version of a dependency that some software should use than the software author.

Nowadays, I think neither is generally true. Disk is plentiful, distro packages are almost always far more out of date than the software's original source, and allowing authors to ship software with specific pinned dependency versions reduces bugs caused by dependency changes and makes providing support for software (and especially reproducing end-users' issues) significantly easier.

Isolating dependencies by application, with linking to avoid outright duplication of identical versions (a la pnpm's approach for JS: https://pnpm.io/), is the way to go, I think. Honestly, it feels like that's the way things have already gone, and the distros are just fighting it tooth and nail whilst that approach takes over regardless.
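
To make the linking idea concrete, here's a rough sketch of the scheme pnpm-style tools use (this is not pnpm's actual code; the store path and function names are made up): a single content-addressed store plus hard links into each application's private tree, so identical versions share bytes on disk.

    import hashlib
    import os
    from pathlib import Path

    STORE = Path("/var/lib/pkg-store")  # hypothetical global content-addressed store

    def store_file(src: Path) -> Path:
        """Put a file into the store exactly once, keyed by its content hash."""
        data = src.read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        dest = STORE / digest[:2] / digest
        if not dest.exists():
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_bytes(data)
        return dest

    def link_into_app(src: Path, app_env: Path) -> None:
        """Give an application its 'own' copy of a dependency file without
        duplicating identical bytes: the per-app path is a hard link to the
        single stored copy, so two apps pinning the same version pay for it once."""
        stored = store_file(src)
        app_env.mkdir(parents=True, exist_ok=True)
        target = app_env / src.name
        if not target.exists():
            os.link(stored, target)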



> a la pnpm's approach for JS: https://pnpm.io/

Ah, JS. How many days has it been since the last weekly "compromised npm package infecting everything" incident? If you're holding that up as the gold standard, you must be the world's laziest black hat.

> Disk is plentiful,

I recently had to install a Chrome snap because Chrome is the new IE6 and everyone is all over Chrome-exclusive APIs as if they were the new ActiveX. Over a gigabyte of dependencies for one application, and given the trend of browser-based desktop applications? I would like to have space left for my data after installing the programs I need for work.


It's not just about disk space, though.

Distros assume responsibility for fixing major bugs and security vulnerabilities in the packages they ship. Old versions often contain bugs and vulnerabilities that new versions don't. Distros have two choices here: either ship the new version and remove the old version, or backport the fix to the old version.

Continuing to ship the old version without the fix is not an option -- even if you also ship the new version -- because some programs will inevitably use the old version and then the distro will be on the hook for any resulting hacks. Backporting every fix to every version that ever shipped is also not a realistic project.

Here in the startup world we often forget that there's a whole other market where many people would gladly accept 3-year-old versions in exchange for a guarantee of security fixes for 5-10 years. Someone needs to cater to this market, and the (non-rolling) distros perform that thankless task because individual developers won't.


> Distros assume responsibility for fixing major bugs and security vulnerabilities in the packages they ship.

I think they should just ship Python programs, not libraries. They could check that the libraries a given Python program uses are safe in the versions it uses.

And just not care whether each Python program has a separate copy of the libraries, or whether a particular version of a particular library is shared between Python programs by the Python environment.

Distributions could give up responsibility for sharing Python packages between Python programs without giving up responsibility for the security of those programs.
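
A minimal sketch of what that check could look like (the advisory table and lockfile format here are assumptions, not any distro's actual tooling):

    from pathlib import Path

    # Hypothetical advisory table a distro security team would maintain:
    # package name -> versions known to be vulnerable.
    KNOWN_BAD = {
        "requests": {"2.5.0", "2.5.1"},
        "pyyaml": {"5.1"},
    }

    def audit_program(lockfile: Path) -> list[str]:
        """Read a 'name==version' lockfile shipped with a program and return
        the pinned dependencies whose versions are known to be bad."""
        findings = []
        for line in lockfile.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            if version.strip() in KNOWN_BAD.get(name.strip().lower(), set()):
                findings.append(f"{name}=={version}")
        return findings

A program with any finding would then be flagged and re-shipped as a whole, rather than patched through a shared copy.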


So, patch two dozen copies of slightly different versions of the same library included in all those programs, instead of patching just one?


Why not? It's cheap resource-wise, whereas dependency hell is potentially debilitating. For some reason many proponents of the package-management status quo are blind to this. Having multiple versions of a dependency is only bad insofar as it's "messy"; it isn't objectively bad. But having a system that breaks applications because two or more can't agree on a package version is objectively bad. It's arguing aesthetics versus getting the job done. A poor position.

Windows, for all its faults, doesn't have this problem. It will happily accommodate multiple versions of, say, .NET as needed.


Disk is cheap. RAM is cheap. Man-hours are not. Distros are maintained by people, who are often volunteers. You are asking them to do extra work (i.e. porting the same patch to multiple versions of the same library) so that someone else can have it easy. But why should they? Why not the other way around?

It's not just aesthetics. If a new vulnerability is found in, say, libjpeg, then every Windows app that uses libjpeg needs to be updated separately. Tough luck if your favorite image manipulation tool takes a while to catch up. On the Linux side of the fence, the distro patches libjpeg and every app is automatically fixed. This is a huge win in terms of protecting the user. Why should we give up that benefit just because some developer wants to use his own special snowflake version of a library?
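
To illustrate the scale of that win, a quick sketch (it assumes a Linux system with ldd available; the function is hypothetical) that lists which executables would all be covered by a single shared-library update:

    import subprocess
    from pathlib import Path

    def binaries_linking(libname: str, bindir: str = "/usr/bin") -> list[str]:
        """List executables under bindir whose dynamic dependencies (per ldd)
        mention libname. Every one of these is fixed at once when the distro
        updates that single shared library."""
        hits = []
        for exe in sorted(Path(bindir).iterdir()):
            if not exe.is_file():
                continue
            try:
                out = subprocess.run(["ldd", str(exe)], capture_output=True,
                                     text=True, timeout=5).stdout
            except (OSError, subprocess.SubprocessError):
                continue
            if libname in out:
                hits.append(exe.name)
        return hits

    # e.g. binaries_linking("libjpeg") typically returns dozens of names on a desktop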


Not managing that would require less work, not more; their position is making more work for themselves. The point was to prevent the dependency hell of matching the wrong package versions, something that can occasionally happen on Windows. The problem is, the current form of management causes the far worse dependency hell of applications requiring conflicting versions.

I have maybe had Windows get confused about dependency versions twice ever, and both times it was a driver INF for a virtual device. I will grant that fixing the problem required a fair amount of work by Windows standards, but frankly not all that much by the standards of some of the more hands-on distros.

I have had Linux tell me I can't install an application because it wanted a different version of Lib-whatever than what something else wants many many times.

"Why should we give up that benefit just because some developer wants to use his own special snowflake version of a library?"

Odd that you claim major distros are built by a small group of volunteers, but the maintainers of much smaller and less well-supported applications need to suck it up and use whatever version the distro maintainers decide on.

Most major distros are not volunteer-run and haven't been for ages. Ubuntu, RHEL, SUSE, Pop!_OS, the list goes on. These are commercial products with full-time paid developers. In the case of Ubuntu they contribute a major chunk of the work back upstream to Debian, and in the case of RHEL they are the upstream. Most minor distros are downstream beneficiaries of the big players.

Contrary to that, it's still common for many FOSS apps and utilities to be one-man jobs. Maybe the guy doesn't have the resources to keep up with the breakneck pace of some update cycles. What if they decided to go with an LTS build intentionally? What if it's a simple package that doesn't have security issues yet gets updated for other reasons? What if the version they are using has core functionality that was EOL'd in a newer release, so they can't move on without major rework they can't manage?

There are a million reasons why a project may want to stick with an older version. Also, allowing the ability to update all packages does not require draconian control over which packages can be installed. That runs against the whole notion of user control. If the user wants multiple concurrent versions on their system, who are you to say they can't? FOSS means freedom.


You should patch the two dozen Python programs that use vulnerable versions of libraries as whole units. Treat each Python program as if it were a single executable file about which you only know which versions of libraries it has inside it. If any of those is known to be vulnerable, treat the whole program as a security threat.
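
A sketch of treating the bundle as one unit (the path is hypothetical; this just uses the standard importlib.metadata API to read what's inside a program's private environment):

    from importlib.metadata import distributions

    def inventory(bundled_site_packages: str) -> dict[str, str]:
        """Report which library versions ship inside a program's private
        environment, treating the program plus its bundle as one unit."""
        return {
            dist.metadata["Name"]: dist.version
            for dist in distributions(path=[bundled_site_packages])
        }

    # e.g. inventory("/opt/someapp/venv/lib/python3.11/site-packages")
    # The distro can then flag the whole program if any entry is known-vulnerable.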


> Disk is plentiful,

What makes you think so? SSDs aren't exactly stellar in the cost-per-TB department, as will be the case with each new higher-performance storage technology. Plenty of people cannot afford the prices of new Western tech either, what about them?


> SSDs aren't exactly stellar in the cost-per-TB department

First of all, 1TB for binaries and libraries may as well be infinite. Secondly, you can get a 1TB SSD for under $100, which is pretty damned inexpensive when you consider it took until 2009 to get HDDs that affordable.


On a desktop, sure. Now that more and more laptops are starting to have soldered-down storage, this argument falls apart.


If you have a laptop with only half a gig soldered in, you get what you pay for.

It's almost as bad as complaining about having trouble running modern stuff with your 286


It's plentiful relative to the size of compiled or source code. E.g. the biggest .so file on this system right now is a <150MB libxul.so. That's only used by one piece of software anyway, and the drop-off is pretty steep after that. A 64GB drive (tiny these days) can fit more than four hundred copies of that unusually large file.
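
Back-of-the-envelope, in decimal units:

    64_000 // 150   # MB on a 64 GB drive / MB per libxul.so-sized copy ≈ 426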


> cost-per-TB

There you have it: You measure in TB, not gigabytes, not megabytes.

Python packages are megabytes.


Not if they pull in all of their dependencies: PyQt would have a complete copy of all the Qt binaries and a complete Chromium install, because of course Qt includes a browser-based HTML viewer. Python packages are gigabytes.


What distro is pulling PyQt as a dependency of Python? There is a difference between "dependency" and "every package which has the word python in the description".


PyQt only contains the bindings. You share the same Qt environment across your system (hence qmake needs to be in your path). The Python package itself is not that big (~10 MB).


The comment at the top of this chain is about letting every package specify its own versioned dependencies. So how would that global version work out when the Python bindings need Qt 5.1 and some other piece of software specifies 5.2?


Qt minor releases are backwards binary and source compatible, but not forward. Therefore this would not be an issue.


That guarantee only applies to Qt itself, I would expect that the newer Qt binary was also compiled against all the other newest versions of its own dependencies. Good luck finding a backwards compatibility promise for all of them.


Fortunately there are a thousand gigabytes in a terabyte and nobody is using a system with single or double digit gigs of storage anymore.

Unless you are trying to save a buck it seems 1TB is the standard today. My primary desktop has 4.


> Unless you are trying to save a buck it seems 1TB is the standard today.

I suppose part of the problem is that while getting a 1 TB SSD instead of a 512 GB or even 256 GB one may not be overly expensive (for a middle-class person in a wealthy country, anyway), due to the way OEM laptop product lines are often stratified, you may need to either buy your 1 TB SSD separately or get an altogether higher-specced model than you perhaps otherwise would. The latter especially isn't cheap.

There might be some customization options but sometimes little customization is available. That's probably one of the ways people end up with relatively small-capacity SSDs.

It's similar with RAM: a higher capacity isn't that much more expensive in theory, but in practice it may be.

This is irrelevant for custom-built desktops, but lots of people are running only laptops nowadays. I'd like to see better customizability for those builds, as well as upgradability and replaceability, but the options are often limited.


> Unless you are trying to save a buck it seems 1TB is the standard today.

Buying a 1TB external SSD would more than double the cost of a Raspberry Pi 4. That, and my ancient BeagleBoard does fine running from 32 GB.

> My primary desktop has 4.

Those are rookie numbers for a primary system. Of course, my office system next to it is a lot lower-specced, with the test system next to it even lower.


Embedded systems really shouldn't be brought into play here, but even then a 256 GB microSD card for the Pi is $25 and itself far overkill. My entire primary desktop OS, firmware, DEs, and very extensive package set fits in 15 GB. Multiplying my primary system by 10 and sticking it on a Pi using $25 of storage, with plenty to spare, is still not an argument against binary sizes, especially since there are niche distro spins for that niche space anyway.


Eh I think embedded systems are a different class. I know some people run RPis as their desktops but they are definitely the minority.

>Those are rookie numbers for a primary system

4TB suits my needs and use case fine. And that's not counting my NAS.


The eMMC on a Beaglebone Black is 4GB. Sure you can boot off an SD card but that's less robust (though, I guess you can use the SD card for all your virtualenvs...).


A typical virtualenv for a project I work on is > 500 MB. That adds up.


Text (e.g. Python source code) is really quite small. Far too small to count unless you're a distro specifically targeting low-end machines.


> I think this requirement made sense when disk space was scarce.

No, the main reason is security. I need a distro to guarantee me that the libraries I use are going to stay the same for the next 3 to 5 years, while also receiving small targeted patches.

> I think this requirement makes sense if you trust that your distro is always better at choosing the 'best' version of a dependency that some software should use than the software author.

No, it just has to be better at choosing versions than picking them randomly using pip.

Furthermore, when thousands of developers use the same combination of libraries from a distro the stack receives a ton of testing.

The same cannot be said when each developer picks a random set of versions.


One case where disk space is somewhat scarce is on shared academic computing clusters (which often provide many versions of things via some module system, but your $HOME can have a quota that's just 30GB).



