I think governments funding software development could be a useful counterweight in an industry dominated by a few giant corporations, similar to how lots of countries have state funded media alongside commercial options.
But the EU forking Android is not a remotely realistic starting point. How do you persuade manufacturers to use it? Would Google license its proprietary apps to run on it? How will the small team of devs cope with whatever changes are coming in hardware next year? Forking Android is easy, making your fork a viable alternative is almost impossible.
In theory the EU could throw its weight around and demand that Google & OEMs work with 'EUdroid' if they want to sell phones in Europe. But that would be a massive political fight, much bigger than funding a few developers.
And those apps only get developed if there are enough users: a catch-22.
Microsoft didn't manage to make Windows Phone a viable competitor against Android & iOS, and they're about an order of magnitude bigger than any Linux-focused company. I hope the conditions shift and an open phone OS can take off, but I don't know what would enable it.
> As for religious wars over init systems, desktop environments and package managers, competition is making the options stronger, not weaker.
Competition can definitely improve things, but it's not universally positive. In particular, endless competition in parts of the operating system makes it hard to build anything on top of them. E.g. if you want to distribute an application for Linux, do you build a Flatpak, or a Snap? Or take a more traditionalist approach and make RPMs, DEBs, etc.? You either pick your favourite and leave out a large fraction of Linux users who disagree, or you have to do more than one of these. This is definitely a drag on the ecosystem.
I agree that most users don't care about the OS, though.
Generally, most Linux distributions are literally the same thing underneath. I recently did an LFS build (using version 12.3 of the book), and for the most part the same files were in the same directories in Debian, Arch and LFS.
I even had a look at the build scripts for some Arch packages (what makepkg consumes), and they run essentially the same commands the book has you type in manually.
The packaging critique comes up again and again over the years, but it's a bit overblown.
Building packages for different distributions isn't super difficult. I've built Arch packages using the ABS, as well as DEBs and RPMs, and they're all conceptually the same. Having had a quick skim of the Flatpak instructions, it doesn't look that different either.
If you don't want to bother with any of that, you can just have a script that drops the installation in /opt or ~/.local/. I'm pretty sure JetBrains Toolbox does that, but I'd need to check my other machine to confirm and I don't have access currently.
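For illustration, a drop-in installer along those lines can be just a short script. This is a minimal sketch in Python rather than anyone's actual installer; the app name, archive name and layout are all made up:

```python
#!/usr/bin/env python3
"""Minimal sketch of a 'just drop it in ~/.local' installer.

The app name, archive name and directory layout here are hypothetical.
"""
import shutil
import tarfile
from pathlib import Path

APP = "myapp"                    # hypothetical application name
ARCHIVE = f"{APP}-1.0.tar.gz"    # tarball shipped alongside this script

prefix = Path.home() / ".local"
install_dir = prefix / "opt" / APP
bin_dir = prefix / "bin"

# Unpack the application into ~/.local/opt/myapp, replacing any old version.
install_dir.parent.mkdir(parents=True, exist_ok=True)
if install_dir.exists():
    shutil.rmtree(install_dir)
with tarfile.open(ARCHIVE) as tar:
    tar.extractall(install_dir)

# Symlink the launcher into ~/.local/bin, which is usually on $PATH.
bin_dir.mkdir(parents=True, exist_ok=True)
launcher = bin_dir / APP
if launcher.exists() or launcher.is_symlink():
    launcher.unlink()
launcher.symlink_to(install_dir / "bin" / APP)

print(f"Installed {APP} to {install_dir}")
```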
If you build an application, The Right Way™ has always been, and probably always will be, a tarball. Leave the hassle of distributing your software to the distributions.
This is absolutely not a solution. It more or less works for a few big widely used applications like Firefox & Libreoffice, but more niche applications can't realistically get someone in every major Linux distro interested in packaging them. And if someone does take an interest once, there's no guarantee they stay interested to package updates. Distro maintainers need to be highly trusted (give or take AUR), so it's not easy to add many more of them.
On top of that, some of the biggest Linux distros will only release application updates with their own OS releases, on a cadence of months or years. Software developers expect to be able to deliver updates in hours to days (at least in the consumer space - highly regulated things like banking are different).
There are good reasons why RedHat and Canonical, the companies behind some of the biggest Linux distros, are pushing app distribution systems (Flatpak & Snap) which aim to be cross-distro and have upstream developers directly involved in packaging. There are absolutely downsides to this approach as well, but it would be nice if we could stop pretending that traditional distro packaging is a marvellous system.
This is a complete and utter non-starter for most software developers.
On pretty much every other operating system out there, I as the application author have control over the release cadence of my software. On some platforms, I can simply release an installer and be done. On others, I send the app through some form of certification process, and when it's complete it appears on the App Store.
Linux is rather unique in that it has tasked the distro maintainers to be tastemakers and packagers of the software that appears in their repositories. They control what version is used, what libraries it's linked against and their specific versions, how often it's updated, which additional features are added or removed before distribution, et cetera.
From the outside, this is complete and utter insanity. In the old days, you had to either static link the universe and pray or create a third-party repository per specific distro. Thank goodness these days we have numerous ways to cut distros out of the loop - Docker, Flatpak, AppImage and of course Steam if you're a gamer.
> Its not about competition. RedHat employees pushed their idea of things and the volunteers either ate it up or left.
This is a narrow view about how innovation happens in Linux and related software. Yes, Linux-focused companies are driving many of the changes, but there is plenty of exploration of ideas that happens outside of those companies.
I was thinking about maintaining and keeping things running (like you would do with cars and houses and anything else except software) and less about innovation and change. I doubt there is a shortage of ideas that are being explored.
Like, innovative and fresh stuff is cool, but at the end of the day you need to keep your business running and not breaking down.
Dunno about not caring about the OS. My mum, who's not techy, got persuaded to get a MacBook after Windows and is finding it a big learning curve. I remember when Walmart sold Linux machines: they gave up because buyers returned them when they found their Windows stuff didn't run. I'm a fairly normal user and I certainly care whether it's Mac, Windows or Linux. I wouldn't run Linux as my main OS, as I spend a lot of time in Excel.
People definitely care about applications they use a lot, and MS Office is a big one - even if LibreOffice would work just as well for many use cases, people are hesitant to give up what they know works.
The OS does make some difference, but I think that if all the same applications were available, a lot of people could switch without much difficulty. In some ways going from Windows to Linux might be easier than Windows to Mac, because many Linux distros are happy enough to conform to UI conventions that are familiar from Windows.
AppImage is, in fairness, a clever idea. But it's also yet another option. It only solves the mess of competing formats if it wins, if it becomes the normal way to publish user-facing software on Linux.
I don't think Stallman is an effective spokesperson or campaigner for his own cause, though. Corporate-friendly open source has got enormously popular, to the point where the biggest open source collaboration platform, Github, is owned by Microsoft. Stallman is not troubling them. It's his own side he's driving to irrelevance.
There was a time when getting bought up by a large company seemed like a great success and exit strategy. Nowadays the only things I want to spend my time making are things that are useful to the people around me, not things that are useful to the military-industrial complex and the surveillance state.
Definitely, yes. That's a prime example of how corporate friendly open source has been massively successful. By contrast, the first entry on the FSF's high priority projects page is a free phone operating system. They point to Replicant, an Android fork that... does not look particularly active.
There are so many ways one could work around this (apparent) limitation. Liberty software, unbound software, modifiable software. Go all in on libre rather than putting it in an awkward 'free/libre' combo - languages borrow words from each other all the time. Swap the order round and talk about software freedom, or digital freedom. Make a portmanteau like libreware...
I'm not especially good at this, and obviously 'free software' has the benefit of a few decades history among the people who actually know it. But almost anything seems better than a phrase which has a very obvious meaning that's not the one you meant, and the consequent need for fussy little explanations. Especially when most Free Software is also free software.
Alas this is something many have been debating for decades at this point. Unfortunately, there isn't a really clear answer. Both sides have good and bad points.
That seems to be looking at tracking and data collection libraries, though, for things like advertising and crash reporting. I don't see any mention of the kind of 'network sharing' libraries that this article is about. Have I missed it?
I wonder if part of the "no big deal" view comes from the difference between tech companies as they're most visible now - 'move fast and break things' - versus all the other industries that rely on computers.
It's easy to imagine a lot of tech companies today taking a 'fix on failure' approach, because they're built on the idea that you deploy changes quickly, so they can accept a higher risk of faults. It's harder to make changes in banks, airlines & power stations, thus they're more risk averse, and far more likely to invest resources in the kind of effort the author describes to find & fix issues in advance.
Conda package recipes have a preprocessor that uses "selectors" in comments to conditionally exclude certain lines in a YAML file. Not that this is particularly uglier than other YAML DSLs, but it has been used.
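For anyone who hasn't seen them: a recipe line like `- pywin32  # [win]` is kept or dropped before the YAML is parsed, depending on whether the bracketed expression is true for the target platform. A rough sketch of that preprocessing idea (a conceptual illustration, not conda-build's actual code):

```python
import re

# Matches a trailing "# [expr]" selector comment, as used in conda recipes.
SELECTOR = re.compile(r"#\s*\[(?P<expr>[^\]]+)\]\s*$")

def preprocess(text: str, namespace: dict) -> str:
    """Drop lines whose selector evaluates false; strip selectors otherwise.

    `namespace` provides platform booleans such as {'win': False, 'linux': True}.
    This is a conceptual sketch, not conda-build's real preprocessor.
    """
    kept = []
    for line in text.splitlines():
        m = SELECTOR.search(line)
        if m:
            # Evaluate the selector expression against the platform namespace.
            if not eval(m.group("expr"), {"__builtins__": {}}, namespace):
                continue                       # condition false: drop the line
            line = line[:m.start()].rstrip()   # condition true: keep, strip comment
        kept.append(line)
    return "\n".join(kept)

recipe = """\
requirements:
  run:
    - python
    - pywin32     # [win]
    - patchelf    # [linux]
"""
print(preprocess(recipe, {"win": False, "linux": True}))
```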
I believe you that token uploads will continue to be possible, but it seems likely that in a couple of years trusted publishing & attestations will be effectively required for all but the tiniest project. You'll get issues and PRs to publish this way, and either you accept them, or you have to repeatedly justify what you've got against security.
And maybe that's a good thing? I'm not against security, and supply chain attacks are real. But it's still kind of sad that the amazing machines we all own are more and more just portals to the 'trusted' corporate clouds. And I think there are things that could be done to improve security with local uploads, but all the effort seems to go into the cloud path.
> I believe you that token uploads will continue to be possible, but it seems likely that in a couple of years trusted publishing & attestations will be effectively required for all but the tiniest project.
That's what I think will happen.
> And maybe that's a good thing? I'm not against security, and supply chain attacks are real.
The problem is the attestation is only for part of the supply chain. You can say "this artifact was built with GitHub Actions" and that's it.
If I'm using Gitea and Drone or self-hosted GitLab, I'm not going to get trusted publisher attestations even though I stick to best practices everywhere.
Contrast that with someone that runs as admin on the same PC they use for pirating software, has a passwordless GPG key that signs all their commits, and pushes to GitHub (Actions) for builds and deployments. That person will have more "verified" badges than me and, because of that, would out-compete me if we had similar looking projects.
The point being that knowing how part of the supply chain works isn't sufficient. Security considerations need to start the second your finger touches the power button on your PC. The build tool at the end of the development process is the tip of the iceberg and shouldn't be relied on as a primary indicator of trust. It can definitely be part of it, but only a small part IMO.
The only way a trusted publisher (aka platform) can reliably attest to the security of the supply chain is if they have complete control over your development environment which would include a boot-locked PC without admin rights, forced MFA with a trustworthy (aka their) authenticator, and development happening 100% on their cloud platform or with tools that come off a safe-list.
Even if everyone gets on board with that idea, it's not going to stop bad actors. It'll be exactly the same as bad actors setting up companies and buying EV code signing certificates. Anyone with enough money to buy into the platform will immediately be viewed with a baseline of trust that isn't justified.
As I understand it, the point of these attestations is that you can see what goes into a build on GitHub - if you look at the recorded commit on the recorded repo, you can be confident that the packages are made from that (unless your threat model is GitHub itself doing a supply chain attack). And the flip side of that is that if attestations become the norm, it's harder to slip malicious code into a package without it being noticed.
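To make that concrete, the consumer-side check is roughly "does the provenance attached to this file point at the repo, commit and workflow I expect?". The sketch below is purely illustrative; the dictionary keys are hypothetical stand-ins, not the actual attestation schema PyPI uses:

```python
def check_provenance(provenance: dict, expected_repo: str) -> list[str]:
    """Illustrative provenance check. The keys used here ('repository',
    'commit', 'workflow') are hypothetical stand-ins, not a real schema."""
    problems = []
    if provenance.get("repository") != expected_repo:
        problems.append("built from a different repository than expected")
    if not provenance.get("commit"):
        problems.append("no source commit recorded, so the build can't be audited")
    if "release" not in provenance.get("workflow", ""):
        problems.append("not built by the expected release workflow")
    return problems

# Example: a provenance blob claiming the package came from the project's repo.
claims = {
    "repository": "github.com/example/project",   # hypothetical values
    "commit": "0123abc",
    "workflow": ".github/workflows/release.yml",
}
print(check_provenance(claims, "github.com/example/project") or "looks consistent")
```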
That's not everything, but it is a pretty big step. I don't love the way it reinforces dependence on a few big platforms, but I also don't have a great alternative to suggest.
Yeah, if the commit record acts like an audit log I think there’s a lot of value. I wonder how hard it is to get the exact environment used to build an artifact.
I’m a big fan of this style [1] of building base containers and think that keeping the container where you’ve stacked 4 layers (up to resources) makes sense. Call it a build container and keep it forever.
Thank you for being the first person to make a non-conspiratorial argument here! I agree with your estimation: PyPI is not going to mandate this, but it’s possible that there will be social pressure from individual package consumers to adopt attestations.
This is an unfortunate double effect, and one that I’m aware of. That’s why the emphasis has been on enabling them by default for as many people as possible.
I also agree about the need for a local/self-hosted story. We’ve been thinking about how to enable similar attestations with email and domain identities, since PyPI does or could have the ability to verify both.
If there is time for someone to work on local uploads, a good starting point would be a nicer workflow for uploading with 2FA. At present you either have to store a long lived token somewhere to use for many uploads, and risk that it is stolen, or fiddle about creating & then removing a token to use for each release.
> or you have to repeatedly justify what you've got against security.
The only reason I started using PyPI was because I had a package on my website that someone else uploaded to PyPI, and I started getting support questions about it. The person did transfer control over to me - he was just trying to be helpful.
I stopped caring about PyPI with the 2FA requirement since I only have one device - my laptop - while they seem to expect that everyone is willing to buy a hardware device or has a smartphone, and I frankly don't care enough to figure it out since I didn't want to be there in the first place and no one paid me enough to care.
Which means there is a security issue whenever I make a new package available only on my website should someone decide to upload it to PyPI, perhaps along with a certain something extra, since people seem to think PyPI is authoritative and doesn't need checking.
The 2FA requirement doesn't need a smartphone. You can generate the same one time passwords on a laptop. I know Bitwarden has this functionality, and there are other apps out there if that's not your cup of tea. Sorry that you feel pressured, but it is significantly easier to express a dependency on a package if it's on PyPI than a download on your own site.
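For what it's worth, generating those one-time codes on a laptop is only a few lines of Python. A minimal sketch, assuming the third-party pyotp library and a placeholder secret (the real secret is the setup key PyPI shows when you enrol a TOTP application):

```python
# pip install pyotp
import pyotp

# The base32 secret below is a placeholder; use the setup key shown
# when enrolling a TOTP application on the website.
secret = "JBSWY3DPEHPK3PXP"
totp = pyotp.TOTP(secret)

print(totp.now())  # prints the current 6-digit code, same as a phone app would
```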
Sure. But PyPI provides zero details on the process, I don't use 2FA for anything else in my life, no one is paying me to care, I find making PyPI releases tedious because I inevitably make mistakes in my release process, and I have a strong aversion to centralization and dependencies[1][2].
I tell people to "pip install -i $MY_SITE $MY_PACKAGE". I can tell from my download logs that this is open to dependency confusion attacks as I can see all the 404s from attempts to, for example, install NumPy from my server. To be clear, the switch to 2FA was only the triggering straw - I was already migrating my packages off of PyPI.
Finally, I sell a source license for a commercial product (which is not the one which got me started with PyPI). My customers install it via their internally-hosted PyPI mirrors.
I provide a binary package with a license manager for evaluation purposes, and as a marketing promotion. As such, I really want them to come to my web site, see the documentation and licensing options, and contact me. I think making it easier to express as a dependency via PyPI does not help my sales, and actually believe the extra intermediation likely hinders my sales.
[1] I dislike dependencies so much that I figured out how to make a PEP 517 compatible version that doesn't need to contact PyPI simply to install a local package. Clearly I will not become a Rust developer.
[2] PyPI support depends on GitHub issues. I regard Microsoft as a deeply immoral company, and a threat to personal and national data sovereignty, which means I will not sign up for a GitHub account. When MS provides IT support for the upcoming forced mass deportations, I will have already walked away from Omelas.
In short, use "backend-path" to include a subdirectory which contains your local copies of setuptools, wheel, etc. Create a file with the build hooks appropriate for "backend-path". Have those hooks import the actual hooks from setuptools. Finally, set your "requires" to [].
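To give a flavour of what that shim can look like (my reconstruction, not the parent's actual code): pyproject.toml would set requires = [], build-backend = "local_backend" and backend-path = ["_vendor"], where _vendor holds the vendored setuptools, and the shim module just re-exports the real hooks:

```python
# _vendor/local_backend.py -- hypothetical shim; the directory layout and
# module name are my illustration, not the parent poster's actual code.
#
# PEP 517 only requires the backend to expose build_wheel/build_sdist
# (plus optional hooks), so re-exporting setuptools' implementations
# from the vendored copy found via backend-path is enough.
from setuptools.build_meta import (  # resolved from the vendored setuptools
    build_sdist,
    build_wheel,
    get_requires_for_build_sdist,
    get_requires_for_build_wheel,
    prepare_metadata_for_build_wheel,
)
```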
Doing this means taking on the support burden of maintaining setuptools, wheel, etc. yourself. You'll also need to include their copyright statements in any distribution, even though the installed code doesn't use them.
(As I recall, that "etc" is hiding some effort to track down and install the full list of packages dragged in, but right now I don't have ready access to that code base.)
I think there's been some concern that people compare the speed of Firefox with a bunch of random extensions against Chrome with none. I wonder if the 'refresh' is an attempt to get away from that at the point where someone might be giving Firefox another chance.
I don't remember if the refresh offer appeared back in the XUL days, when there was much more scope for extensions to hurt the experience.
that seems to me like a bad faith interpretation of what I wrote—I'm using "bloat" to mean all additional changes beyond the vanilla ff installation
re: "ridiculous", do you think it's likely that there's a deep conspiracy within mozilla (at the behest of google) to make users watch ads? or were you hoping that they'd write a whitelist of adblockers that should never be removed, even when wiping the browser?
By far the biggest hog of resources for me is Dark Reader. I don't think any other extension even comes close. I still use it, though, because adding 300ms to a page load doesn't faze me.
I only mention this because the effect of those things could be more effectively curtailed... if the users weren't encouraged to hide the shortcomings from the application.
Sweeping things under the rug, you know?
A browser should have history, data, and extensions. Getting rid of those things doesn't qualify as a fix. It's a diagnostic step!