"Linux is fragmented", and then goes on to list Free, Open, Net, Dragonfly and avoids listing a few more. "Poettering creepware". Web developers who use Google Analytics are "stupid".
I really don’t get the weird cult of people seemingly obsessed with Lennart. The man sets out to solve really hard problems in the Linux desktop space, isn’t afraid of tackling complexity when it shows up in the problem domain, and is obsessed with correctness. That last one gets him in trouble with people who want him to be more pragmatic, but I personally really appreciate it. I really admire him for being the poster child for knowing when to say no.
He pumps out really high quality software (seriously, read some of his source code; it’s really easy to hack on), is genuinely innovative, and all people seem to do is shit on him and make him out like he’s some Linux daemon laughing maniacally every time a sysadmin gets frustrated.
Anyone who uses the phrase “potteringware”, please listen to one of his conference talks, like at All Systems Go. He really is down to earth and really thinks through problems before tackling them.
Agreed. I think the hate against him stems from him just completely dismissing POSIX and the unix philosophy. Personally, I think this is a positive direction to move in. Let's stop worrying about the design principles that happened to underlie one of the operating systems that got popular. Instead, let's keep innovating and thinking of better ways to do things. That's what Poettering is doing.
One thing I’ve never understood is how network-manager relates to all this brokenness: on every single Linux distribution I’ve run, I’ve had to hack around network-manager to avoid random broken stuff. Similarly, what was wrong with ifconfig and friends? ip might be “better”, but its documentation has been horrible, and it obsoletes my 20 or so years of experience with Linux networking for no real gain.
Correct. And there are other people working on those projects too, even if we assume Lennart is that good. I found a fairly trivial DoS issue in systemd-resolved in ~10min with a fuzzer. (Years ago, fixed already) That shouldn't happen in a system obsessed with correctness.
I noticed that, too: the author doesn't mention the many-splendored versions of *BSD. My personal favorite is FreeBSD.
I like Linux, but I have mixed feelings about systemd, especially when I discovered that systemd runs its own DNS server, which prevents the DNS server I wrote from binding to port 53 using INADDR_ANY. Instead, I had to write code to enumerate the interfaces & corresponding IP addresses & bind to each one separately. Systemd's decision to run their own DNS server is a bit of overreach, IMHO.
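The workaround described above can be sketched roughly like this: instead of one wildcard (INADDR_ANY) socket on port 53, which collides with systemd-resolved's stub listener on 127.0.0.53, bind one socket per local address. The helper name and the hard-coded skip are illustrative; real code would enumerate addresses via getifaddrs/netlink and rebind when interfaces change.

```python
import socket

def bind_per_address(addresses, port):
    """Bind one UDP socket per local address instead of a single
    wildcard (INADDR_ANY) socket, skipping systemd-resolved's stub
    address so both DNS servers can coexist on the same port."""
    sockets = []
    for addr in addresses:
        if addr == "127.0.0.53":  # leave the stub listener alone
            continue
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((addr, port))
        sockets.append(s)
    return sockets
```

The extra bookkeeping (enumerating addresses, watching for interface changes, rebinding) is exactly the kind of code a DNS server shouldn't have needed in the first place.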
You don't have to use systemd-resolved. Most distros have been using systemd but not systemd-resolved for years. Fedora only just started using systemd-resolved by default in the most recent release.
It's like, yeah, that's a funny thing to also live in systemd.git, but systemd-resolved is functionally a separate product from systemd.
systemd isn't an init system and doesn't pretend to be one [note] - it's OS middleware with ambitions of becoming a distro unto itself. The systemd project assumes responsibility for numerous aspects of the system that have no technical justification for being under the same project umbrella. You might like systemd-homed, for instance, but it doesn't really have anything to do with the functions of init. Calling it an "init system" is like calling KDE a "window manager".
[note] It describes itself as "a suite of basic building blocks for a Linux system" - although in my opinion that implies more modularity than it possesses.
I just can't get myself worked up over systemd. Every in-principle objection I have to systemd also applies to the Linux kernel itself—but orders of magnitude "worse."
Principles aside, in reality, systemd has been a complete non-issue for me and a massive quality of life improvement for Linux as a desktop operating system.
That’s kind of where I am. I have yet to run into issues with systemd in practice. I suppose I did run into two but they both came down to old style init scripts that didn’t work right and so systemd couldn’t make sense of them.
Is it not possible to turn off things like systemd’s DNS server, etc? I imagine if you care about such things you should be able to reconfigure your OS to use something else instead.
I run it in practice but I don't trust it enough to run critical services under it (instead I use runit started from systemd), and it does have too much overreach, which bit me several times. For instance, setting the number of open files and other resource limits. Let's just say I'd rather use runit or even openrc instead.
It is possible to disable stuff like systemd-resolved or systemd-timesyncd, but it takes extra work, which should be paired with installing another package that accomplishes the same task (unbound, chrony), just as one MTA replaces another on Debian.
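On a Debian-ish system that swap could look something like the following; this is an illustrative sketch, not run here, and the package choices (unbound for DNS, chrony for time sync) are just the examples named above:

```sh
# Disable the bundled services
systemctl disable --now systemd-resolved systemd-timesyncd

# Install replacements that cover the same function
apt install unbound chrony

# /etc/resolv.conf is often a symlink to resolved's stub file; replace it
# so lookups go to the local unbound instead of the 127.0.0.53 stub
rm /etc/resolv.conf
printf 'nameserver 127.0.0.1\n' > /etc/resolv.conf
```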
I was careful not to include any value judgements in my comment. I simply wanted to lay to rest this "init system" business.
However, you ask a good question. Lots of people do like it, supposedly. But I think there's a large clue to its controversial nature in that people still call it an "init system", and pointing out that it isn't is interpreted by some as an attack.
Basically, it started out as an init system, which was used by Red Hat / Fedora, which was positioned well at a time when certain distros, notably Debian, used crufty SysV init scripts which they wanted to move on from. When Debian adopted systemd, after some debate, every Debian-based distro did too. This wasn't considered a particularly invasive change at the time. But systemd started to expand into its "middleware" position, absorbing vital system functions like udev. Distros which adopted systemd found themselves along for the ride, while distros that didn't increasingly found that software was starting to break, forcing them to maintain forks, or chisel out the piece of systemd that made it work and run it standalone. So was created a network effect that forced even more distros down the path of least resistance into systemd.
Meanwhile, amidst all the grumbling that this bait-and-switch was provoking, the systemd project leaders had an astonishingly bad attitude towards the maintainers of software that they were breaking, which fomented considerable personal animosity on top of the technical resentment. Not only were they driving a horse and cart through the standard ways of doing things, breaking a lot of stuff in the process, they were being rude about it.
So, more or less, people resent systemd because it's difficult to find - or build - a distro without it.
> When Debian adopted systemd, after some debate, every Debian-based distro did too. This wasn't considered a particularly invasive change at the time. But systemd started to expand into its "middleware" position, absorbing vital system functions like udev.
Your timing is off. udev became part of systemd in 2012. The big Debian debate was in 2013-2014, and was acrimonious enough to eventually cause several of the CTTE members to resign their position over it. One of the core issues driving the Debian systemd debate was the fact that GNOME was planning on dropping support for anything other than systemd-logind (or something like that), and there was definite concern that packages were already making decisions that were forcing everyone to adopt systemd as the only init system.
> Basically, it started out as an init system... But systemd started to expand into its "middleware" position, absorbing vital system functions like udev.
Not true. From the start it was positioned as much more than an init system. This is clear just from skimming the blog post announcing systemd: http://0pointer.de/blog/projects/systemd.html
Among other things, it talks about:
- deferring launch of some services until they are needed, to speed up boot
- launching additional services and shutting down others in response to hardware changes (e.g. adding a USB device)
- taking some or all socket management away from daemons, so that sockets can be ready before daemons are live, or daemons can be shut down when there is no activity
- “babysitting” services, including through cgroup-based process management
- managing (some of the) access to /home and other directories
It’s quite a broad-scoped announcement. Toward the end there is a “short list of other features”, numbered, ending at 20. Under “Where is this going?” he writes, “The feature set described above is certainly already comprehensive. However, we have a few more things on our plate.”
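The socket-management point from that announcement is the idea that later became .socket units: systemd holds the listening socket and only starts the daemon on first connection. A minimal hypothetical pair (the unit names and daemon path are made up) looks like this:

```ini
# echo.socket - systemd owns the listening socket from early boot
[Socket]
ListenStream=7777

[Install]
WantedBy=sockets.target

# echo.service - started lazily on the first connection; the daemon
# receives the already-open socket via sd_listen_fds()
[Service]
ExecStart=/usr/local/bin/echo-daemon
```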
systemd clearly started out as a broad-based solution, and reading through that post, looking at the problems it addresses, and considering that launchd and upstart tackled similarly many issues, it’s clear why it is so broad: the world has changed. In the 70s on a PDP-11 it was sufficient to launch some daemons at boot, keep an eye on getty, and call it a day. Maybe ~50 years later a new, more comprehensive system makes sense? When Linux runs laptops and phones that are endlessly reconfiguring themselves in response to network, Bluetooth, and USB changes, when systems are sleeping and waking routinely, when vastly more services run on a single machine, when conserving battery is important, when boot time can be reduced to seconds if things are better orchestrated? “Standard ways of doing things” have changed in every other sector of society, basically; why wouldn’t they finally change in OS service management?
Sure, but most of that introductory post was about service management, and mount management, which are two areas the init system was traditionally heavily involved in. Sure, the init system might not technically have concerned itself with either, but a lot of the ordering considerations were very heavily concerned with both of those.
Now, no one will dispute that systemd's service management capabilities far exceed what most historical init systems did. Heck, the traditional init system had inferior service management compared to Windows' Service Control Manager, since the classic init could not, for example, give a list of running services. And systemd is far more flexible and feature-rich than Windows' options now.
systemd-boot and timedated on the other hand are pretty darn disconnected from anything a classic init system was concerned with. It does feel like a lot of the non service/mount management parts of systemd are basically a grab bag of random stuff.
> This wasn't considered a particularly invasive change at the time.
It was pretty invasive - I tried to keep using Debian without systemd, but indirect dependencies always brought it in. Until I switched to Devuan.
And the dependencies brought in EVERYTHING. You could no longer mix and match, pick one part from here if it worked for you, and another part from there.
Which led to the criticism that "systemd is not modular". Which Poettering denied, pointing out that there are modules at systemd compile time, completely missing the point.
> So, more or less, people resent systemd because it's difficult to find - or build - a distro without it.
And I wouldn't have minded systemd as an INIT SYSTEM - the existing scripts WERE pretty bad. But systemd is like the Borg - it tries to assimilate everything. And it broke quite a bit of existing functionality along this path.
Red Hat produced systemd. Red Hat is the major contributor to Gnome. Gnome now refuses to run without systemd. Every major distro wants to carry Gnome, so...
Well, at least some distros offer a choice. Some other distros (Alpine, popular for containers, and Void, which powers my laptop) live happily without it. Various shims exist; elogind allows running things like Xfce without systemd.
Is that true? The Gentoo documentation says that you can choose your init system independently of your desktop environment, and puts openrc+gnome as an example:
Because most people like (or at least tolerate) it, but those people are not yapping their mouths about it all the time. IMHO systemd is actually one of the best improvements in the Linux ecosystem in the last 10 years.
Because you only hear the loud minority that's been complaining for the last decade. I've never had a problem with systemd, I remember the days before it became ubiquitous with absolute dread, but I don't spend my days proclaiming my views on every corner of the internet like the opposite side does.
It’s one of those polarizing things for which there is a small group of hyper vocal critics. When people discover something unexpected, they end up googling it and take a trip down the systemd rabbit hole.
The majority of people don’t give a hoot about it.
Running a distro is a ton of work. Writing your own one off init scripts per package is even more work.
The short answer is... Red Hat has money and threw it behind systemd. Like them or hate them, they in many ways decide the fate of Linux systems. So, other distros fall in line. Ubuntu is a contender, but gave in and followed suit too.
Void and Solus are both good distros that don't use systemd. I'd go as far as to call Solus the best out-of-the-box distro I've ever used.
You're half right, but writing init scripts is by far the easiest part of not running systemd. As a sibling comment lightly puts it - "various shims exist".
Writing a few is super simple. But distros have tens of thousands of packages. Most people use Ubuntu, RedHat, or Arch, or a derivative of one of them. All provide packages with systemd files. Heck, even most upstream packages ship them.
The parent mentions systemd-resolved and you mention systemd-homed. Most distros that use systemd don't use either by default.
IIRC the required parts of systemd are pretty much init-related, service-related (as in starting/stopping services based on certain conditions), udev and journald (which can and is often defaulted to forward to syslog).
Comments like yours make it seem like resolved, homed, nspawn, machinectl, networkd, cgtop, and all of the other utilities and optional daemons are required. They seem more like Konqueror in your KDE analogy, in that they are nice if you want to buy into the whole ecosystem but by no means required to use the core program (and many seem to use alternatives instead).
I'm by no means an expert on this, so please correct me if I'm wrong.
I recently discovered that systemd supports drop-in config files so I'd recommend putting the changed settings into something like /etc/systemd/resolved.conf.d/disable-stub.conf. That way your package manager isn't upset that files it controls are modified.
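Concretely, for the stub-listener case discussed upthread, the drop-in would be something like the fragment below. The file name is arbitrary (only the .conf suffix and directory matter); the [Resolve] section and the DNSStubListener= option come from resolved.conf(5):

```ini
# /etc/systemd/resolved.conf.d/disable-stub.conf
# Stop systemd-resolved from listening on 127.0.0.53:53,
# freeing port 53 for another DNS server.
[Resolve]
DNSStubListener=no
```

After creating it, `systemctl restart systemd-resolved` picks up the drop-in, and the shipped /etc/systemd/resolved.conf stays untouched.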
Folks at Red Hat noticed that the Linux userland architecture is not convenient for what they would like to build. So they are building systems which replace the traditional mechanisms: first DBus, then systemd, which is an umbrella of many services: init + service monitoring + containers, logging, interactive login, some networking, etc. I suppose that in 10 years a linux kernel + systemd could be a complete system, with little need for other userland.
> Systemd is a replacement for Unix... I suppose that in 10 years a linux kernel + systemd could be a complete system, with little need for other userland.
'Doing many things, none of them well' is actually the opposite of UNIX.
This needs to be stated yet again apparently - systemd is not one monolithic binary, it's a collection of tools in one repository.
Centrally developing many disparate pieces of software from one repository is what every BSD does so you can hardly argue that it's not "UNIX".
This is setting aside that systemd actually does all of those things better than the tools that came before. journalctl and systemd timers are fantastic. And good luck using the alternatives to manage cgroups.
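To illustrate the timers mentioned above: a cron-style nightly job is two small units (the names and the script path are hypothetical):

```ini
# backup.timer - schedules the matching backup.service
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# backup.service - the one-shot job the timer fires
[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-backup
```

`systemctl enable --now backup.timer` activates it, and `journalctl -u backup.service` then shows the output of every run, which is where the journalctl integration pays off over cron's mail-based reporting.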
Sometimes you want a different balance — say, in embedded systems. If you think about busybox, it does something similar, only doing a better job at looking like the traditional userland.
There is nothing bad about it. What I find bad is forcing everyone to jump on the same bandwagon, no matter which bandwagon. If I wanted choices made for me, I'd buy a Mac.
Hm. GNU exists to unify software under a single philosophical umbrella - the thing that all GNU software has in common is a fanatical attitude to software freedom.
What does all systemd software have in common? What's the unifying principle there?
I think the best answer is that systemd-the-project tries to be the things that all of the different distros historically had to do themselves, the things that didn't have an upstream, but maybe deserved to have an upstream so that they could be reused by other distros, and maybe benefit from more eyes and more engineering effort.
Each distro used to have to roll their own initscript system (with varying degrees of compatibility), so they made `systemd`. Each distro had their own init script to save/load the backlight state, so they made `systemd-backlight` (if you're going to replace the initscript system, you'll also have to replace the things that the initscripts for that system did). Each distro used to have to roll their own network configuration scripts (Debian's /etc/network, Arch Linux's netcfg and later netctl, ...), so they made `systemd-networkd`.
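For instance, the per-distro DHCP configuration that used to live in Debian's /etc/network/interfaces or Arch's netctl profiles collapses into one declarative networkd file; the file name and match pattern below are just an example:

```ini
# /etc/systemd/network/20-wired.network
[Match]
Name=en*

[Network]
DHCP=yes
```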
Some of this class of thing got picked up by other projects; each distro used to have to roll its own initramfs system; today Dracut exists, so systemd won't be doing that (unless the Dracut folks decide to team up with the systemd folks--remember gummiboot? gummiboot was renamed to systemd-boot).
In one of your other comments you wrote "in my opinion that implies more modularity than it possesses." While there are some nasty couplings (it'll take some patching to get systemd-nspawn to work on a non-systemd system), you can successfully use many of the systemd modules on an OpenRC system; it is a lot more modular than people give it credit for.
It wasn't quite the solution I was looking for. I wanted my DNS server to "just work" for people who downloaded it—I didn't want to force them to re-configure systemd to accommodate my software.
Tweaking my software to accommodate systemd-resolved took ~4 hours. It was an enjoyable dive, but I'd rather not have had to do it in the first place.
I continue to think it's a bit of an overreach for Fedora to include a DNS server by default.
I swear at this point there is not a soul that really understands how the low-level pieces of a Linux distro fit together.
systemd-resolved is a separate project from systemd whose only relation is that it’s in the same repo. It’s up to your distro builder to decide if they want to use it, and also up to your distro if they want to use the stub resolver; it’s not necessary for systemd-resolved to work.

The reason they have the stub listener is that on Linux every application is supposed to use glibc NSS to resolve names, but there are real-world applications that try, incorrectly, to parse resolv.conf and do DNS queries themselves. This breaks things if your assumption is that all DNS queries are going through systemd-resolved. And this project isn’t the first: Ubuntu and others have used dnsmasq as the stub resolver for the same reason.

Using just the dns NSS module and resolv.conf is fundamentally broken, and at this point unfixable, when you are connected to multiple networks, or networks with separate DNS, or split-tunnel VPNing. You have to provide your own, smarter NSS module and/or use a stub resolver to cover these use-cases.
Because it's built by the same people that built systemd and uses some of the low-level internal libraries from systemd (input parsing, configuration, etc.)? This is not that difficult of a conclusion to get to, honestly.
For people who use Arch, Artix is a fork that offers a choice of init systems. You still get the AUR and rolling release, but you can use runit (void's init system) instead of systemd.
FreeBSD, OpenBSD, NetBSD, and DragonFlyBSD are not the same operating systems, they don't share the same kernel. There are custom distributions of FreeBSD for specific use cases, such as OPNsense, but they are the same operating system and kernel as FreeBSD.
It’s actually quite an astute observation. I read recently that there are over 600 Linux distros and only about 500 are actively maintained.
> Web developers who use Google Analytics are stupid.
The adjective is wrong, but it’s also fair to say that there are free solutions for analytics, and some may go straight to using GA when they don’t need to.
> I read recently that there are over 600 Linux distros
How old are you? Everyone knows there are many Linux distros. Distrowatch has existed for almost 20 years. The issue isn't the number of distros; it's the marketshare. There are only a few distros with significant usage. The fact that there are many others is irrelevant because no one uses them.
The fragmented nature is what I love about Linux. In other words it's called "choice".
Because of the fragmentation, I don't have to use systemd, or pulseaudio, or snap, or Gnome desktop, or most other things I don't want to use for whatever reason. I can configure my box (or my container) mostly the way I see fit.
If you want ultimate unification, choose macOS. These guys are certain that they know what you need, and rid you of the need to choose. If you want a bit less strict unification but with backwards compatibility, choose Windows.
BSD sees less fragmentation mostly because it's less widely used. Though FreeBSD, OpenBSD, NetBSD, DragonflyBSD, and even Darwin exist, and differ significantly from each other. The BSD land, fortunately, also offers choice.
> Because of the fragmentation, I don't have to use systemd, or pulseaudio, or snap, or Gnome desktop, or most other things I don't want to use for whatever reason.
I think part of the very reason some folks like you don't like to use some of those things is their shortcomings... which likely come about in part due to the plethora of choices leading to fragmented efforts on the development side. People shift their attention to the newer/shinier things all the time, and nothing ends up becoming rock-solid to the point where you wouldn't even think about wanting to replace it.
To put this into perspective, consider how bizarre it would be to hear end-users complain "What if I want to replace the Windows Audio/Task Scheduler/etc. service with something else?" in Windows land.
"Linux is about choice" is the most toxic and cancerous idea in the Linux world.
Linux has always been about software freedom and open source, it was never about choice. When people made it about choice, infinite fragmentation started and there's no end in sight.
Though they have their shortcomings, thankfully somebody is fighting against that concept, and trying to build a stable base for everybody.
I'm not sure how you can put "software freedom" and "never about choice" in the same sentence to be honest. Software freedom implies without a doubt the freedom of not using software, or that of using "different" software.
That makes absolutely zero sense. The whole point of being able to modify and redistribute is about being in control of the software running on your system.
I posit that freedom 0 of the free software definition can be interpreted to mean that one can not only decline to use a specific piece of software, but even make use of alternatives if they're better suited to one's purpose.
Pretending that freedom of choice about what people spend their free time on is "an unfortunate side effect", is what represents a "toxic" attitude in my opinion.
End users might not complain about that but most end users do not care about that. For those that care there are replacements like Process Explorer/Hacker.
And people have tried a LOT to replace stuff in Windows, it is just that Microsoft doesn't make it as easy as on Linux.
> "What if I want to replace the Windows Audio... service with something else?"
In Windows Vista, Microsoft rebuilt the audio stack on top of WASAPI, making MME and DirectSound shims which feed into WASAPI. However, they didn't break compatibility, and all APIs more-or-less work nowadays. On Linux, ALSA apps sometimes have trouble picking the right device when talking to PulseAudio, and it's difficult for PulseAudio and Jack to share a single device.
Ten year old FUD. Most distros are on systemd now. Whatever you want to say about systemd, it's definitely not fragmentation. Also there's the freedesktop project and dbus which creates a standard API for desktop applications. All Linux distros use glibc and GNU coreutils. Where's the fragmentation?
Linux desktop applications are all very fragmented and broken (GNOME, KDE, 100 different tiling window managers, 100 different music players that all suck), but that's not Linux, that's userspace.
> Torvalds has a bad opinion on ZFS
Doesn't matter. ZFS on Linux still works. Also ZFS is not immune to criticism. BTRFS is getting better and is more than adequate for everyone who is not running a CDN. Also ZFS isn't exactly usable on all BSDs, just FreeBSD (nor did FreeBSD come up with ZFS in the first place (why not migrate everything to Solaris then?)).
> Linux is being heavily influenced by corporate interests
Good. Problem? The computer is a tool not a political bumper sticker. I'm happy to use something for free that developers are paid to improve and that I still have complete control over. Linux [Android] is the most widely deployed OS in the world. It's constantly improving and lots of eyes are on the code. There just aren't enough skilled volunteers to maintain a giant codebase as critical as an OS.
When BSD is used by corporations, they fork it and make their improvements proprietary.
> Time to migrate everything to BSD
Then almost everything I do with my computer would be impossible except for dicking around in a web browser. How do I run containers, use modern hardware, or use CUDA? Hard mode: no Linux emulation.
> Linux desktop applications are all very fragmented and broken (GNOME, KDE, 100 different tiling window managers, 100 different music players that all suck), but that's not Linux, that's userspace.
See, I get where you are coming from, and I use Linux myself, but I feel that the author sees this as the fragmentation problem. One of the author's points is that the division between userspace and kernel is a problem, and the *BSDs offer up full operating systems, not just kernels.
From TFA: "Linus Torvalds has many times made it very clear that he doesn't care about what goes on in the "Linux world", all he cares about is the kernel development"
Otherwise, yeah... BSD is a great choice for some stuff, but it certainly falls short of the tools available to a Linux environment.
I've been using Linux as a daily driver on laptops for about the last 12 years. I mostly do backend service development, having code that ultimately runs on a Linux server, so it's been quite easy for me to adopt.
I would have installed BSD on one of my primary dev laptops by now, but there's one huge software compatibility missing for me:
https://wiki.freebsd.org/Docker
Docker's currently broken.
It's not that I need something-like-Docker, which BSD absolutely has. It's that I need straight-up-Docker. I also write a lot of books, the last one having the reader run applications in Docker and run various services from Docker, so it's important that I'm doing something on my Linux dev laptop that isn't too different than my readers who seem to be mostly on MacOS.
You can use Bhyve and run Docker in a Linux virtual machine. Docker works similarly under MacOS using the Hypervisor Framework. FreeBSD would need to radically change in order to implement Docker natively.
I'm not sure the default jail utility is quite as flexible as what Linux namespaces+cgroups can do.
It does look like most of what really matters does exist in some form, and I'd guess any important cases that don't exist could be fixed with a few new simple sysctl options.
However, BSDs do not guarantee that their userlands will work with a mismatched kernel. Sure, it often does work, which is why jails only give a warning on mismatch rather than refusing to run at all.
Also, containers are most useful when most containers you want are actually available for your platform. Unless you use the Linux Emulation features, I'd suspect that relatively few containers would be made available to run on FreeBSD. And the problem with Linux personality systems is that while many programs will work fine with them, there will always be some Linux syscalls that are unimplemented, or have important limitations/differences. So while many programs may work, some will not. Even if a system call is fully supported, not all use cases will be. For example, not every file system Linux supports will be mountable, and programs could be using loopback mounts that need such support for weird reasons (I'd bet someone has made an app-bundle system for Linux that relies on loopback-mounting ext4).
There is a reason why Microsoft abandoned the personality like implementation of WSL1 in favor of virtualization for WSL2. Now admittedly, things are not nearly as bad on FreeBSD, since implementing one Unix-like personality in a different Unix is going to be easier and work better than trying to implement it on a decidedly non-Unix kernel. But even so, there will always be some programs (however obscure) that won't work right, while virtualization can largely avoid that. (Albeit with new limitations like not being able to easily access host hardware).
None of the above is at all a dealbreaker. containerd and docker support Windows containers, which have many of the above-mentioned concerns, and many additional ones, like the restrictions on distributing the Windows base images.
What really needs to happen for jail-based docker support is to come up with FreeBSD-specific options for the OCI spec, implement an OCI runtime based on jails, add support for setting the OS-specific options in containerd, and implement the needed network support in dockerd. (containerd leaves networking setup to its caller, as docker has different opinions than kubernetes, for example.)
The containerd people will almost certainly not object to the needed patches. If I had to guess, the docker maintainers' big concerns over a moby patch will be the overhead of supporting the needed patches (since FreeBSD will rightfully be seen as far more niche than Linux), and that the end-user experience of the various docker command lines works more or less as users expect (i.e. not more different from docker-on-Linux than docker-for-Windows-containers is). None of this is at all insurmountable.
> However, BSD's do not guarantee that their userlands will work with a mismatched kernel. Sure, it often does work, hence why jails only give a warning on mismatch rather than refuse to run at all.
FWIW, FreeBSD tends to go to pretty great lengths to ensure newer kernel with older userland works. A stock GENERIC kernel comes with COMPAT_FREEBSD* options back to COMPAT_FREEBSD4, and parts of the project's infrastructure tend to explicitly rely on at least supported releases to be functional in a jail on a -CURRENT kernel.
Interesting, and good to hear. I know the other BSDs have a very different view of things. I had heard that Linux was the only OS with a stable kernel ABI guarantee. If FreeBSD does too, that certainly is better.
Windows for example makes zero guarantees there. There are a lot of syscalls that they won't renumber because some applications have taken a dependency on using them directly, but officially using a syscall without going through NTDLL (or wherever the stub is located for private syscalls) is unsupported. Those syscalls they are not keeping fixed for compatibility can and do change from version to version. Mostly in numbering, but changes to semantics or arguments can happen too. Hence Windows Containers can only run in separate namespaces on a matching kernel version, and the hyper-v isolation (a.k.a. virtualization) option for containers is needed for mismatched versions.
So creating an OCI runtime that wraps jails, adding any needed support for FreeBSD-specific OCI container settings to containerd, and adding the needed code for things like networking to moby/moby (a.k.a. docker) sounds very feasible to me if some FreeBSD hacker wanted to get proper docker support. Offering Linux emulation as an experimental option to be able to run more containers would be an added bonus, and should be feasible, since they once had that working with their old unofficial (presumably pre-containerd) builds of docker.
I recently moved my home systems from Ubuntu to Debian because Ubuntu dropped 32-bit support (I have a mix of 64 and 32-bit and wanted to keep them on the same OS). The installer didn't work on my oldest laptop from 2003 with a new mSATA SSD IDE adapter (it hung when formatting the SSD). In the process of debugging the problem I tried FreeBSD for the first time, which not only installed successfully but even supported the wireless adapter. I'm running LXDE as I'm doing with the other machines. The learning curve was far less than I expected, so hats off to the FreeBSD project!
Ok, but then if everyone switches to BSD, won't it suffer the same fate? I.e. whenever something goes mainstream, corporations are drawn to it like insects to a lightbulb.
Companies are free to do whatever they want with BSD, and as such they don't need to try to affect the way things are going. If that weren't the case, we would possibly see, for example, Sony trying very hard to influence the development of FreeBSD, because they use it in their PlayStation products.
And yet they do anyway. Netflix contributed network code to FreeBSD. In so doing, they took code they had developed and handed over the maintenance to the FreeBSD team.
I think this is the biggest incentive companies have for making contributions to open source: they don't want to pay engineers to keep maintaining their own patches forever.
I didn't understand that argument at all. The GPL mandates that companies release their code. That's it. There's no rule that the code has to be upstreamed or that the companies need to try to make other projects use it. To the extent such behavior is incentivized, it should be incentivized just as much with more permissively-licensed code.
I always understood the argument to be that for rapidly changing projects you might be forced to upstream your GPL code or face an uncomfortably high maintenance burden of constantly rebasing it. Linux almost purposefully does this, it's kind of like a "soft" vendor lock in. Linux kernel interfaces change regularly, and if you don't upstream your code you need to commit time to fixing what breaks. If you upstream, then someone else will handle updating your code for you for free. The more regularly they churn the internal interfaces the more it costs to maintain external patches and the more appealing upstreaming becomes.
The end result is lots of companies jamming their code upstream because it is cheaper (or they are soft-required) to do so, not because it is good for the project. The benefit is you have large corporate contributions, with the possible downside that the companies want to take your software in a different direction than you do (i.e. Microsoft embracing/extending).
The BSD license doesn't try to strongarm people into contributing, and the BSD projects are open to being a base for other projects. This is bad because companies may not contribute back to you. It is good because it means that you don't have 20 companies pressuring you to make decisions. They've gone off in their 20 directions, and they want you to continue to be a solid base for them to build on.
I think a lot of people (me included) like the feel of this way more. When you read through FreeBSD you don't see the legacy code from a bunch of corporations mixed in, you just see the good stuff. People contribute because they want to even though it's more work, versus reluctantly contributing because it's less work. Not saying permissive licenses are perfect and don't have their own problems, just explaining why they might have different advantages.
But imagine if Linux used the BSD license and FreeBSD used the GPL license. Absent any other philosophical changes to how each project was managed, I don't think the incentives to upstream projects would be any different. Sony would dump a bunch of code on their website somewhere (which would be excellent, because others could study and use it as desired), and little else would change.
You know, this happened before. Back when Unix and BSD were new. The end result was AIX, HP-UX, Irix, Solaris, SunOS and a bunch more. Oh, and the way that there was no good response to Windows NT.
Linux is very, very much more coherent than that chaos.
This is not an argument; the idea that non-mainstream is inherently better is harmful. You could and should choose BSD over Linux for real reasons — liking the idea of a unified, monolithic base system, having a problem with the Linux Foundation (one issue or all of them), or disliking the redundancy found across much of the Linux ecosystem — but not simply because Linux is mainstream. There still exist Linux projects with different approaches, some less orthodox, some without corporate backing. I think Linux has the more robust ecosystem; things working out of the box is the approach that draws people, including corporations, into an ecosystem. For example, if you don't want systemd, make a distro (and share it, please) or use one of the existing ones, whether LTS or rolling release.
MacOS is not based on BSD, it's an old legend. NeXTStep was designed to be "compatible" with 4.3BSD (the diagram on Wikipedia is wrong because of this confusion).
Edit : source about compatibility on the last page : http://www.nextcomputers.org/NeXTfiles/Docs/Developer/NeXT_D...
Many Mac OS X userspace utilities were forked from FreeBSD. It's true that the OS as a whole is not really a FreeBSD derivative, but there definitely is some shared code.
Snaps, Flatpak, and similar containerized package distribution methods are all trying to build an app store to monetize software distribution, and an attempt to have one software distribution method gain popularity across Linux distribution lines.
While I will never use these methods, and I think they are completely flawed ideas, I think the goal of them is a natural evolution of fragmented similar systems.
The BSDs have the same problem as Linux distributions. The open-source BSDs share a lot of code but aren't quite the same. If several of them got very popular, someone would attempt to make a method to distribute software across all of them. Though for the BSDs this is harder, because in Linux land at least the kernel is standardized.
This post is mostly Linux FUD. I don’t understand what’s the use of this.
Actually I always found it funny and a little brave that Linus called ZFS a buzzword. In my opinion he’s right. ZFS breaks a lot of use cases but unless you know what you’re talking about you’re going to think it’s the end-all. I slightly prefer the MD-RAID system.
I didn't get the ZFS complaint either. Countless years of development went into OOP. Lots of great software has been written with it. It's still a buzzword.
To expand, I think the big issue was any time you searched your launch menu for your own machine's applications and files, the query was also sent to Amazon.
From the article, it sounds like it's that upstream systemd uses Google Public DNS as a default (if you don't configure a server or get one in your DHCP lease) and Firefox uses Cloudflare's DNS-over-HTTPS, which would allow those companies to see all your DNS queries.
option('dns-servers', type : 'string',
description : 'space-separated list of default DNS servers',
value : '1.1.1.1 8.8.8.8 1.0.0.1 8.8.4.4 2606:4700:4700::1111 2001:4860:4860::8888 2606:4700:4700::1001 2001:4860:4860::8844')
option('ntp-servers', type : 'string',
description : 'space-separated list of default NTP servers',
value : 'time1.google.com time2.google.com time3.google.com time4.google.com')
Seems like Cloudflare primary then Google primary (followed by the secondary records in the same order) is the preferred default order for both IPv4 and IPv6.
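Worth noting that these compiled-in values are only fallbacks, used when no server is configured and none arrives via DHCP. On a stock systemd-resolved install you can override or empty them in /etc/systemd/resolved.conf — a sketch, with the resolver address being whatever your own server is:

```ini
[Resolve]
# Use your own resolver for everything...
DNS=192.168.1.1
# ...and empty the compiled-in Google/Cloudflare fallback list entirely.
FallbackDNS=
```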
I think the objection is the type of company that sees your requests.
Traditionally, it's your ISP who gets to see your and all their users' DNS lookups.
But now it's Google (on top of your ISP) and Cloudflare (on top of your ISP with 1.1.1.1, and instead of your ISP with DNS-over-HTTPS), and the claim is that they are going to misuse the data (well, for Google it's pretty much a fact).
I am generally not in favour of any siloing and centralisation of that scale, but if you want private DNS, your options are quite limited.
I also wonder if it'd make sense to bundle a simple DNS-over-Tor service or would that be easy to track? I'd run it on my openwrt router.
You might not consider that "quite limited", but that is likely because of a different interpretation of "private DNS".
Private communication is something that only the two (or more) parties communicating are privy to.
With HTTPS, the risk is reduced to CA compromise. With DoH, the risk is the company running the service on top of the CA compromise.
The parties communicating are the root/TLD name servers and me. Private DNS is DNS where nobody sees any of my DNS traffic, except for the root resolvers (which thus become the target of potential privacy breach).
Any intermediary means that they can see your data, but if they are centralized in only a few places, it's a bit beside the point. But then again, if they are so small that only a handful people use them, your traffic will be simple to filter out.
Finally, how do I set up my system to use any of these half-solutions for all DNS requests today?
I'd still prefer a DNS-over-Tor solution if anyone came up with it.
There's just not enough in BSD for desktops. Ultimately, BSD solves package manager fragmentation (kinda, although there are now four BSDs) but not graphics layer fragmentation, and nobody's investing their time in half of a solution. Sure, "it's for servers [and powerusers]", but the barrier to entry is always lower for OSs you can use on the desktop and the narrative that most developers like tinkering with their config files has by now been conclusively disproven.
As for graphics layer fragmentation, it seems like Wayland makes it worse by handing more of the graphics stack over to window managers. But I guess it's also an opportunity for a BSD to make their own 'blessed' compositor (although this is extremely unlikely IMHO, for graphics driver reasons if nothing else).
In terms of the privacy and corporate interests concerns, at least, it seems like a lot of this article consists of systemd hate. So use a non systemd Linux distro. Simple! https://nosystemd.org/ can help you choose one. (I know, pretty much all the major distros have fallen like dominoes, but there are still reasonably mainstream choices out there).
There's almost no mention of such issues in the Linux kernel itself, except for "the kernel forcing adaption of DRM". I'm not versed in that story, feel free to enlighten me. But from following the link to the kernel.org thread, it seems fairly clear that the feature in question is opt-in, and that the Linux kernel therefore isn't forcing DRM onto anyone.
> DNS over HTTPS is by itself bad enough, and highly criticized with good reason
The criticisms of DoH seem to be around removing the ability of third parties to snoop on your DNS queries. Am I missing something here?
Like, yes, some of the use cases outlined in the linked wiki article [0] are arguably legitimate (parental controls, cybersecurity identification of C&C nodes), but then we quickly move into murkier territory (ISP blockages due to government mandates).
Preventing MitM attacks is an explicit design goal of DoH, not a side effect.
DoH on-by-default means that I as a sysadmin -- for my family, enterprise, or even just my own computer -- no longer have the ability to easily specify how DNS works for the systems under my control. Maybe I run a PiHole; maybe I run a parental control DNS server; maybe I'm better at privacy protection than Mozilla and Cloudflare are.
Tough luck; now I'm no longer able to configure all my systems to use my DNS server over DHCP. I can't even manually set the global DNS server for one computer any more. Now I have to manually set preferences on every instance of Firefox (at least to verify that DoH is truly off), and every other application that adopts DoH one at a time.
The goal of DoH for preventing snooping by ISPs is fine. It's the implementation with hardwired per-application resolver hosts that's bad.
Edit: There's a way to tell Firefox to disable DoH at the network level [0] but AFAIK this is just a Mozilla thing; it's not an RFC. Will other browsers and apps adopt this mechanism? Who knows?
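To make concrete what's being argued over: a DoH lookup is just an HTTPS request to whichever resolver the application has chosen. A minimal sketch against Cloudflare's JSON API — the endpoint and Accept header are real as of this writing, but verify against current documentation before relying on them:

```python
from urllib.request import Request, urlopen
from urllib.parse import urlencode
import json

def build_doh_query(name, rrtype="A",
                    resolver="https://cloudflare-dns.com/dns-query"):
    """Build a DNS-over-HTTPS GET request using the JSON wire format."""
    url = resolver + "?" + urlencode({"name": name, "type": rrtype})
    # The Accept header selects the JSON format (application/dns-json)
    # rather than the binary RFC 8484 wire format.
    return Request(url, headers={"Accept": "application/dns-json"})

req = build_doh_query("example.com")
print(req.full_url)

# Actually sending it requires network access:
#   answer = json.loads(urlopen(req).read())
#   for rr in answer.get("Answer", []):
#       print(rr["name"], rr["data"])
```

This is also why DoH bypasses DHCP-provided resolvers: the resolver is baked into the application's configuration, not discovered from the network.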
Tell me where I'm wrong or misunderstanding here: If you consider DNS analogous to HTTP, then DoH is TLS and the DNS provider is the CA (the org you need to trust). Your objection is that you can't modify protocol traffic since there is now an authority out of your control. It seems to me like it increases the security of the user since the network operator can no longer modify the traffic that the user requested.
I get that it makes some use-cases harder, just like HTTPS made some HTTP use-cases harder (like captive portals for wifi login) but in the end it seems like a net benefit for the majority of usecases (I want the same site, securely, no matter the network I'm on), right?
The DNS provider is not analogous to the CA. Remember that the CA only certifies that Cloudflare is in fact the Cloudflare, and not Joe's All-Night CDN and Lotto Ticket Dealer. The CA does not vouch for the ethics of Cloudflare, just like the CA does not vouch for the ethics of Google. So that's point 1.
Point 2 is that if I for some reason don't trust Cloudflare to protect my users' privacy or provide speedy service or hey, maybe I just don't like the fact that they carry traffic from a bunch of seditionists -- whatever -- then I have no easy way to say to Firefox: Don't Use Cloudflare, use my provider who I do trust.
I don't care what sites my users are visiting, so inspecting their DNS traffic is not something I need to do (although perhaps it is for certain enterprises). The central point is that I no longer get to easily choose which DNS provider to trust; Mozilla has made that choice for me. That breaks the internet in a way analogous to the way AMP breaks it, and that's wrong.
I can in fact change that choice but doing so introduces a new protocol and a new set of labor-intensive tasks for the sysadmin that did not exist before, and for every app that adopts the Mozilla model, that labor increases yet more.
It's horrible for environments with split horizon DNS. It presumes that the only network that should exist are home users consuming public Internet cloud services.
For privacy, it's a discussion of whether I trust my obnoxious non-US ISP or a US-based .com with seeing all my browsing habits based on DNS queries. At least I could have legal recourse against my ISP in my own country, and there are slightly better privacy laws.
It was a fine simple solution until DoH. In some internal environments the internal traffic volume can be much higher than the few services that might be publicly exposed.
Sure, there are lots of ways you could do it: get a fat edge firewall to hairpin the traffic plus support Internet access, but then you end up paying a lot more for all the threat licenses on the oversized edge. You could add many more tiers, maybe more translations or overlays... but why bother with a lot more complexity, or especially more cost, just because someone saw a threat in another country and is trying to solve a problem that does not apply to most?
Furthermore, there can be internal-only host names that are now getting probed and exposed externally. Exfiltration to a US company in the name of "security".
Hiding a DNS entry doesn't do anything to hide the machine. How long do you think a newly exposed IP address, without a DNS entry, will last before it is probed?
I just hate that it breaks Split Horizon which is totally legitimate. Leaking local-to-the-LAN queries onto the public internet is also a concern.
IMO the right way to do this is to run your own local DNS recursive resolver, and deploy DNSCurve. This doesn't work for myopic browser vendors though...
I get that breaking split horizon is a real concern, but also I would expect any enterprise deployment using internal DNS resolvers should be able to set up its users' Firefox config to either a) point at an internal DoH server, or b) turn it off?
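For the enterprise case, Firefox's policies.json does expose a DNSOverHTTPS policy (check Mozilla's policy-templates repository for the current schema before deploying); a sketch of disabling and locking it fleet-wide:

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}
```

That handles managed Firefox installs, though it does nothing for other DoH-enabled applications, which is the sysadmins' broader complaint.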
On one hand, I see the arguments against DoH-by-default -- loss of control, potential for unfixable poor / buggy implementations, requiring trust of third parties... (The trust problem isn't going to be solved any time soon, though.)
It would be nice if implementations just used the discovered DNS server across the board, but DNS servers don't support DoH across the board yet.
On the flip side, I don't see any widespread shifts in this space until a large enough player (in this case, Mozilla) decides to put its thumb on the scale to "encourage" wider adoption. While centralized DoH may be the only game in town today, I would argue that's due to lack of adoption, and I would hope that's not the desired end state.
If you want to prevent MitM attacks, run a local DNS server with DNSSEC. DoH is a trend in the wrong direction. The more IoT garbage begins using DoH, the less control over your privacy you will have. Solutions like pi-hole are rendered useless by DoH.
Since virtually none of the most popular domains are signed, going to the trouble of installing a local DNSSEC server isn't actually going to protect you from anything. Meanwhile: if you're in North America, your ISP is almost certainly collecting and warehousing your DNS queries, and DoH immediately breaks that. Plenty of people use DoH through Pi-holes, for what it's worth.
Not all MitMs inject malicious responses; denial of service is equally problematic. If you can block DNS requests via a pihole, so can your upstream ISP (whether it's because they want to throttle your internet usage, or police it, or whatever else).
And even just a passive observer snooping on your DNS requests can result in an invasion of privacy.
I agree on all counts, but DoH removes control and choices that I want to keep. pi-hole itself supports DoH. It's a good way to protect privacy and keep control.
Fix Realtek Ethernet driver support? (In response to the title.) I have tried to use FreeBSD many times at work, but due to the driver support I cannot. It hangs and fails super often when using a Realtek device. I got to look like a dick during a board meeting trying to show off our throughput when the issue manifested itself for the third time.
This article's insight is that GPL 'virality' forces commercial developers to modify and maintain Linux components as open source, which results in Linux being developed for the commercial developers' benefit, vs. the BSD license allowing commercial developers to keep their modifications to themselves. The latter prevents companies from directing/pressuring OS development and removes incentives to meddle in design; while it does lose some 'progress', it retains more freedom in choosing the direction of progress.
The licensing terms of BSD make it more likely to be taken over by corporations, not less. Copyleft is a feature of Linux, not a bug. On a practical level, the BSDs also have far less driver and software support than Linux. Outside of FreeBSD's network stack being better than the Linux stack, I can't see a reason to use BSD over Linux unless you want full proprietary control over your platform, which most of the time you don't need in order to make lots of money, at least to that degree.
I’m doing this right now. Linux has gotten too large, bloated, corporate. It no longer feels like a community project, and there are quite a few recent moves that make me question whether or not it will be viable down the line.
I've already moved the whole office. Even have the one proprietary program that we need, Mathematica, running in Linux VMs on OpenBSD and displaying without a hitch thanks to x.org.
This is emotion-driven drivel.