
Your arguments seem to be around device tree support rather than the actual cores and arch.

That’s largely where the holdup is. Most arm devices ship a variety of one-off supplementary hardware whose vendors often only distribute support as binary blobs. So, due to that lack of ubiquity, support in distros varies.

If you could skip the rest of the device and focus on the processor itself, the distros would largely all run as long as they didn’t remove support explicitly.

This is the same process as on x86. It’s just that x86 hardware vendors are also interested in selling their components on their own, and therefore have a vested interest in adding support to the Linux kernel.

It’s very much the case that when new hardware comes out you need a new kernel version to support it properly. That is true of processors, GPUs and even motherboards. They don’t just magically function; a lot of work goes into submitting patches to the kernel prior to their availability.

Since arm manufacturers right now have no interest in that market, they don’t do the same legwork. They could. If Intel or AMD entered the fray it would definitely change the makeup.

The one other big issue is that there’s no standard BIOS system for arm. But again, it’s just down to the hardware manufacturers having no interest, as you’re not going to be switching out cores on their devices.



Device Trees don't magically cause incompatibilities either. They're just a declarative specification of the non-discoverable hardware that exists. The old BIOS systems are so much worse than DT and harder to test properly.


Yeah, great point. I think people just take today’s state of things for granted and don’t realize how much has been going on behind the scenes to enable it. And it’s still not great.


DTs are part of the problem.

It is one of those "devil in the details" kinds of things. In theory, DT would be okay, but in practice it's not. The issue starts with HW vendors failing to create 100% (backwards) compatible hardware. For example, if they need a UART, they fail to hook up the standard baud-rate divisor and instead use a clock controller somewhere else in the machine because it's already there. So now the DT describes this relationship, and someone needs to go hack up the UART driver to understand that it needs to twiddle a clock controller rather than use the standard registers. Then, of course, the device needs to be powered up/down, but DT doesn't have a standard way to provide an ACPI-like method for that functionality. So now it either ends up describing part of the voltage regulation/distribution network, or it needs a custom mailbox driver to talk to the firmware to power the device on/off. Again, this requires kernel changes. And that is just a UART; it gets worse the more complex the device is.
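
To make the UART example concrete, here is a minimal sketch of the kernel side (hypothetical "vendorx,weird-uart" binding and driver name; the clk/platform calls are the standard Linux APIs) when the baud clock is a phandle to some other clock controller in the DT instead of a divisor inside the UART block itself:

    /* Hypothetical sketch: a DT-probed UART whose baud clock comes from an
       external clock controller (a "clocks" phandle in its DT node) instead
       of the standard on-chip divisor registers. */
    #include <linux/clk.h>
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int weird_uart_probe(struct platform_device *pdev)
    {
        struct clk *baud_clk;
        unsigned long rate;
        int ret;

        /* The DT "clocks" property points at the SoC clock controller,
           so the UART driver must go through the clk framework... */
        baud_clk = devm_clk_get(&pdev->dev, NULL);
        if (IS_ERR(baud_clk))
            return PTR_ERR(baud_clk);

        /* ...ungate that clock before the UART will run at all... */
        ret = clk_prepare_enable(baud_clk);
        if (ret)
            return ret;

        /* ...and compute divisors from whatever rate that controller
           happens to provide, not from a fixed architectural clock. */
        rate = clk_get_rate(baud_clk);
        dev_info(&pdev->dev, "baud reference clock: %lu Hz\n", rate);
        return 0;
    }

    static const struct of_device_id weird_uart_of_match[] = {
        { .compatible = "vendorx,weird-uart" },  /* made-up binding */
        { }
    };

    static struct platform_driver weird_uart_driver = {
        .probe  = weird_uart_probe,
        .driver = {
            .name = "weird-uart",
            .of_match_table = weird_uart_of_match,
        },
    };
    module_platform_driver(weird_uart_driver);
    MODULE_LICENSE("GPL");

None of that is needed for a PC-style 16550, where the divisor latch is part of the device and the input clock is fixed.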

On x86, step one is hardware compatibility, so nothing usually needs to be changed in the kernel for the machine to understand how to set up an interrupt controller/UART/whatever. The PC also went through the plug-and-play (PnP) revolution in the 1990s and generally continues to utilize self-describing buses (PCI, USB), or at least makes things that aren't inherently self-describing look that way. E.g., Intel making the memory controller look like a PCI root-complex integrated endpoint, which is crazy but solves many software detection/configuration issues.
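
For a rough illustration of what "self-describing" buys you, here is a sketch (a hypothetical user-space probe using the legacy 0xCF8/0xCFC config mechanism; modern systems use ECAM, but the idea is the same) that finds every PCI device just by asking the bus, with no per-board description anywhere:

    /* Hypothetical sketch: brute-force scan of legacy PCI configuration
       space. Needs x86 port I/O privileges (run as root). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>

    static uint32_t pci_cfg_read32(int bus, int dev, int fn, int off)
    {
        uint32_t addr = (1u << 31) | (bus << 16) | (dev << 11) |
                        (fn << 8) | (off & 0xfc);
        outl(addr, 0xcf8);   /* select bus/device/function/register */
        return inl(0xcfc);   /* read the selected config dword */
    }

    int main(void)
    {
        if (iopl(3))         /* grant this process access to the I/O ports */
            return 1;

        for (int bus = 0; bus < 256; bus++)
            for (int dev = 0; dev < 32; dev++) {
                uint32_t id = pci_cfg_read32(bus, dev, 0, 0);
                if ((id & 0xffff) == 0xffff)
                    continue;   /* no device responds at this slot */
                /* Vendor and device IDs come from the hardware itself;
                   no static platform description is required. */
                printf("%02x:%02x.0 vendor %04x device %04x\n",
                       bus, dev, id & 0xffff, id >> 16);
            }
        return 0;
    }

On a typical arm SoC, the equivalent peripherals hang off plain memory-mapped buses with no IDs to probe, which is exactly why the DT has to spell every one of them out by hand.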

Second, the UEFI specification effectively mandates that all the hardware is handed to the OS in a configured, working state. This avoids problems where Linux needs to install device-specific firmware for things like USB controllers/whatever, because there is already working firmware, and unless Linux wants to replace it, all the HW will generally work as is. Arm UEFIs frequently fail at this, particularly U-Boot-based ones, which only configure enough hardware to load GRUB/etc.; then the kernel shows up and has to reset devices/load firmware/etc. as though the machine were just cold powered on.

Third, ACPI provides a standard power-management abstraction that scales from an old Pentium from the 1990s, where it is just trapping to SMM, to the latest servers and laptops with dedicated power-management microcontrollers. That removes all the clock/regulator/PHY/GPIO/I2C/SPI/etc. logic from the kernel, which is exactly the kind of thing that changes not only from SoC to SoC but from board to board, or board revision to board revision. So it cuts out loads and loads of cruft that would otherwise need kernel drivers just to boot. Nothing stops AMD/Intel from adding drivers for this stuff, but it is simply unnecessary to boot and utilize the platform. On Arm, meanwhile, it's pretty much mandated with DT, because the firmware->OS layers are all over the place and differ on every single Arm machine that isn't a server.
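
As a rough sketch of the difference for a single driver (hypothetical device; the regulator/clk/runtime-PM calls are standard kernel APIs), "power the device on" under DT means owning the board's rails and clocks directly, while under ACPI it usually collapses into a runtime-PM call that ends up executing the firmware's _PS0/_PS3 methods:

    /* Hypothetical comparison of the same power-up step. */
    #include <linux/clk.h>
    #include <linux/pm_runtime.h>
    #include <linux/regulator/consumer.h>

    /* DT-style: the driver owns whatever rails and clocks this particular
       board wired up, so each SoC/board variant grows more properties and
       more driver code. */
    static int mydev_power_up_dt(struct regulator *vdd, struct clk *core_clk)
    {
        int ret;

        ret = regulator_enable(vdd);          /* board-specific supply from DT */
        if (ret)
            return ret;
        return clk_prepare_enable(core_clk);  /* SoC-specific clock from DT */
    }

    /* ACPI-style: the driver just resumes the device; the firmware's
       _PS0/_PS3 (or power resource) methods hide the rails and clocks. */
    static int mydev_power_up_acpi(struct device *dev)
    {
        return pm_runtime_resume_and_get(dev);
    }

The ACPI version looks the same on every machine; the DT version is where the per-board churn lives.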

So, the fact that someone can hack up a DT and some drivers allows the hardware vendors to shrug and continue as usual. If they were told, "Sorry, your HW isn't Linux compatible," they would quickly clean up their act. And why put in any effort? Random people will fund Asahi-like efforts to reverse engineer it and make it work. Zero effort on Apple's part, and they get Linux support.



