> A micro-kernel with a clearly defined device driver API would mean that Google could update the kernel and Android version, while continuing to let old device drivers work without update.
A monolithic kernel with a clearly defined device driver API would do the same thing. Linux is explicitly not that, of course. Maintaining backwards-compatibility in an API is a non-trivial amount of work regardless of whether the boundary is a network connection, IPC, or function call.
> A monolithic kernel with a clearly defined device driver API would do the same thing.
Maybe, but I doubt it. History has shown pretty clearly that driver authors will write code that takes advantage of its privileged position in a monolithic kernel to bypass the constraints of the driver API. Companies will do this to kludge around the GPL, to make their Linux driver look more like the Windows driver, because they were lazy and it was easier than doing it right, and for any number of other reasons. The results include drivers failing if you look at the rest of the system funny and the entire system becoming wildly insecure.
If you want a driver that is not subject to competent code review to abide by the terms of the box in which it lives, then the system needs to strictly enforce the box. Relying on a header file with limited contents will not do the job.
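To make the failure mode concrete, here is a deliberately toy userspace C sketch; every name in it is invented and it is not real Linux or driver-API code. It only shows the structural point: once the driver is linked into the same address space at the same privilege level, a header file that omits the internal symbols does nothing to stop the driver from re-declaring and poking them.

```c
/* Hypothetical sketch (all names invented, not real kernel code):
 * why a limited header is not an enforcement mechanism.
 */
#include <stdio.h>

/* The "sanctioned" driver API the vendor is supposed to stick to. */
int register_device(const char *name);

/* Kernel internals, deliberately absent from the API header. */
int internal_irq_mask = 0;

int register_device(const char *name)
{
    printf("registered %s\n", name);
    return 0;
}

/* The vendor driver: uses the API... and then reaches past it. */
extern int internal_irq_mask;    /* nothing prevents this declaration */

void vendor_driver_init(void)
{
    register_device("vendor0");  /* the proper path */
    internal_irq_mask = 0xff;    /* the kludge the API never offered */
}

int main(void)
{
    vendor_driver_init();
    printf("internal_irq_mask is now %#x\n", internal_irq_mask);
    return 0;
}
```

With an IPC boundary, the equivalent move simply doesn't exist: the driver can only send the messages the kernel chooses to handle.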
> driver authors will write code that takes advantage of its privileged position in a monolithic kernel to bypass the constraints of the driver API.
Well, your job is shipping the driver. If the API is limited, and/or your existing drivers on Windows or other OSes do something that the Linux driver doesn't, then you have a problem.
It's still possible that drivers are so buggy that a newer OS version interacts with them in a slightly different way, one that is still legal by the API definition but that makes them crash or stop working.
That can be treated as an OS bug that’s fixed by updating the kernel to the latest version that fixes compat with that driver, which you can do because the driver remains unchanged. With Linux, even with DKMS, you’d need to backport your fixes to that old kernel in addition to maintaining the latest kernel version. And on mobile DKMS is not a thing.
I believe you might be limited by your imagination as to just how bad out-of-tree third-party drivers can be. We're talking about hardware with DMA access here. It's trivial to create a situation that cannot be fixed in the kernel (without breaking other things).
If you are running some generic PC hardware to write this comment, chances are that at least one tiny part somewhere depends on some specific, obscure timing to come up properly and just happens to work because someone inserted a small delay somewhere.
I don't disagree with that and yet somehow Windows manages to largely maintain HW compat across major OS upgrades with the same crappy third party driver ecosystem.
Microsoft is exactly the kind of vendor that will be inserting those small delays in weird places to keep a popular-but-broken device working better. Their focus on backwards compatibility has historically been very strong, and at the same time that has often led them to not even attempt to clean up their platform. NT came to be largely because the legacy mess got too hard to deal with.
They generally tend to keep working for a very long time, at least five years. That's literally the reason Windows has a stable ABI. Linux generally breaks drivers every six months or so.
It doesn't? Try loading a driver written for an old version of Windows. Third party drivers generally won't work for the next version of Windows. I have a printer that had a Windows 7 driver. It won't load on any more recent version than that.
The same printer works with Linux the same as it always has. But that's not Windows' fault or to Linux's credit; it's just the result of a really crappy vendor. However, the Linux API is generally more stable than people give it credit for, it's just that the stability is in the C headers only. Anything compiled for one kernel generally won't work anywhere else, and that's not what the crappy vendors want.
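For a sense of what that source-level stability looks like, here is the standard minimal out-of-tree module shape (not any particular vendor's driver). The source tends to keep compiling across kernel releases, but the built hello.ko is stamped with the version magic of the exact kernel it was compiled against, so the binary does not travel.

```c
/* hello.c: minimal out-of-tree module (the classic hello-world shape).
 * Built against the running kernel's headers, with a one-line Kbuild
 * file containing "obj-m += hello.o", via something like:
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal out-of-tree module example");
```

That rebuild-per-kernel workflow is exactly what the crappy vendors refuse to sign up for.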
In the embedded space, including most mobile phone vendors which seemed to be an important use case for Fuchsia, even reputable vendors will generally give you an image of an operating system heavily modified to work with their hardware. That's their "driver". Imagine buying a new PC and receiving a DVD with a Windows modified to work on that hardware only. Of course you can't upgrade or even patch security issues beyond what the vendor will give you! You're supposed to buy new hardware. Sure, you could extract the drivers and try to install them on a vanilla Windows, and that's exactly what projects like LineageOS do, but most users won't bother.
That's the situation with phones. It's not at all clear why Fuchsia thinks it could solve this. It's a cultural and economic problem and can't be fixed with software alone. Why would a phone manufacturer care about your microkernel architecture? They will just patch the whole operating system, binaries and everything, until it boots enough to start the GUI and ship that. Just like they always have.
The only thing that could improve this situation is enforcing the GPL, or having similar contractual stipulations, like only being allowed to ship a reference implementation unmodified, but Google shows no interest in doing that. They care about getting Android on as many devices as possible with no regard for their respective quality or product longevity.
You do have a valid point that sufficiently boxing off the drivers can force them to use an API of your choosing. Even the smallest of hurdles against doing it the "wrong way" can help because many drivers are written by inexperienced teams with a tight schedule.
However, if you discover that the box was insufficient at any point, you have to choose between changing the box (and breaking some perfectly good drivers), or leaving the insufficient box in place. API versioning can let you delay this decision to reduce pain, but it will happen at some point.
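As a hedged sketch of what that versioning buys you, in plain C with invented names rather than any real kernel's types: the OS dispatches on the ABI version each driver declares, so old boxes keep working for a while, but every legacy branch in the switch is exactly the compatibility burden being described, and eventually one of them has to go.

```c
/* Hypothetical versioned driver boundary (all names invented). */
#include <stdio.h>

#define DRIVER_ABI_V1 1
#define DRIVER_ABI_V2 2          /* v2 added a flags argument to read */

struct driver_ops_v1 { int (*read)(int dev); };
struct driver_ops_v2 { int (*read)(int dev, unsigned flags); };

struct driver_desc {
    int abi_version;             /* declared by the driver at build time */
    union {
        struct driver_ops_v1 v1;
        struct driver_ops_v2 v2;
    } ops;
};

/* The OS keeps a branch alive for every ABI version it still honours. */
static int os_call_read(const struct driver_desc *d, int dev)
{
    switch (d->abi_version) {
    case DRIVER_ABI_V1:
        return d->ops.v1.read(dev);      /* legacy path, maintained */
    case DRIVER_ABI_V2:
        return d->ops.v2.read(dev, 0);
    default:
        return -1;                       /* unknown ABI: refuse to load */
    }
}

static int old_read(int dev)
{
    return printf("v1 driver read from dev %d\n", dev);
}

int main(void)
{
    struct driver_desc legacy = {
        .abi_version = DRIVER_ABI_V1,
        .ops.v1 = { .read = old_read },
    };
    return os_call_read(&legacy, 0) > 0 ? 0 : 1;
}
```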
FWIW, I'm hugely in favor of microkernels, but they are a lead bullet (which we need lots of), not a silver bullet for these sorts of problems.
I would love to have an open source microkernel OS that works as well as Linux on modern hardware even if the API wasn't stable. I am making assumptions that you could have ZFS and secure boot at the same time without jumping through hoops, containerization without needing fictitious UIDs for every user, and other things of that nature. The monolithic kernel is very frustrating with some things.
The hoops that ZFS has to go through are twofold: licensing and unstable internal API. I don't see how a microkernel (on its own) fixes either of those things.