It is a great achievement. But the question is: is it really relevant? Couldn't they move the compatibility for larger parts to a VM or another independent subsystem?
Of course even that isn't trivial: one wants to share filesystem access (though I can imagine an overlay limiting it), might need COM and access to devices ... but I would assume they could push that a lot more actively, if they decided which GUI framework to focus on.
> Couldn't they move the compatibility for larger parts to a VM or another independent subsystem?
A huge amount of the compatibility stuff is already moved out into separate code that isn't loaded unless needed.
The problem, though, is that users don't want independent subsystems -- they want their OS to operate as a single environment. Raymond Chen has mentioned this a few times on his blog when this sort of thing comes up.
Backwards compatibility also really isn't the issue that people seem to think it is.
Independent subsystems need not be independent subsystems that the user must manage manually.
The k8s / containers world on Linux ... approaches ... this. Right now it's still somewhat manual, but the idea is that a given application could fire off with the environment it needs, without layering the rest of the system with those compatibility requirements, while also, incidentally, sandboxing those apps from the rest of the system (specific interactions excepted). That would permit both forward advance and backwards compatibility.
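Purely as an illustration of the shape of it (the image name, paths, and entry point here are hypothetical, and a real integration would hide this plumbing from the user entirely):

    # Run a legacy app in its own bundled userland, sharing only one host
    # directory (read-only) and no network; everything else stays isolated.
    docker run --rm \
      --network=none \
      -v "$HOME/Shared:/data:ro" \
      legacy-app:2009 /opt/app/run.sh /data/report.dat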
A friend working at a virtualisation start-up back in the aughts told of one of the founders, who'd worked for the guy who'd created BCPL, the programming language which preceded B and, later, C. It turns out that when automotive engineers were starting to look into automated automobile controls, in the 1970s, C was considered too heavyweight, and the systems were implemented in BCPL. Some forty years later, those systems were still running, in BCPL, over multiple levels of emulation (at least two, possibly more, as I heard it). And, of course, faster than in the original bare-metal implementations.
Emulation/virtualisation is actually a pretty good compatibility solution.
Users don't want sandboxing! It's frustrating enough on iOS and Android. They want to be able to cut and paste, have all their files in one place, open files in multiple applications at the same time, have plugins, etc.
Having compatibility requirements is almost the definition of an operating system.
If you bundle every application with basically the entire OS needed to run it, then what exactly have you created?
There is a relatively limited set of high-value target platforms: MS-DOS (still in some use), Win95, and WinNT and its successor versions. Perhaps additionally a few Linux or BSD variants.
Note that it's possible to share some of that infrastructure by various mechanisms (e.g., union mounts, presumably read-only), so that even where you want apps sandboxed from one another, they can share OS-level resources (kernel, drivers, libraries).
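On Linux, for instance, that sharing might look roughly like an overlay mount per sandbox (paths are illustrative, and a real setup would add mount/PID namespacing on top):

    # One read-only base system shared by every sandbox; each app's writes
    # are diverted into its own upper layer.
    mount -t overlay overlay \
      -o lowerdir=/base/rootfs,upperdir=/sandboxes/app1/upper,workdir=/sandboxes/app1/work \
      /sandboxes/app1/merged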
At a user level, sandboxing still presumes some shared file space, as in a "My Files" folder, a shared downloads directory, or other common spaces.
Drag-and-drop through the GUI itself would tend to be independent of file-based access, I'd hope.
What is gained by this? What would you get by virtualizing a WinAPI environment per app in Windows? (MS-DOS compatibility is already gone from Windows.) You get a whole bunch of indirection and solve a problem that doesn't exist.
The obvious advantage is obviously obvious: the ability to run software which is either obsolete, incompatible with your present system, or simply not trusted.
In my own case, I'd find benefits to spinning up, say, qemu running FreeDOS, WinNT, or various Unixen. Total overhead is low, and I get access to ancient software or data formats. Most offer shared data access through drive mapping, networking, Samba shares, etc.
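Concretely, and only as an example of how low-friction this is (the image name and shared path are my own, so treat them as illustrative), FreeDOS under qemu with a host directory exposed as a FAT drive is about one command:

    # FreeDOS guest, 64 MB RAM, with a host directory presented to DOS as a
    # read/write FAT drive for moving files in and out.
    qemu-system-i386 -m 64 \
      -hda freedos.img \
      -drive file=fat:rw:"$HOME/dos-shared",format=raw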
That's not what I'd suggested above as an integrated solution, but it could easily become part of the foundation for something along those lines. Again, containers (Kubernetes and the like) or jail-based solutions would work where you need a different flavour that's compatible with your host OS kernel. Where different kernels or host architectures are needed, you'll want more comprehensive virtualisation.
As long as you ensure compatibility, software doesn't have to be obsolete or incompatible. The Windows API is so stable that it's the most stable API available for Linux.
I can already run VMs, and that seems like a more complete solution. To have an integrated solution you would need cooperation that you can't get from obsolete systems. I can run Windows XP in a VM. But if I want to run a virtualized Windows XP application seamlessly integrated into my desktop, then I'm going to need a Windows XP that is built to do that.
- Fundamental prerequisites cannot be changed or abandoned, even where they impose limitations on the overall platform.
- System complexity increases, as multiple fixed points must be maintained and checked for regressions, and where those points introduce security issues, the weaknesses they entail are inevitable.
- Software which presumed non-networked hosts, or a far friendlier network, tends to play poorly in today's world. Well over a decade ago, a co-worker who'd spun up a Windows VM to run Windows Explorer for some corporate intranet site or another noted that the VM was corrupted within the five minutes or so it was live on the corporate LAN. At least it was a VM (and from a static disk image). Jails and VMs isolate such components and tune exposure amongst them.
What you and I can, will, and actually do, which is to spin up VMs as we need them for specific tasks, is viable only for a minuscule set of people; most people lack fundamental literacy, let alone advanced technical computer competency.
The reason for making such capabilities automated within the host OS is so that those people can have access to the software, systems, and/or data they need, without needing to think about, or even be aware of, how or that it's being implemented.
I've commented and posted about the competency of the average person as regards computers and literacy. It's much lower than you're likely to have realised.
And no, I'm not deriding those who don't know. I've come to accept them as part of the technological landscape. A part I really wish weren't so inept, but wishing won't change it. At the same time, the MVU imposes costs on the small, though highly capable, set of much more adept technologists.
I think that VM software like Parallels has shown us that we are just now at the point where VMs can handle it all and feel native. Certainly NT could use a rewrite to eliminate all the legacy stuff… but instead they focus on Copilot and nagging me not to leave Windows Edge Internet Explorer.