> Their goal of being able to suspend a game on your desktop and play it on your Deck will likely need them to standardize on a single platform [..] There are some additional challenges with this approach [..] but sticking to a single platform [..] removes some concerns with synchronizing lower level state across multiple platforms.
My understanding of the suspend/resume stuff they've talked about for the Deck is that it's less about actually suspending/resuming the game by syncing low-level state like memory or graphics, and more about quickly syncing save games via Steam cloud saves.
I only had a quick look over their docs when the Deck was announced, but from what I could tell you basically opt into a new system-level save behavior where, at any point, the Steamworks library can tell the game "this device is going to suspend now, save your progress"; another computer can then start the game and immediately load up the save file, giving the impression of seamless suspend/resume.
So as long as your save game files aren't OS specific in some way there's no reason why this should lock you into any specific platform, Windows or otherwise.
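A minimal sketch of what that opt-in might look like from the game's side; every name here is a hypothetical placeholder rather than a real Steamworks symbol, the point is only that the game writes an ordinary, OS-neutral save when asked and the cloud layer moves it to whatever device launches next:

    // Hypothetical sketch: none of these are real Steamworks names.
    #include <cstdint>
    #include <fstream>
    #include <string>

    struct GameProgress {
        std::string mapName;        // high-level state only: which map...
        float x = 0, y = 0, z = 0;  // ...and where the player stands
        uint32_t questFlags = 0;
    };

    // Plain, endianness/OS-neutral text format so the same file loads on
    // Windows, Linux, or the Deck.
    void WriteSave(const GameProgress& p, const std::string& path) {
        std::ofstream out(path);
        out << p.mapName << '\n'
            << p.x << ' ' << p.y << ' ' << p.z << '\n'
            << p.questFlags << '\n';
    }

    // Called from whatever "device is about to suspend" notification the
    // platform provides (hypothetical hook name).
    void OnSuspendRequested(const GameProgress& current) {
        WriteSave(current, "cloud/quick_resume.sav");  // path synced by cloud saves
    }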
Just trying to "store the GPU state" on a system with an AMD GPU and resume it on a system with an NVIDIA GPU (or just an old GPU and a new GPU from the same vendor) is likely around the area of "let's treat it as impossible".
Almost impossible and also rather impractical. What each game should care about are the high level details of its game state. Things like "In what map am I located and at what coordinates?" The game engine for each game basically represents a compression algorithm of sorts to decode the meaning of a few concise details like that. If you prefer to represent that information with the raw bytes and bits of internal processor and memory state, you're choosing a hugely inefficient and clunky encoding.
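As a toy illustration of that "decompression" idea (all names made up, and the real heavy lifting elided): a few concise facts are enough for the engine to rebuild gigabytes of runtime state from the game's shipped data, which is the alternative to dumping raw memory or VRAM.

    #include <string>
    #include <vector>

    struct SavedState {                        // the "compressed" form
        std::string map = "docks";
        float pos[3] = {12.5f, 0.0f, -3.0f};
        std::vector<int> aliveEnemyIds = {3, 7, 9};
    };

    struct World {                             // the "decompressed" form: in a real
        std::string loadedMap;                 // engine this holds textures, meshes,
        float player[3] = {0, 0, 0};           // physics, audio, entity state...
        std::vector<int> enemies;
    };

    // Restoring means re-running initialization from the compact description,
    // pulling assets from disk rather than from the save file.
    World Restore(const SavedState& s) {
        World w;
        w.loadedMap = s.map;                   // stand-in for loading map assets
        w.player[0] = s.pos[0];
        w.player[1] = s.pos[1];
        w.player[2] = s.pos[2];
        for (int id : s.aliveEnemyIds)
            w.enemies.push_back(id);           // respawn only the survivors
        return w;
    }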
You're assuming a lot of things in the pursuit of a perfectly spherical game, here. If it were that simple, one might think developers would already do similar. And some do. But a lot of games have issues with this approach, particularly when negotiating data that is partially owned by the engine and that which is owned by game logic.
In reality, making game state transferable takes a huge effort.
When you are writing a program there is a lot of state: some of it is visible to code, some of it is not, some of it is about game logic, and some of it is tied to the machine's physical state. And you are almost never going to separate those unless you specifically design for it.
That is what a save system is.
But the problem with a save system is that it is slow to load, because you need to initialize everything from the ground up. All the textures, stage data, and enemy entities need to be reloaded even if it is technically fine to reuse all of them.
And that is what Xbox does (I think Steam is unlikely to copy that?).
The idea of quick resume is to sacrifice storage space and just store everything (even things tied to hardware) to skip the initialization process for fast loading, which is the complete opposite of what Steam wants to do.
So far, only Xbox has implemented that (and only on the same machine).
You’re talking about transferring many GBs of video textures and other GPU state between GPU vendors. Even across a USB 3.0 interface this seems slower and an order of magnitude more difficult than just providing a mechanism to manage the save state more effectively. Look at the stuff PS5 and UE5 are doing with instant high-res loading. These are more tractable problems (in the past, games like HL, which was written by Valve, showed that it’s possible to have seamless, near-instant loading).
For quick resume on the same device, assuming no OS update was done, dumping memory might work.
But I just can't see it working across OSes.
Maybe if you limited it to the same arch (or even vendor, AMD here) and dumped everything, but used some tricks wrt. GPU state? (And also not dumping OS-discardable caches; no idea if games use those, though.)
Even the latter, as long as you're doing something like restricting yourself to a shared subset of OpenGL.
You really just need to dump textures and set up a matching context, which at worst should require an indirection shim to translate handles if they are stored by the application but generated by the driver with no way to force custom values during object creation (like how Linux allows custom PIDs for checkpoint/restore and other replay tech like rr).
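Very roughly, the shim idea amounts to a lookup table between saved handles and freshly generated ones; a hypothetical sketch (the class and surrounding calls are made up, only the GL calls themselves are real):

    #include <GL/gl.h>
    #include <unordered_map>

    class GLHandleShim {
        std::unordered_map<GLuint, GLuint> remap_;  // saved handle -> live handle
    public:
        // Called while replaying the snapshot: recreate the texture and remember
        // how the old id maps to the freshly generated one.
        GLuint RestoreTexture(GLuint savedId, const void* pixels,
                              GLsizei w, GLsizei h) {
            GLuint fresh = 0;
            glGenTextures(1, &fresh);
            glBindTexture(GL_TEXTURE_2D, fresh);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            remap_[savedId] = fresh;
            return fresh;
        }

        // Every GL call the game makes afterwards goes through this lookup,
        // which is exactly the indirection the reply below objects to.
        GLuint Translate(GLuint savedId) const {
            auto it = remap_.find(savedId);
            return it == remap_.end() ? savedId : it->second;
        }
    };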
"a matching context" means having exactly the same handles for all objects (since application memory will have references to them) and with modern-enough OpenGL even means having the same (virtual) memory addresses for all GPU-side buffers as well as the same addresses for any host-side mappings. This is probably not something you could build on top of OpenGL or even on top of the existing kernel-side drivers.
Even if you have a perfect wrapper that achieves all that, you now have to keep all application-supplied texture and shader data in its original form, even where that memory could normally be freed after translating it to GPU-specific formats.
But "restricting yourself to a shared subset of OpenGL" alone already makes this unviable for anything demanding, as it also means restricting yourself to the lowest common denominator for all limits, including VRAM size and maximum texture sizes, essentially guaranteeing that your solution runs worse on both systems.
Ehh, no need to save the application-supplied data in RAM. Also, I expect them to be typically supplied pre-compressed, not on-the-fly-converted (at least where performance matters).
The texture size issue I don't see as a problem (I thought we grew out of that being an issue years ago?), and the VRAM issue shouldn't be that hard, as games tend to not use spare VRAM for dynamic time-memory tradeoffs (unlike OS kernels with their pagecache).
On top of that, it's not like you need this functionality to deliver top performance from games that don't want to play their part to make this efficient.
Working seamlessly at "just" 30% less FPS should already be very useful.
rr doesn't need or use checkpoint/restore functionality. it's insufficient because rr already needs to catch and emulate all syscalls made by a process, and unnecessary because once you're doing that it's (relatively) easy to also emulate getpid. it also requires CAP_SYS_ADMIN or CAP_CHECKPOINT_RESTORE.
GPU is just one part, there are many more. As soon as you have network connections in play the state you'd need to restore isn't even all on the device.
Perhaps I've misunderstood but my understanding was that suspend/resume on the deck itself is recording some low level state (locally). So you can reopen the deck and be right where you left off with no loading.
But they also provide a hook so the game has a chance to save and sync to the cloud before being suspended, so the user can pick up from the same point on another steam device. Loading the game normally of course.
Interestingly, it works both ways. I recently wanted to play some oldschool Command & Conquer with some friends, and it was very easy to get these old games running in Wine, but my friends (with modern Windows laptops) couldn’t get it to work. Not even with compatibility mode.
Hardly surprising. Your typical WINE user playing C&C will notice if an update in WINE breaks it, and report it to winehq. EA don't give two shits about legacy C&C games that aren't making them money.
Microsoft used to be quite dedicated to this too. If you look at the leaked Windows XP source code you can find compatibility shims to ensure that older versions of Quicken or Acrobat Reader or Settlers 3 still work. A "don't break userland" policy is part of the reason they became so popular in businesses. Since then, times seem to have changed somewhat.
That changed because they did a major refactor starting with Windows 7. They brought the behavior up to spec, with much less ad-hoc special-casing this time.
To me it seems like cutting off the compatibility arm is worth it, compared to the price of having to keep shitty, inconsistent behavior.
Apple has nailed this (not bothering with backward compatibility) for the last decade, after all, since Apple realized that once internet distribution got so much better and source code management (plus repositories, backups, disaster recovery) became more accessible, the cost of porting dropped considerably.
To be fair though I think there's a happy medium. For a while I really loved Apple's approach of just not giving a damn about backwards compatibility but I have to say using Windows did make me realise just how wonderful it is not worrying about whether the software I want to use will work.
It's a constant issue around me. I know people still keeping 10.6 Macs and sweating profusely at the idea that one day they'll stop working, as the tools they use haven't been ported upwards (because most often the developer of the proprietary app is dead) and will stop too; imagine an artist whose favorite signature brand of crayons closes its doors.
For Mac OS 9 and older, yes, but not as far as I know for the first OS X releases - and running in a VM generally would not cut it (besides being extremely hard to do).
> To me it seems like cutting off the compatibility arm is worth it, compared to the price of having to keep shitty, inconsistent behavior.
The ideal solution would be to have a compatibility shim (like Wine) to provide the old behavior while being able to make the native behavior more consistent.
I think this is the wrong way to be looking at it.
WINE is an open-source project that explicitly wants to run as much Windows software as accurately as possible on non-Windows systems. If something's broken, anyone can fix it.
This isn't true for the proprietary software that is Windows and C&C, albeit C&C Remastered is partially source-available now, so that could improve matters somewhat.
I think it would be interesting to package up ReactOS [1], which shares a lot of code with Wine, as a VM that could be shipped with an application along with something like QEMU. If I'm not mistaken, QEMU supports virtualization through the Windows Hypervisor Platform and macOS's Hypervisor.framework. Could be a good way for GOG to sell old Windows games.
Not to take away from ReactOS as a project, this somehow doesn’t make much sense to me, as WINE (and its spin-offs, like Valve’s Proton) is already good enough as a portable gaming platform.
I was thinking in particular about the Wine on Windows use case. Running ReactOS in a VM seems a more direct route than running Wine in a Linux VM (e.g. via WSL 2).
Edit: I also think it would have made a lot of sense for Valve to invest in getting ReactOS running on their Steam Deck hardware, so they could use that instead of Linux. They're clearly a Windows-first shop, and ReactOS is modeled on NT right down to the kernel. Having a Unix-like OS running underneath, when one really just wants to run Windows code, adds an extra translation layer and impedance mismatches that might even be user-visible.
WINE is Windows APIs running on the Linux kernel (the host OS kernel). The Linux kernel is already high quality and high performance.
ReactOS is Windows APIs running on the from-scratch ReactOS kernel, which is not very good.
The ReactOS project is not the source for most of the APIs a game is going to use. Many of them come from WINE.
Using ReactOS instead of WINE on Linux makes little sense at this point. It would certainly take a lot more expense and time to create a commercial-quality product. Most of the work creating API support in WINE/Proton is portable to ReactOS if the kernel ever gets there.
About the ReactOS on Steam Deck bit: I'd suspect that this might be a bit too much for what looks like a fragile peace between Microsoft and Valve. Doing Deck on Linux is one thing (in the days of Linux on Azure it can't be much of a provocation), but ReactOS could still be too much I think. "If you want Windows, ask us for a good offer!" Technically Proton on Linux might be very similar (even the same code in key parts?), but on a subjective level, I could easily see one trigger a red line reaction the other would not.
I actually did that and solved a real problem with it (being able to run the Riven installer on a newer windows), by building Wine under SFU/SUA (I don't know if that still exists or has been removed in favour of WSL) and running it against XMing32. I never got freetype working so all the text was monospace and overflowing the dialogue boxes, but it was enough to run basic programs.
Command and Conquer and Red Alert both have DOS versions as well as Windows. The DOS one has very low resolution, but it means Dosbox is a possibility.
But all in all, I'd say go with OpenRA. My friend and I played it for a couple of weeks and we scratched the itch of the original games, with none of the hardships and a bevy of wonderful changes to the core gameplay.
Similarly, I was playing the recently released Humankind with some friends. My friends playing the first-party mac version were dealing with crashes every half hour, while I was playing through Proton completely issue-free.
> Also, OS/2 had a chicken-and-egg problem. Its best selling point was its compatibility with MS-DOS and Windows applications. However, this meant few developers took the time to write OS/2-native apps. So, why run OS/2 at all?
A more recent example of that is Blackberry 10. The way they tried to get developers was to ship an Android runtime so you can run Android apps. They even offered payments of $100 for every app a developer ported.
That just led to the BB appstore getting flooded with low quality Android apps that looked out of place and had terrible performance.
Which is not to say that BB10 would’ve succeeded if it had better apps, but it certainly didn’t help.
But with Proton/Wine it’s different because the games actually run really well, sometimes even better than they do on Windows, and it’s completely transparent to end users (unlike BB10, where Android apps behaved/looked different)
Plus, it doesn’t make a difference to the Linux desktop’s success whether devs write native apps or not. If a new business makes a real effort to develop and sell a Linux desktop product, the only thing that will matter is if customers can adopt it without losing access to the software/workflow they’re familiar with.
If they can pull that off, then competing against Windows shouldn’t be that hard; just don’t have a seething hatred of your own customers, and you’ll have a head start on Microsoft.
The only issue I see with the Proton reliance is the risk that Microsoft will do Microsoft things, and start introducing subtle API changes designed to break Proton/Wine and sabotage the competition. That’s a legitimate concern, but maybe today’s antitrust climate will make them a little hesitant to do that kind of thing? (Probably not)
> Also, OS/2 had a chicken-and-egg problem. Its best selling point was its compatibility with MS-DOS and Windows applications. However, this meant few developers took the time to write OS/2-native apps. So, why run OS/2 at all?
I don't think this is comparable to the Win32 vs. (GNU/)Linux situation. There do exist lots of people who wrote Linux-native applications in the past, but the APIs of many userland libraries in the GNU/Linux ecosystem tend not to stay stable for very long, so these applications simply stopped working at some point.
Because of the technical complications that shipping GUI applications for GNU/Linux involves, a lot of companies concluded that developing and shipping applications for GNU/Linux is not worth it.
OS/2's biggest issue was pricing. OS/2 was $200 per PC and you could only get it with an IBM PS/2. So if you were buying a clone, it would come with DOS + Windows and then you could buy OS/2. In 1994, IBM finally went for a broader market with OS/2 Warp. It was marketed as a better Windows than Windows 3.11... and it was. Then Windows 95 happened. Any thoughts of switching to OS/2 died with Windows 95.
OS/2 Warp was a day late and a dollar short. It had a weird multitasking thing where misbehaving apps could lock up the OS like Windows 3.11, even though they claimed it was better. I found the interface to be somewhat unintuitive as well.
It's not easy to "port" the OS/2 situation to the modern world. It is as much possible to "run" Windows software on Linux with Wine as it is to run Linux software on Windows with WSL.
Of course, since the "Linux API" is open/public/FLOSS, WSL may have an advantage in compatibility. What is hard to determine is which side this advantage will benefit.
I would think performance and forward compatibility are the two most likely reasons instead.
Eventually DOS was bound to die down due to its limitations (like TSRs, interrupts, lack of VM, the inability to handle devices in a safe manner) as hardware power kept increasing.
Porting over to OS/2 would ensure a much smoother way of keeping an edge over the competitors, if OS/2 ever did get a sequel. Thanks, Moore's Law!
Of course, OS/2 was ill-fated due to corporate thinking and disagreement on both sides, but the ideas are rooted deep within Windows 95/98.
Me running some Google thing does not mean I want to be spied on in every other place or at the system level.
(Though I am working on moving away from Google)
Anyhow, privacy is not a boolean (either you have it or not), but more of...
A gradient? Running Google stuff with Linux as the system is certainly more towards privacy than Windows + Google.
It's been exhaustively denied. However, given all we know about the CIA/NSA and their activities over the years (ECHELON, AT&T room 641a, intercepting and backdooring routers, PRISM, etc., et. al.), there is no amount of bloviating from Microsoft that will make me believe NSAKEY wasn't, and isn't, exactly what it looks like. In the end, you can't prove a negative, so all you can do is believe whatever you want to believe, informed by everything else we already know.
Microsoft is positioning Windows to be a platform that can run everything, while Microsoft itself, Google and Apple position themselves to prevent apps on their platforms from running outside of their parent operating systems.
Proton is nice, but still is a wrapper 'behind' what Microsoft can decide to do to make sure they are 'ahead'.
From a strategic point of view, this is annoying for Valve.
It makes more sense for Valve to try to get more SteamOS only games (and the Steam Deck should help with that), because they control that platform end-to-end (apart from manufacturing their CPU, it's close to what Apple does: Steam store + Steam hardware )
If developers start targeting proton, then Valve will be the gatekeeper that decides which windows apis get widespread adoption in games. Contrary to what the title says, proton will become the stable win32 api. Microsoft's distribution of the proton api could then be called "upstream" or "rolling release".
> If developers start targeting proton, then Valve will be the gatekeeper
Proton is basically wine + dxvk + esync/fsync/wait_v with some community patches (e.g. using a "fake" fullscreen mode that scales up to native res rather than switching resolutions explicitly) and integration for e.g. steam input handler.
In other words they're bundling up a bunch of open source tech with sane configuration and adding a few steam integration patches. And it's all open source, on github, too. There isn't much that Valve does explicitly in regards to what windows APIs are supported, and what they do develop and fix gets submitted to upstream wine (though not everything is accepted, I guess).
I suppose, if Proton usage explodes such that the bulk of patches to wine get submitted by Valve devs, what you said would be true, but so far that isn't really the case.
I mean FreeBSD doesn't do Orbis (PS4) games either. Windows and XBox are a bit more like Linux and Android AFAIK (though UWP is a different thing, apparently).
Linux has stable userland ABI. I managed to run a gui binary from Slackware 12 (from 2007) in a chroot and it worked completely fine. It didn't have modern theming or HiDPI support but it worked fine.
Windows apps running via wine and old linux apps running via chroot or any other container (AppImage, snap or Flatpak) all work.
The problem is not having specific (major) versions of old libraries in the new OS. If you manage to install them, they will work - this is difficult in most distributions.
Linux has a stable "userland ABI" only so far as the kernel interface stays (with very few exceptions) stable. This means, as you've said, you can run old applications, as long as you have entire copies of the rest of the userland to go with them.
(One potential problem I can see with really old Linux userlands/games is old Mesa wanting the old user mode switching drivers from Linux, that are long removed. You'd need at least newer versions of Mesa, I think, which may be impossible to achieve due to dependency hell)
This comes down to arguing semantics but I don't think this really constitutes a stable "userland ABI" in the sense it's used to describe Windows. Yes, you can use containers to effectively create independent copies of old userlands and run your code in them. But you can't just run old software without matching copies of the specific libraries it expects, which themselves have a rat's nest of dependencies, necessitating having ~N complete copies of Linux userlands floating around for N programs.
One way we could circumvent this problem is static linking, since then the only interface would be the kernel one, which is mostly backwards compatible. That has its own problems (e.g. the inability to patch libraries independently), but is probably better than just keeping copies of the shared object files in their entirety in containers...
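As a small illustration of that point: once everything is linked statically (for example a build with g++ -static), the only thing the binary still asks of the system is the kernel's syscall interface, which Linux keeps stable.

    #include <sys/syscall.h>
    #include <unistd.h>

    int main() {
        const char msg[] = "old static binaries keep working on new kernels\n";
        // write(2) through the raw syscall entry point: no shared objects,
        // no libc version checks, just the stable kernel interface.
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        return 0;
    }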
Functional package managers like Nix and Guix can reproduce an exact build of a program, including its dependency tree (build time and runtime), and package that into a container, which can be shipped as an executable image (stripped down to just the necessary runtime deps). Nix Flakes in particular allow you to define an entrypoint, making the build+run process nearly as painless as "docker run $x".
> This means, as you've said, you can run old applications, as long as you have entire copies of the rest of the userland to go with them.
Isn't that why Windows can run old applications, though? Every game used to come bundled with its own copy of everything down to the C & C++ runtime (and that's why they can still run).
No, not really. Win32, the bit that you use to actually put a window on the screen, is a system library that is simply backwards compatible forever. You don't distribute Win32 with your applications. Go ahead and try to find a copy of USER32.DLL distributed with your old applications; you won't find it.
The main reason Windows can still run stuff is that it has Visual C++ Redistributable packages at the OS level. So you've got some software that needs VC++ symbols from 2008? It usually comes as part of the installation process, but you probably already have it installed anyway.
Base system libraries like glibc, OpenGL, Vulkan, libasound, etc. all provide a stable userland ABI.
What doesn't work is relying on random other libraries to be present, but you will run into the same problem if you rely on whatever some other program dumped into system32 on Windows. You also can't link against a newer glibc/etc. and then expect to run on older versions - that is not inherently different on Windows either, but the SDK does let you target older versions from modern toolchains (with limitations), something that GNU unfortunately does not care to provide, so you have to make your own (or search for one).
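For what it's worth, the usual home-grown workaround looks something like this. It assumes x86-64 glibc, where memcpy gained a new symbol version in glibc 2.14; the .symver directive pins the reference back to the older versioned symbol so the resulting binary still loads on older distros. It's a per-translation-unit hack, not a supported toolchain feature:

    #include <cstring>

    // Make every undefined reference to `memcpy` in this file resolve to the
    // old versioned symbol instead of the newest one the toolchain sees.
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    void copy_block(char* dst, const char* src, std::size_t n) {
        memcpy(dst, src, n);  // now binds to memcpy@GLIBC_2.2.5
    }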
Each glibc/ALSA soname bump can spell death to an entire generation of binary software.
In theory, you can link libX11, openssl and other important libraries statically, but many developers don't do it, because of weird concerns about "security", "compatibility" etc.
Compare it to Windows, where libraries like kernel32, user32 and gdi32 remained stable for decades and statically linking to them is unnecessary by design.
The biggest problem with this is that no one is maintaining all of those old userlands, so when you use an old userland, you are reintroducing tons of known and fixed (but not backported) bugs, including many serious security issues.
The kernel ABI barely matters for most things. You'd hope libc wouldn't break. What's important is window managing APIs, graphics APIs, file system structure (your program location, user config location), environment variables, preinstalled binaries, and many smaller things (acceptable keyboard shortcuts, registering to open a specific file type, opening a native "open/save file/folder" dialog, etc). Those are a lot less stable and may require using various random libraries or writing code for specific WMs/desktops. It's not fun.
The tragedy of the Linux desktop is that its CLI is superb and Chrome works really, really well. The year of the Linux desktop happened with Android, but sadly all that successful GUI stuff that Google did was never able to percolate back into the FOSS world as a WIN32 alternative. It's a sad state of affairs for Linux desktop app developers, because they've had the wind taken away from them from all possible angles. The only Linux desktop app I can personally remember using in the past year (aside from the terminal and Chrome) would probably be kcachegrind. I wish that we as a community could focus more on building outstanding TUIs (e.g. linux perf, gdb --tui, emacs) since most Linux installs are headless and it makes an app more functionally useful if it can be used in a terminal via ssh.
The year of the Linux desktop did not happen and will not happen, if we consider the actual GNU+systemd+curl+gstreamer+freetype+Wayland/Xorg+Gtk/Qt+.... as the desktop layer. None of those projects share the goal of actually making a useful system with a stable API/ABI. Moreover, there is already quite a bit of competition inside, like Qt vs GTK.
What made Android successful is Google seeing that mess and creating an entirely new operating system, from kernel to especially userspace. Android in a way turns Linux into a system that is much closer to a microkernel based OS. The binder IPC on the kernel side resembles the message passing systems of microkernels and almost all communication, except a couple of system stuff, passes through it. From allocating display buffers to "Share" buttons all of them uses this system. It has nothing to do with classical desktop system nor the Unix "way" of doing things, which is oversimplifying the system layer to the point of almost uselessness.
While the build system of AOSP and Android apps gives me shivers, the OS works beautifully and it can be continuously upgraded and be kept secure, if greedy SoC and device manufacturers released driver updates.
The CLI is just like any other UNIX/POSIX, and as far as Android is concerned, Linux does not exist for userspace coming from Play Store, including apps written with NDK/AGK[0].
Only OEMs have the ability to directly use Linux, doing so from userspace means using what Google considers as being private APIs.
Not to mention that Google could at any moment replace it with Zircon, if they felt so inclined.
So Android is hardly the best that Linux could make it as desktop.
glibc is quite good about ABI compatibility - not any worse than Win32.
X11 also has a stable protocol. Even with Wayland there is Xwayland for backwards compatibility.
OpenGL and Vulkan have stable ABIs.
Your program location doesn't matter - unless you are installed via the package manager (in which case the package specifies the location) you can load everything relative to the binary (see the sketch after this comment).
User config location has gained a new standard for where you should put things so as not to mess up the home directory, but that does not break anything. Old programs can still write to ~/.yourfancydotfile.
Environment variables only matter if you wrote your program for them to matter.
Preinstalled libraries are irrelevant - ship your own like you have to do on Windows for anything that is not part of the base system.
Acceptable keyboard shortcuts and standard dialogs are integration concerns and not strictly needed. Not saying that you shouldn't provide DE integration where that makes sense, but that can be done on top of sane fallbacks. Also, there is now a cross-DE API for standard file dialogs: xdg-desktop-portals.
.desktop files have been standardized for a long time so you should not have problems registering file types - it is of course up to the DE to manage the default and that is a good thing.
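To make the two points about binary-relative paths and config locations concrete, here is a small Linux-specific sketch (assumes /proc is mounted; the helper names are mine, not from any library):

    #include <cstdlib>
    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    // Directory containing the running executable, via /proc/self/exe.
    fs::path ExeDir() {
        return fs::read_symlink("/proc/self/exe").parent_path();
    }

    // Assets shipped next to the binary work no matter where the user unpacked it.
    fs::path AssetPath(const std::string& name) {
        return ExeDir() / "assets" / name;
    }

    // Honour $XDG_CONFIG_HOME if set, otherwise fall back to ~/.config,
    // mirroring what older programs did with a plain dotfile in $HOME.
    fs::path ConfigDir(const std::string& app) {
        if (const char* xdg = std::getenv("XDG_CONFIG_HOME"))
            return fs::path(xdg) / app;
        if (const char* home = std::getenv("HOME"))
            return fs::path(home) / ".config" / app;
        return fs::current_path();  // last resort
    }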
Does this work seamlessly alongside other apps on your machine? By which I mean:
• Does copy-and-paste work?
• Does the file picker dialog default to a logical location, or does it start out in some weird directory?
• If you click on a URL, does it open in your default web browser?
• Can you screen-share the application from Zoom?
I'd also be curious whether there's a performance impact beyond disk usage.
Basically, I'm wondering if the UX is substantially better than using a VM. VMs are great for backwards compatibility, but the UX sucks, because apps don't exist in a vacuum.
If chroot does work seamlessly for GUI apps, that strikes me as a perfectly reasonable backwards-compatibility solution, and better than what Windows does in some ways. Heck, distros should consider automating this when users try to launch old software.
Edit: I also wonder how much of this problem would go away if Linux devs were willing to ship statically-compiled binaries. (Or mostly-statically-compiled binaries, since I know including e.g. gpu libraries is problematic.)
Assuming the GP was using x11, then yeah it's very likely almost all of those things worked fine (depending on how they structured the chroot to include their homedir). The only one that probably doesn't is the URL->browser one, just because the chroot probably wouldn't have access to run the browser directly, even if the right MIME-type/xdg config files were present in the chroot.
X11 as a protocol if anything is probably more backwards compatible than windows has ever been, and most of the extensions that enable the things you're asking about have been available for decades. ABI and libc aside xclock from 1996 probably still works on ubuntu 22.04.
> Heck, distros should consider automating this when users try to launch old apps, if at all possible.
This is more or less what snap and flatpak are trying to do.
X11 doesn't provide anything for opening a file dialog. afaik you have to just call GTK-specific things to achieve that (hell there's a C library specifically for just opening a file/folder open/save dialog). X11 itself doesn't provide that much beyond opening & managing windows (and the copy&paste mess it provides is, well, a mess.)
Sure, but the question wasn't "will a file dialog open at all" it was "will it open in a weird place". If the app was using gtk and you're running it in a chroot with period-appropriate gtk that gtk file open dialog will probably open just fine, and then it's a matter of if you bind mounted your homedir and /etc into the chroot so it'd know where to go. From there it'll open in approximately as weird (or un-weird) a place as it did in 2007. Like, this isn't a problem almost specifically because x11 isn't trying to do anything very clever here. There's no OpenFileDialog along with an OpenFileDialogEx and OpenFileDialogExtraArgumentsFor2016Ex APIs to keep going. If your program had a file dialog then, it has one now. It's probably just ugly.
As for copy/paste, it is a bit of a mess but it's a mess that works pretty well for this kind of situation. If the copyer and paster windows are in the same x11 context and use the same (widely used) protocol over x11 window properties, they will be able to copy and paste with each other regardless of if they're running natively, in a chroot, in a vm, or on a remote machine.
Incidentally I really miss having the second "select/middle click" copy/paste context when I'm using windows or a mac. It's really frickin' useful and by far more useful than almost any other middle mouse button uses I've ever seen in any non-game. But then I also miss having normal copy/paste in the common linux gui terminal emulators so...
Isn't the Linux world (slowly) moving away from x11, though?
(If you can't tell, I'm not a Linux user, I was many years ago and I'd like to go back some day. These types of library compatibility issues are a major part of what's keeping me away—I want to know that most apps from 15 years ago will still work, and most of the apps I have today will still work in another 15 years.)
Things are moving to wayland, but x11 is never going to go completely away. Xwayland provides an xserver that can be used within wayland.
Some of the things you listed as problems would be problems anyways in wayland, because they have to do with the fact that wayland doesn't, for example, give unrestricted access to every single program's image buffer from any program (for good reason).
> Linux has stable userland ABI. I managed to run a gui binary from Slackware 12 (from 2007) in a chroot
The key word is chroot here, and managed. Sure, a technology enthusiast might manage to run binaries from 2007 on a chroot, as in, after having invested hours of work to find an outdated copy of an entire distro and then wasting gigabytes of hard drive space on making it work. But this doesn't help the average linux gamer who just wants their downloaded game from 5 years ago to still work. Nor does it help the ISV vendor who wants to sell GUI software that can be used on modern linuxes as well.
I think you're missing the point. This is all still a terrible user experience for the stated consumer: a relatively non-technical person who found an old game binary and just wants to run it.
The only acceptable user experience for this sort of thing is "double click on the icon and it just runs".
Sure, but it's not hours either - one could reasonably whip up a small GUI with a dialog box like Windows' compatibility one, where you can run a particular piece of software against whatever distro of yore.
Linux having a stable ABI only if you don't include any userspace libs isn't really that much of a guarantee. Sure, it means you can run ancient distros on newer kernels, provided they don't depend too much on the specifics of X11 or OpenGL, but that's not great unless we're happy with "oh, this software works on Ubuntu 14.04 so we'll just ship all of an Ubuntu 14.04 userspace with the app." (looking at you, Steam Runtime)
Unless you depend on a central service like X11 where only one version runs on the whole system and newer implementations/replacements may intentionally break things.
Last time I had to run some ancient software on Linux I just spun up a VirtualBox image with the Linux version it was released for (of course all the mirrors for that were offline by then too).
Yeah, I have to agree; there is always a vocal group of non-devs who hate it when they have to install a 3rd-party compat library to run something from 10 years ago. To them this means it's broken, when in reality it is bloat for the other 90% of the distro userbase, so it isn't in a default install for good reason. (Not to mention some clunky apps dropping back to old support when they see qt3, even when they support say qt5 in their UI...)
> The problem is not having specific (major) versions of old libraries in the new OS. If you manage to install them, they will work - this is difficult in most distributions.
My experience was that often it is the opposite problem. Games came with bundled libraries of old versions (e.g. libSDL) that do not work properly with the rest of the OS. When they are forced to use system versions of libraries, they started to work.
Oh the linker needs a lot more than a massage; The linker wants full service.
When a user needs to run a program, but the inconsiderate lazy dev only built, packaged, and tested the program on 24 ABIs (4 versions of 6 distros) rather than 25 ABIs like he should have, this is what the user wants to see:
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a && :
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gmain.c.o): In function `g_get_worker_context':
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gmain.c:5848: undefined reference to `pthread_sigmask'
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gmain.c:5853: undefined reference to `pthread_sigmask'
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gregex.c.o): In function `match_info_new':
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:588: undefined reference to `pcre_fullinfo'
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gregex.c.o): In function `g_match_info_next':
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:734: undefined reference to `pcre_exec'
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gregex.c.o): In function `get_matched_substring_number':
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:1076: undefined reference to `pcre_get_stringnumber'
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:1079: undefined reference to `pcre_get_stringtable_entries'
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gregex.c.o): In function `g_regex_unref':
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:1262: undefined reference to `pcre_free'
/data/source/vcpkg/buildtrees/glib/src/2.52.3-34a15219ec.clean/glib/gregex.c:1264: undefined reference to `pcre_free'
/data/source/vcpkg/installed/x64-linux/debug/lib/libglib-2.0.a(gregex.c.o): In function `g_regex_new':
Clear as mud.
Of course you could use the buggy 16 month old version from the package repo.
Maybe Linux should just agree on a standard runtime like Steam has. That can be free and open and cross-distro, and be based on "API Levels" like anything else, so your app can target "API Level 27", and always be usable because API27 never changes, you just make a level 28 next year.
Eventually people would get tired of patching old ones, but at that point you just freeze them and have people use a VM to avoid the security holes.
Windows has compatibility mode for older windows. And it's great.
congratulations, you have reinvented https://en.wikipedia.org/wiki/Linux_Standard_Base, which is dead. it's not a terrible idea, but had a number of fundamental problems. first, it was a massive undertaking primarily driven from the "enterprise" side with minimal support in the community. the former isn't necessarily an issue, but the latter was a death sentence for a supposedly comprehensive standard. furthermore, the goal of LSB is fundamentally at odds with the extensible and decentralized nature of Linux; full adoption would destroy most of the value of Linux as a customizable system.
funnily enough, though, Red Hat failed with LSB, but systemd is, for better or worse, accomplishing much of the same goals, causing many of the same issues of poor extensibility and customizability.
Linux seems to be a lot more enterprise dominated now, so perhaps LSB could make a comeback.
The new trendy solution seems to just be having different tiers for devices that can't handle the full thing.
Customizability itself is, to some degree, at odds with pretty much all of what the majority of users who are not hobbyists want, so I'm not sure it's the best plan to prioritize that.
On the other hand, the current solution of pretending that only Ubuntu and Red Hat exist and leaving other distros to figure out compatibility for themselves seems to be a slightly less restrictive de facto standard that works well for most.
Quick search shows Debian has a Flatpak package, which feels like the right amount of involvement. Ubuntu installing Snap packages via apt is exactly the behavior I don't want.
Or FreeBSD, or virtually every other OS out there. Honestly I do wonder why Valve chose Linux over FreeBSD for this use case; for a single-purpose machine FreeBSD offers a far more stable baseline.
FreeBSD has poor hardware support; the modern graphics acceleration drivers are simply ports of the Linux kernel code. Sure, Valve could port or rewrite the drivers for the limited set of devices they support, but why would they? Valve supports Linux not because of its technical advantages, but simply because it helps mitigate some of the Microsoft- and Apple-related business risks. Linux is undeniably a safer, more conservative business decision than FreeBSD.
> the modern graphics acceleration drivers are simply ports of the Linux kernel code.
Does that matter? If there is a stable driver that's what's important, and FreeBSD has if anything a better track record for maintaining drivers for longer, even if those drivers were originally ported from elsewhere.
> Linux is undeniably a safer, more conservative business decision than FreeBSD.
I disagree; safe and conservative is exactly what FreeBSD does better than Linux.
Safe and conservative isn't what's needed for gaming.
For example, a lot of the tricks that Proton relies on explicitly use patches to the Linux kernel around the FUTEX_WAIT implementation, which have only been in mainline since 5.16 afaik, as well as bleeding-edge Mesa/xorg versions. The current state of things is that it makes sense for them to base on a rolling release where they can frequently push system images with bleeding-edge versions to the Deck hardware.
Maybe a few years down the line it'd make more sense to base on something more stable like BSD, but things are moving very quickly in Linux+wine gaming support and it's actually helpful right now to be on experimental/unstable versions of certain things.
Tech almost always reaches a good enough point eventually. SSDs made electron apps good enough and IMHO somewhat made lightweight kind of an outdated idea.
Bullseye is the good-enough point for basically everything other than gaming on Debian distros; there's not much you can't do with just the bullseye repos.
Now that Valve has decided Linux gaming is a real thing, within a few years Debian will probably be good enough for that too.
Not that it will stop them from constantly changing stuff in breaking ways for 3% better performance....
It was _their_ hardware; Sony didn't care about generic hardware support, since they only needed to support their own, and only they know how it works. Not at all the same thing.
The only thing Sony needed from FreeBSD was x86 support and generic kernel facilities such as memory, process and IO management. Everything else they wrote themselves for their proprietary hardware and userspace.
Steam (or whoever at this point) could bring together the efforts put into static linking of Linux apps and combine that with some sort of app isolation (FreeBSD-like "jailing"... maybe doable with LXC?) so that a game binary, once linked, doesn't need anything other than the kernel, and the isolation keeps it from affecting other apps in case it is exploited through vulnerabilities in statically linked parts of the binary.
This way, we could have forever running apps if that is what we want.
I mean, Steam installed from Flatpak (app isolation) running Windows games targeting a specific Proton version (a static set of libraries inside another layer of isolation) is basically what you're asking for, and it's already out there.
20 years ago I was developing software for Windows 98. Recently I tried to run some of the programs on Linux and they still all ran fine unchanged.
But the things I wrote for Linux need permanent updates because some function change breaks them. Especially major version changes, qt4 to qt5 or gtk2 to gtk3...
I was thinking someone should develop a Wine-like layer for Linux ABIs themselves, translating more historic ones into modern ones. That will allow running old native games for a much longer time.
In that sense I agree that Wine provides a benefit of preservation and very long term support that native Linux ABIs don't.
I completely disagree about regular desktop applications though. The games use case is different because they are developed as one-time projects and game companies move on to the next game, in essence abandoning support for the old ones after a relatively short time. That kind of program needs stable ABIs to be usable long term.
Desktop applications on the other hand (like browsers, for instance) are evolving and are actively developed continuously. They don't need Wine on Linux. I'd simply stay away from those desktop applications which are abandoned.
We might end up in a world, where all the command line applications are Unix/Linux (and run on Windows via WSL) and the GUI applications are Win32 (and run on Linux via Wine)
This would be playing to their strengths. There is a much nicer command line/shell ecosystem in Linux and a much nicer GUI ecosystem in Windows.
Unfortunately the GUI world seems to be settling on web rendering. While this is a massive bonus for cross platform support I’m still having a hard time agreeing that running a dozen Chromium instances is a good thing.
I don't think Windows has a nice GUI. It is incredibly bloated.. and Windows has been pushing UWP apps for some time. Please no.. Linux already has an influx of Electron apps, we don't need Windows UI.
UWP has been unofficially done for a few years now (no official statements, but you could tell from the lack of updates) and they officially shut the door on future updates 4 months ago.
It’s actually kind of insulting, because if you were one of the few developers to believe Microsoft and write, say, business apps in it as they said, you just got the short end of the stick.
It’s not a huge surprise why Microsoft can’t rally developers behind anything.
Yeah, it’s a mess. It seems like eventually they figured out that UWP’s sandbox+packaging requirements were a huge mistake (see Project Reunion/Windows App SDK), but that realization came much too late.
I developed for UWP, though during the early days when Windows Phone 10 was still a thing. Honestly it wasn't the sandboxing that turned out to be the biggest problem (see all the Electron apps nowadays) but that the UWP platform was just shit in every way possible: from the design language that forced a desktop to become a phone, to the store that never got any traction or quality support, to the SDK that needed at least 3 more years of hard work and lots of extensions to become useful, but most importantly the terrible monetization. Microsoft entered a market where Android and iPhones were making developers and companies a lot of money, yet on Windows Phone and with UWP apps on desktop it was impossible to make any decent amount of cash.
Yeah, I'm fond of the AppContainer sandbox in a way; I think most consumer applications can get by in a sandbox (my UWP app could), and sandboxing applications can be empowering for users since it lets them run more code without worrying about trust.
Unfortunately, the execution was a mess. Not having any way to opt out of the sandbox was an insane decision and it led to crazy workarounds; even first-party Windows software has to do things like ship a sidecar exe to bypass the sandbox (the Store does). And troubleshooting issues with the sandbox was a pain, configuring capabilities was a pain, and there was very little in the way of iOS-style user-facing permission controls.
And don't get me started on the marketing; MS flacks kept telling devs how sandboxing benefits users while doing virtually nothing to convey those benefits to users.
Not the first time they had done this. My first job was actually developing a replacement for a Silverlight App from a company heavily invested in ms/.net/c# in JS/HTML5. They were super heavily invested in Silverlight (easy decision as most of their codebase was WPF/C# already) and in the end it bit them.
It's quite interesting. I'm not sure that broad use of Win32/Wine for all types of GUIs would even work well for things like apps communicating with the broader desktop, and accessibility -- never mind that the desktop software stack is already a crumbling tower of complexity.
But I can see it being attractive for full attention/fullscreen apps (like games) that don't use those things or by design bring their own implementations for those things, where the benefit is the reduced decision space for the developer, at the cost of the increased complexity. However, as the article states, ports are often using "compatibility layers" that are just worse versions of Wine though, so for such quasi-ports, there really is no increased complexity in using Wine instead.
That is why so few bother to support Linux OEMs, having some kind of POSIX support is enough for the large majority.
However I still disagree; structured shells with REPL-like capabilities, like PowerShell, are so much better and so much closer to the Xerox PARC experience.
Not the GP, but generally speaking it's not a great reach to say that Linux is more widely supported than POSIX. GNU-only flags are often used in shell scripts, and you'll often find binaries packaged as a snap or Docker container, thus anyone running a non-Linux system is left to compile that project themselves.
Even WINE itself doesn't officially support all POSIX-compatible UNIXes.
Yep. WINE is a second-class citizen on any platform that isn't Linux (as is most POSIX-like software these days), and requires special patches just to run on MacOS (and even paying for Crossover won't give you 32-bit binary support on Mac).
For better and worse, the POSIX spec is dead. Besides a few niche use-cases where compatibility with archaic software is necessary, Linux is the new standard for this stuff.
I don't mean it in a literal sense, though I get where you're coming from. Think of it this way: your server software's stability is more or less defined by how well it conforms to Linux compatibility. Even software that is designed with perfect POSIX compliance won't run reliably on the majority of servers. Linux, despite not being a standard, has become a spec for server software.
The kicker is that nobody can really undermine them unless you can create a true alternative with a more compelling license than GPL. If our industry wants to return to a truly standardized status-quo, we need to put aside our differences and collaborate on a next-gen spec that works. Considering that we barely did this in the 80s with POSIX, I seriously doubt today's developers and researchers are capable of doing it again. I'd love to be proven wrong though.
> After all, we all know the Year of the Linux Desktop is around the corner
It's been a while since I've laughed out loud like when I read that.
And it's not an insult to Linux, but I've been hearing about the year of Linux on the desktop like I've been hearing about the apocalypse. Believing in the year of Linux on the desktop is the same as believing in Nostradamus.
Just like Linux only started getting all the money from IBM back in 2000, because it was cheaper than AIX and a good way for IBM to disrupt Microsoft offerings in the enterprise.
Just my little anecdote: I needed to run an old version of Eagle CAD and I wasn’t sure how to get old Linux versions to run. The old windows version worked great in WINE.
I'm not quite sure what you mean. I have a valid license key for that version, but I couldn't figure out how to run the installer. I don't recall exactly what the problem was, but I think it required an old version of a library that was not available in my package manager.
How do we fix this? A lot of good analysis of WHAT the problem is here (userland libraries have no "stable API" guarantee, and in fact revel in breaking backward compatibility in some weird sense of spite for people who don't update). Maybe Linus can release Linux II and say "everyone shall use the GTK4 and wayland 1.0 API or you can't use the Linux II trademark"
Then again, maybe it doesn't need fixed, since it seems to be working great. Distros just need to ship with WINE preinstalled and better-integrated. Hell, write an entire distro using win32 apps.
If you can't convince people to use your solutions because they are sufficiently better than the seventeen different things people actually want to do with their own time on their own machines, then trying to abuse trademarks to force them to use what you would prefer is contrary to the ideals of any free software camp you care to name. You would be laughed at equally from all sides and find no users for Linux II save for rebrands that removed all mention of the toxic mark.
It would be trivial to go on about all the ways and means this couldn't possibly work, but that would be almost beside the point. The fact that you want this to work is by itself problematic. It's a toxic sort of entitlement to a universe that is a little less complicated and a little less free in order to be a little more convenient, and it's so problematic that the universe is better off without anything that might be contributed in such a spirit.
This effectively sums up the difficulty in targeting Linux. The concept of any sort of consistency and reliability is immediately dropped. Nothing will ever work the same on two different Linux computers and there's no way to tell if it would, because even compatibility branding is an assault on freedom itself.
Are you seriously calling a single individual using a trademark to tell everyone downstream what software they are allowed to distribute with Linux, in order to force everyone to run the software you prefer, "compatibility branding"? That is dystopian.
Linux isn't a sub-project of GNOME, nor will anyone accept it being run like one, and why would they? People and projects own their particular contributions; they don't own the community or the right to tell others how to use them or their own hardware. Developers of a particular project can rightly say nonpaying users have no right to software they paid nothing for working in a certain fashion, but likewise, by throwing their software to the wind, they have acquired no consideration in return. This means they don't owe you not speaking negatively about said changes, they don't owe you not forking, and they don't owe you not replacing your software with software that works how they would prefer.
People are liable to experience disappearing functionality, software, or options as a loss and react accordingly. If a change would be experienced by users as a loss that is paid for not with tangible benefits but with, for example, developer productivity or satisfaction, it's unlikely to garner users. Commercial software that is sold outright, version after version, promotes understanding this. Nobody ever sold version n+1 with the idea that you would accept fewer features in exchange for a simplified code base, unless the software is being sold by developers to developers. End users will never make such a trade. Developers of such software know that they in fact have to compete with their own software: n+1 needs to be tangibly better for end users.
For example, if Wayland were actually desirable AS IS and successfully communicated to its intended users (considering what OS, OS version, and software they run), then more than 15% would be using it. 15% uptake 14 years in is either a failure to be meaningfully better in ways that people actually care about or a failure to communicate.
Instead of wishing you could make people use it just keep improving it until people are willing to adopt it.
Too bad win32 is such a finicky and verbose API that's painful to use for anything beyond the simplest of dialogs. I tried. The MSDN documentation doesn't quite cut it. TheOldNewThing by Raymond Chen is a must-read to get it right.
But above all, a stable ABI is only needed for proprietary programs which I abhor for not being FLOSS. As a programmer, I really do think the 4 freedoms are important and loathe anything non-compliant.
Now I'm curious, what exactly were you making where the win32 documentation was insufficient, but the "one weird trick" blog TheOldNewThing was helpful? Did you buy any books on win32 or check stackoverflow?
I've found win32 to be a bit hard to find documentation on sometimes (the official documentation is complete, but sometimes knowing which hook or function to use is up for debate; stackoverflow has good answers there), but overall it's a very good API that gets out of the way and lets me just build.
The message loop (using MsgWaitForMultipleObjectsEx) was somewhere around 20 LOC, to enable all the things that the application had to do (including correct handling of dialogs). I don't count that as getting out of my way.
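For reference, a generic loop of that shape (a sketch, not the poster's actual code) looks roughly like this: wait on kernel handles and the message queue at once, drain all pending messages, and route input through modeless dialogs so Tab/Enter/Esc behave correctly.

    #include <windows.h>

    int RunMessageLoop(HANDLE workDone, HWND modelessDlg) {
        for (;;) {
            // Wake up either when `workDone` signals or when input arrives.
            DWORD r = MsgWaitForMultipleObjectsEx(1, &workDone, INFINITE,
                                                  QS_ALLINPUT, MWMO_INPUTAVAILABLE);
            if (r == WAIT_OBJECT_0) {
                /* handle the signalled kernel object here */
            }
            // Drain the queue; there may be several messages per wake-up.
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    return (int)msg.wParam;
                // Give modeless dialogs first crack at keyboard navigation.
                if (modelessDlg && IsDialogMessage(modelessDlg, &msg))
                    continue;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }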
> "Too bad win32 is such a finicky and verbose API that's painful to use for anything beyond the simplest of dialogs. I tried. The MSDN documentation doesn't quite cut it. TheOldNewThing by Raymond Chen is a must-read to get it right."
If it's helpful, I'd like to mention that Charles Petzold's "Programming Windows" has been pretty much the reference for writing Win32 apps since the '90s. (It's the same person that wrote "Code" and "The Annotated Turing".) You probably want the 5th edition as that's the last pure Win32 version; the 6th edition seems to be oriented towards C# based WinRT apps.
I got started with Win32 programming 2 years ago using the 5th edition of Programming Windows. It's great, but it did feel like it was missing a few things. For example, it doesn't cover disk I/O at all.
I found that Pavel Yosifovich's "Windows 10 System Programming" was an excellent complement to Petzold; it fills in a lot of gaps, and it's quite recent which is nice (there were some bits in Programming Windows that would be dropped if it were rewritten today).
I mean, that's debatable. But in any case Xlib, Xaw, and Motif can be completely bypassed nowadays--XCB and Wayland are complete replacements for Xlib (with EGL finally providing the needed XCB-compatible alternative to GLX), and GTK+/Qt are alternatives to Xaw and Motif--while Win32 is still the only way to do many things on Windows. Granted, the vast majority of devs use wrappers around Win32, so they don't have to care about the low-level plumbing.
Which have very sparse documentation, but do work just fine.
Win32 covers a lot more than xlib, athena and motif. And still I have to resort to porting over bits and pieces from the FreeBSD C library because in a pure win32 project (no msvcrt), things are missing.
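For a taste of what that looks like, here is a sketch of a strdup replacement built on the Win32 heap API. my_strdup is a made-up name for illustration; built as a normal program this compiles as-is, while a true /NODEFAULTLIB build would additionally need its own entry point:

    #include <windows.h>

    /* Hypothetical strdup replacement for a CRT-free Win32 build,
       using only the Win32 heap API. */
    static char *my_strdup(const char *s)
    {
        SIZE_T len = 0;
        while (s[len]) len++;                   /* strlen without the CRT */
        char *copy = (char *)HeapAlloc(GetProcessHeap(), 0, len + 1);
        if (copy) {
            for (SIZE_T i = 0; i <= len; i++)   /* copy including the NUL */
                copy[i] = s[i];
        }
        return copy;
    }

    int main(void)
    {
        char *msg = my_strdup("no msvcrt needed for this string\r\n");
        if (msg == NULL)
            return 1;
        DWORD len = 0, written;
        while (msg[len]) len++;
        WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), msg, len, &written, NULL);
        HeapFree(GetProcessHeap(), 0, msg);
        return 0;
    }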
The thing is, getting all the details right requires extensive research and a lot of extra nonsense that shouldn't be necessary. For compatibility, accessibility, consistency, etc. Sometimes it's amazing things work at all.
Running Win32 UI code as WinForms through .net is a lot less painful than doing straight Win32. Not very good for deploying a tiny program obviously though.
There's been a lot of work happening to make WinForms trimmable. I think it will land in the upcoming .NET release (7) or the next one, I'm looking forward to being able to ship small WinForms apps with zero dependencies. https://github.com/dotnet/winforms/issues/4649
This was the same conclusion I came to as well, both for my personal projects and in what I now recommend at work when the subject of potential Linux ports comes up: if WINE is good enough to run it, why not just do that?
I'm fine with this when it's some freestanding native-compiled C+DirectX application.
But it really irritates me when the Windows version is, after all, just a bunch of scripts loaded into a scripting runtime (think e.g. Love2D, or RenPy); but then the first-party macOS/Linux downloads turn out to be WINE wrappers wrapping the Windows version of the scripting runtime, rather than using the macOS/Linux versions of the scripting runtime.
I get that people don't always have multiplatform development setups; and that there are easy-to-use tools that "wrap" a Windows application into macOS/Linux versions. But these runtimes usually also have cross-"compilation" options that are just as easy to use!
As a Linux user, I'm generally fine with that "WINE support" mentality, as long as the application isn't demanding (like an NLE, photo editor, DAW, etc). There's a ton of Windows-only tools that I have no problem using on a regular basis, like Lunar IPS and Winamp.
Is there really still no Linux-native reimplementation of Winamp? (In the sense of a lightweight program that 1. plays music and 2. loads Winamp skins and faithfully renders its UI with them, presumably designed from the ground up primarily to use Winamp skins.)
That's doubly-surprising, given that there's already a web reimplementation of Winamp (https://webamp.org/) that does those things. (And it's "lightweight" in terms of not pulling in any JS frameworks; but not especially "lightweight" in terms of you needing a full web browser to run it.)
I don't really use Winamp through WINE very often, mostly just a party trick for disenfranchised Windows users :p
If you're looking for a no-frills native Linux audio player, I recommend Quod Libet[0] wholeheartedly. Nothing fancy, but it has loads of organizational features bolstered by a rock-solid GTK3 codebase. Genuinely have no complaints with it, I'm really impressed with how polished and feature-complete it is.
Might be a crazy idea, but what do people think of repository owners compiling from the source code? Valve or Canonical could do it and put out binaries. Obviously there would be agreements about the security and management of the source code. Maybe it could even be done cryptographically so that no source code needs to be visible outside some kind of build machine.
Seriously, the title is like saying that x86 is the only processor architecture for computing and we need to embrace it...
There are good and bad sides to supporting the Win32 ABI/API with ALL of its quirks and bugs. But most of this (99%+) is about supporting dead projects that will get no support or maintenance even if the bugs turn out to be client-side misuse of odd/undocumented errata behaviour.
This is a fundamental clash of cultures: release-and-maintain versus release, fire the devs, and cash a cheque. Very few closed-source Windows projects will _ever_ get even the level of updates that VMware or Nvidia products get over their lifecycle on Linux, and those are companies berated for being mere months behind mainline kernel development...
One example: Descent2 XXL, an open-source game engine that can run the classic game if you have the level data from Descent 1 and 2.
On Windows, it works just as it did on Windows XP, 7, and so on. Many years later, it still runs.
On Linux, on any distro that isn't 10 years old, it is basically impossible to make it compile. No binaries can be found that work on any Ubuntu or Red Hat release that people actually use in 2022.
Whoever said static linking is bad never thought about these scenarios.
That's why both Rust and Go use static linking. The future of not-ridiculously-slow software is going to be statically linked.
In a modern computer with modern amounts of memory, we should be spending the RAM on statically linked executables instead of bloated Electron runtimes.
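To put it concretely, and assuming a toolchain that ships a static libc (musl, or glibc-static), a fully static Linux binary is only a compile flag away:

    /* hello.c: build with something like `cc -O2 -static hello.c -o hello`
       (requires a static libc such as musl or glibc-static).
       `ldd ./hello` should then report "not a dynamic executable", and the
       binary keeps running across distro upgrades because it depends only
       on the kernel's syscall ABI. */
    #include <stdio.h>

    int main(void)
    {
        puts("statically linked: no loader, no shared-library version skew");
        return 0;
    }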
> The future of not-ridiculously-slow software is going to be statically linked.
If you are running MS Teams, Skype, Discord, and Slack all on one PC, and have Chrome tabs open... how many WebKit/equivalent runtimes is that bundled all together (assuming all of the chat apps still use Electron aka WebKit)?
Well, they could go the whole way and also port WinRT to Linux and MacOS! We can then have proper desktop cross-platform native apps without the electron baggage.
> Linux users tend to be less picky about any possible UX issues induced by Wine. They’ll tolerate a lot of flaws if it means they can have your application at all.
The article doesn't mention Docker format, which solved this for server-side apps. It seems like that's an example of the Linux ABI getting used on other platforms?
I don't understand the whole thing. Instead of working on getting game developers to release their games natively on Linux, people are trying to get things running on Linux through a brittle pile of crap.
Except that it's (1) not a pile of crap (it actually works unreasonably well, and keeps improving) and (2) building for one platform and having third party tooling make that work on other platforms means you are not spending five, ten, or even more times as much on purely supporting additional build targets.
No sane PM would go "hey let's take a huge chunk of our budget and spend it on native builds and QA for at least five different-enough-to-all-have-their-own-bugs linux platforms in addition to Windows, rather than spending it on actual game development for a single platform".
You realize that Linux has <2% market share across all distros and versions, each of which needs a different binary, right?
“We shipped Planetary Annihilation on Win, Mac, and Linux. Linux users were a big, vocal part of the Kickstarter and forums.
In the end they accounted for <0.1% of sales but >20% of auto reported crashes and support tickets (most gfx driver related).
Would totally skip Linux.”
“By the end of my time at Uber I believe very nearly 100% of both crashes and support tickets for the game were still Linux-related, even after significant engineering time. Way more Linux-specific time was put into that project than into any other platform.
And again, that was for a tiny fraction of the users.
Adding Linux support ended up likely costing Uber hundreds of thousands of dollars for a few hundred dollars in sales revenue.”
Have you ever worked in the games industry? Anything that has to be done costs money and needs to be very well justified. Making working Linux builds and testing them is not just switching a compiler flag.
It's a multi-week project that needs to be funded and takes developer resources away from actually making the game good, for an absolutely minuscule userbase that could play your game just fine, without any drawbacks, in Proton, for free. Valve pays for that because developers can't/won't.
Even if you fund a Linux port, it basically ends up abandoned because there is no money to maintain it - updates will only go to Windows. There are even games whose DLC doesn't work in their Linux-native versions.
I suppose you would then statically link all gpu drivers into your graphical application? And whenever a new gpu or a new driver is released, you will recompile your application. In case a new driver drops support for older cards, you would then include multiple versions of the driver in your binary. Naturally your binary will be several hundred megabytes large. An interesting idea. I was not aware that proprietary drivers even allowed this.
Graphics drivers are one of the rare pain points with static linking, if not the only one. The reason is that drivers in general belong in the kernel, but GPU drivers run in userspace. Proprietary blobs, fast-evolving hardware and, as a result, fast-changing APIs are to blame.
On the other hand, the high-level GPU APIs are pretty stable and in general backwards compatible. A program from the year 2000 that only links against OpenGL would still run today (with x86 libs, obviously). So statically linking as much as possible still has value.
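To make the driver exception concrete, here is a rough sketch; it assumes a Linux box with the conventional libGL.so.1, only probes the library, and does not create a GL context. The point is that the GL stack is resolved at runtime and in turn loads the vendor's userspace driver, which is exactly the part you can't freeze into your own binary:

    /* Probe the GL dispatcher at runtime. Assumes a Linux system with the
       conventional libGL.so.1; may need `-ldl` on older glibc. This does
       not create a GL context or render anything. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!gl) {
            fprintf(stderr, "no GL dispatcher found: %s\n", dlerror());
            return 1;
        }

        /* Driver entry points are resolved at runtime via
           glXGetProcAddress, not at (static) link time. */
        void *resolver = dlsym(gl, "glXGetProcAddress");
        printf("libGL loaded; runtime resolver %s\n",
               resolver ? "available" : "missing");

        dlclose(gl);
        return 0;
    }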
Any distros following this philosophy? Apart from solving the constantly-breaking-packages problem, it sounds like it would also have the fastest installs, at the cost of increased disk usage - though perhaps filesystem-level (block-level) deduplication could offset that?
I've always preferred Windows binaries to Linux ones for closed-source stuff. If you're not going to open-source your thing, then don't even bother developing Linux binaries; we'll just run it in Wine.