I think the premise falls apart when there still doesn't exist a core set of frameworks that are ABI-stable on Linux. On competing platforms there are way more frameworks out of the box (CoreImage, CoreAudio, CoreML, SceneKit, AppKit, etc) and they don't break as often.
I know in Linux they have fun things like snap and flatpak but it is really solving the problem using a bit of infrastructure and package management instead of doing it in the frameworks and programming languages themselves (which are what you are asking people to write apps in).
The Linux enthusiast community actively fights against anything like this because they want everything to be modular and made to fit specific applications.
Linux does have a de facto set of standards, they're not quite as stable because they change them and deprecate old stuff, but it's better than it looks, and with Snaps you can at least partially solve the issue.
But people choose distros that don't have those standards, and then you lose half your potential users if you don't support all the niche configurations.
The so-called "Linux enthusiast community" is better described as the Linux corporate enterprise community. Understanding this makes your comment make a lot more sense. The "specific applications" are in fact merely the priorities of the giant corporations who fund the overwhelming majority of Linux development for the purpose of accumulating profit.
That's certainly not my experience. Practically every rant about systemd includes the idea that evil corporations are trying to force it on everyone rather than letting them use some random login daemon whose last commit was in 2008.
Corporate interests are generally biased in favor of avoiding hyper-specific modifications that mess with their own economies of development scaling, not seeking them out.
The corporate enterprise community is who appears to create most of the attempts at user friendly, one size fits all, stable standards, because.... that's what seems to sell, and what saves development budgets not having to support lots of different OSes, as Windows has shown.
Many of the non corporate hobbyists are fine with everything needing tweaking and maintenance, they chose Linux specifically because they want to tweak stuff.
Not sure what point you're trying to make but the "non corporate hobbyists" are ineffective to the point of irrelevance when it comes to Linux core development. Everything they do is downstream from the influence of giant corporations.
>Win32 (via Wine or Proton) is the most stable target for Linux right now
Tangential, Winamp 2.xx from the '90s runs and plays MP3s just fine on Windows 11 today. There are better apps for that today, but I still use it because nostalgia really whips the llama's ass.
Pretty wild that the same thing is not the norm in other OSs.
Even wilder is that I still have my installed copy of Unreal Tournament 99 from my childhood PC copied over to my current Win 11 machine, and guess what, it just works out of the box, 3D graphics, sound, everything. That's nearly 25 years of backwards compatibility at this point.
It really is mindblowing that Windows 11 is still capable of running 32-bit programs written for Windows 95, that's 28~29 years of backwards compatibility and environmental stability.
If we look back to programs written for Windows NT 3.1, released in 1993, and assume they run on Windows 11 (because why not?) then that's 30 years of backwards compatibility.
Did I say mindblowing? It's downright mythological what Microsoft achieves and continues to do.
There's no guarantee that all the older apps from the Windows 9x/XP days will work today, as some apps back then, especially games, made use of non-public/undocumented APIs or just straight up hacked the OS with various hooks for the sake of performance optimizations. Those are the apps guaranteed not to work today even if you turn on compatibility mode.
Personally I've had little luck with even running XP applications on Windows 7. More generally, going by the difficulties experienced by many companies and organizations in the transition from XP->7, it's hardly an isolated problem.
Perhaps Windows maintains the best backwards compatibility of any mainstream OS, however I would hardly describe it as "mythological".
The most fascinating example of that is SimCity: they noticed it didn't run under Windows 95, as Windows 95 reused freed memory pages but SimCity did a lot of use-after-free, which Win95 would have aborted on. Microsoft developers, however, knew that people would blame Microsoft, not Maxis, and thus added an extra routine in the memory manager which detected SimCity and then didn't reuse memory as much.
I don't want to estimate how many such hacks they accumulated over time to keep things as compatible as they could.
Linux can do this; binaries from the 90s work today.
Something like xv (last release: 1994, although the binaries were built against Red Hat 5.2 from 1998) still works today, and the source still builds with one very minor patch, last time I tried it.
And Windows has exactly the same problem, but the tradition is to ship these things with the application rather than just assume they're present on the system. And you can "fix" it by getting old versions, or even:
You'll probably run into trouble with PNG and JPEG files, but e.g. loading/saving GIF and whatnot works fine. Note how libc and libX* work out of the box.
tl;dr: much of the "Windows compatibility" is just binaries shipping with all or most dependencies.
Much of the Windows compatibility is "just" stable API for Windows controls, GUI event handling loops, 3D graphics and sound (DirectX). Linux has stable API for files and sockets (POSIX), but that's all.
And I am saying you don't need to rely on any of that. You can just ship it yourself (statically link, or use LD_LIBRARY_PATH). That's what Windows applications that rely on GTK or Qt do as well, and it works fine; the same approach works fine on Linux too. The basics (libc, libX*, etc.) are stable, and the Linux kernel is stable.
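For a rough idea, here is a minimal launcher sketch (the `mygame.bin` name and the `lib/` layout are made up for illustration) that points LD_LIBRARY_PATH at libraries shipped next to the executable before exec'ing the real binary:

```c
/* Hypothetical launcher: resolve the directory of this executable, point
 * LD_LIBRARY_PATH at a bundled "lib/" directory next to it, then exec the
 * real program so the dynamic loader picks up the shipped .so files.
 * (A real launcher would also append any pre-existing LD_LIBRARY_PATH.) */
#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char self[PATH_MAX];
    ssize_t n = readlink("/proc/self/exe", self, sizeof(self) - 1);
    if (n < 0) { perror("readlink"); return 1; }
    self[n] = '\0';

    char *dir = dirname(self);            /* directory containing the launcher */

    char libdir[PATH_MAX];
    snprintf(libdir, sizeof(libdir), "%s/lib", dir);
    setenv("LD_LIBRARY_PATH", libdir, 1); /* the exec'd child searches here first */

    char target[PATH_MAX];
    snprintf(target, sizeof(target), "%s/mygame.bin", dir); /* hypothetical real binary */
    execv(target, argv);
    perror("execv");
    return 1;
}
```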
And this is what Windows does too really, with MSVC and dotnet and whatnot redistributables. It's just that these things are typically included in the application if you need it.
It's really not that different aside from "Python vs. Ruby"-type differences, which are meaningful differences, but also aren't all that important.
Stop spreading FUD; X and OpenGL have maintained stable ABIs. There is Wayland now, but even that comes with Xwayland to maintain compat.
Sound is a bit more rocky but there are compatibility shims for OSS and alsa for newer audio architectures.
Stop claiming that I'm spreading FUD and show me at least one Linux app that was compiled to binary code in 1996 where exactly that binary still runs under a modern Linux desktop environment and has a visual style similar to the rest of the built-in apps.
Got no counterexamples? Then it's not FUD at all, rather a pure truth.
> It's downright mythological what Microsoft achieves and continues to do.
This seems like it was meant in a positive way, but I really don't think that if compatibility with your system requires "mythological" efforts, that should be seen as a good thing for your system.
It's also worth noting that backwards ABI compatibility only matters when people limit their software by not distributing the source. Early UNIX software can run fine on modern GNU by just compiling it.
> It's also worth noting that backwards ABI compatibility only matters when people limit their software by not distributing the source. Early UNIX software can run fine on modern GNU by just compiling it.
Have you ever tried building decades old programs from source? It's not as easy as you claim.
Here's source for grep from v6 unix. I'd be interested to know the smallest set of changes (or flags to gcc) needed to get it to compile under gcc on linux and work.
Note that this is not a fair comparison since that code is almost 50 years old, predating ANSI C and before even the 8088 existed.
Still, there is only one actual error in gcc 13.2.1 which is the use of =| instead of the later standardized |=. I'm not sure if that was a common thing back then or if it was specific to their C compiler. Either way, I don't think gcc has a switch to make it work. Switching that around in the source gives linker errors since it seems back then the way to print to or flush a particular file descriptor was to set a libc variable and then call flush or printf. If you had the right libc to use with it that one tiny change might be all you need. But you would likely need to set up a cross compile to be able to use the right libc.
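For illustration, a tiny hypothetical sketch of that one change (the flag value is made up, just to show the spelling difference):

```c
int main(void)
{
    int flags = 0;
    /* flags =| 01;     pre-ANSI (V6-era) compound assignment: modern gcc rejects it */
    flags |= 01;      /* the later-standardized spelling that gcc accepts */
    return flags;
}
```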
My understanding is that most of the compatibility issues on Linux are due to not having the right libraries rather than the kernel not supporting older system calls. It is just a lot of not-that-fun work to keep things working and no one is that interested (instead, some people just use decade-old versions of Linux :/ and the rest use package systems to recompile stuff). NetBSD had better practical binary compatibility for a long time, although I think some of it was removed fairly recently since there isn't much commercial NetBSD software (there was one Lisp binary from 1992ish IIRC that some people were still using, and I think that compat was kept).
Thanks for the details, I did give compiling it a try and saw some of the things you mention, but my knowledge of C, pre-standards C, and libc wasn't enough to fully make sense of them. I'll agree it looked better than I expected; I've seen much worse cases of programs only a decade old not compiling/working (one in Haskell and another involving C++ and Java, although they were much larger programs).
But I don't think it was unfair to use that as an example of early Unix software, or to point out that it was harder than "just compiling it". One defense against my argument could be that a version of 'grep' has been maintained through C versions and operating systems, with source code availability playing a part in that (although presumably GNU grep avoided using Unix source code).
My extremely limited experience with Java is similar; it seems to do much worse than C at compatibility over time. Of course you are right that it is a fair example of early Unix code, I just didn't read the comment you were actually replying to carefully enough :(. Pre-standard C has more issues, so I don't think that is fair vs Windows, but that is not what you were replying to. Source availability gives some additional options that you don't have with only binaries, but I think you are right that without it being open source the use is limited (and potentially negative, like how the BSDs were limited by the USL lawsuit in the early 90s). Useful open source software is likely to be maintained at least to the point of compiling, though it may take a while to break enough for someone to bother (sox is in this middle stage right now, with package systems applying a few security patches but no central updated repository that I know of).
What is? An immense amount of resources (developers) poured into developing live patches to make applications work on each newer version of Windows (or helping the application developers fix their applications). It's an interesting conceptual grey area - I don't consider it backward compatibility in a strict sense.
This is documented in the book "The Old New Thing" by Raymond Chen (it's also possible to read the blog, but the book gives an organic view).
It's fascinating how far-sighted Microsoft was; this approach, clearly very expensive, has been fundamental in making Windows the dominant O/S (for desktop computers).
It's because Microsoft understands and respects that computers and operating systems exist to let the user achieve things.
The user ultimately doesn't care if his computer is an x86 or an ARM or a RISC-V, or if it's running Windows or Mac or Linux or Android. What the user cares about is running Winamp to whip some llama's ass, or more likely opening Excel to get work done or fire up his favorite games to have fun.
Microsoft respects that, and so strives to make sure Windows is the stepping stone users can (and thusly will) use to get whatever it is they want to do done.
This is fundamentally different to MacOS, where Apple clearly dictates what users can and cannot do. This is fundamentally different to FOSS, where the goal is using FOSS and not what FOSS can be used for.
It's all simple and obvious in hindsight, but sometimes it's the easiest things that are also the hardest.
It's amazing how people don't want Linux to "Be like Windows"... but as far as I'm concerned windows is close to ideal, just with a few flaws and places where FOSS can do better...
This has very severe drawbacks, so it's not unambiguously desirable.
Windows APIs are probably a mess because of this (also ignoring the fact that only a company with extremely deep pockets can afford this approach). There is at least one extreme case where Windows had to keep a bug, because a certain program relied on it and couldn't be made to work otherwise.
Sure, from a user perspective, but not from an operative perspective: in the cases of live binary patching, Microsoft was required to call the application developer to be legally in the clear; in other cases, APIs behave differently based on the executable being run. There's a lot more to it than just keeping the API stable.
I get that my initial comment was a bit of a throwaway, but I can unpack it a bit. I think it’s a mistake to regard a working backward compatibility functionality as deficient because it requires maintenance and the cooperation of the parties involved. That’s just… engineering, right?
In a world where security flaws are so common, I'm not sure I want to run old software outside a virtual machine.
I also wish I could agree that Win32 is a stable target on Linux; it may run old software, but in my experience it is often quirky. It's usually a better use of my time to just boot Windows than to figure out how to get software to run under Wine.
That core would be GNOME or KDE frameworks, coupled with the FreeDesktop standards, at least that was the plan about 20 years ago.
However as the site says doing distributions is what most folks keep doing, and naturally there isn't a single stack that can keep up with snowflake distributions.
In the end, Google took the Linux kernel, placed two core sets of frameworks on top, one in Java and the other in JavaScript, and naturally those are the winning Linux distributions for regular consumers.
Which is the entire point. Having working software beats having a standard. (Naturally having working software that adheres to a standard is even better, but at least they got their priorities right.)
That's great, but then don't complain when your community creates more distros? That's the thing they are good at, because they have been given great tools to do so.
Name one distro that ships with Chrome out of the box? I don't even think Ubuntu comes with a Chromium browser. That means devs can't write apps against it and just hand out exes like they do on Mac and Windows.
I agree. With how many resources are invested into Android to offer an OS for consumers, desktop Linux distros using the freedesktop stack are missing out. It is not easy for distros to acknowledge the sunk cost.
You are talking about Android and Chrome OS right?
I agree, those are the top two Linux distributions, and everything else is behind by an absolutely huge margin
Yep. Just browse their documentation; that is the kind of development experience GNOME and KDE were expected to provide 20 years ago, yet fall short of due to Linux distribution fragmentation, the devs using other environments, or being stuck with plain window manager and xterm workflows.
If you mean Red-Hat in regards to GNOME, they have long moved away from Desktop Linux for consumers, as there is no money to be made there.
The Slashdot posts on the matter from those days are quite easy to find.
GNOME, like CDE in its day, is good enough for corporate users to connect into Linux servers.
GNOME today is also not the same as GNOME 20 years ago: it got rebooted multiple times with incompatible code, Glade was dropped and now people are expected to write their GUIs in code or in manual XML (yes, I am aware of the web-based ongoing replacement, what a broken idea for a native desktop), and there are plenty of other issues that make GNOME in 2023 even less attractive than 20 years ago.
NeXT under Steve Jobs had a maximum of 500 people working with him.
There have been far more efforts than that poured into desktop Linux.
Yet NeXT delivered a mostly coherent programming platform, back in the 1990s.
One thing NeXT didn't do is to introduce shitloads of useless complexity with fragmentation for the sake of it and half assed solutions on top of that.
NeXT had Steve Jobs. And I don't mean a man of his calibre, I mean a person saying yes and no to features. This is sorely lacking in the open-source world, where they have a total aversion to any kind of structure and oversight.
Culprits are mostly the glibc devs with their manic abuse of version names (and very recently, a GENIUS who added a new ELF relocation type): this is a pain for game developers providing binaries which have to span a reasonable set of distros over time. Basic game devs install one of the latest mainstream and massive distros, build there, and throw the binaries on Steam... but that recent distro had glibc 2.36, and now their binaries have version names requiring at least glibc 2.36 (often, rarely not). They have to force the right version names with the binutils gas symver directive (see the binutils manual) until all their version names are compatible with a reasonably "old" glibc (I would go for a reasonable 5 years... maybe 7/8 years?):
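Something along these lines, as a hypothetical sketch (the symbol and version node are examples; GLIBC_2.2.5 is the x86-64 baseline node, and every versioned symbol your binary actually pulls in needs its own directive):

```c
/* Hypothetical sketch: pin this translation unit's memcpy reference to an
 * old glibc version node so the produced binary doesn't end up demanding a
 * newer glibc than intended.  Check `objdump -T` on your binary to see
 * which symbols actually need pinning. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <string.h>

void copy_block(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);   /* now resolves against the old version node */
}
```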
Of course normal game devs have not the slightest idea of those issues, and even so, they won't do it because it is a pain, and that for 1% of their market. Not to mention they had better statically link libgcc (and libstdc++ if they use C++) to avoid the ABI issues of those abominations (the word is fair) which have been plaguing game binaries for TEN F.... YEARS! (It is getting better, as more and more game binaries default to the -static-libgcc and, if C++, -static-libstdc++ gcc/clang options.)
There is light at the end of the tunnel though as godot engine is providing build containers which are very careful of all that (unity seems clean there too, dunno for UT5.x though).
But you have engines really not ready, for instance electron based games: you don't have the right version of the GTK+ toolkit installed on your enlightenment/Qt/raw X11/wayland/etc distro? Nah, won't run. And properly packaging a full Google Blink engine for binary distribution targeting a wide spectrum of elf/linux distros? Yeah... good luck with that.
elf/linux is hostile to anything binary-only, due to manic ABI breakage all over the board, all the time (acute in the SDK and core libs).
If it is already that hard for game binaries, good luck with apps. I can hear their devs saying already: "we don't care, just use and install Microsoft SUSE GNU/Linux, everything else is unsupported"... I think you get the picture of where all that is going.
I agree. I recall a talk from Linus Torvalds on how badly the glibc folks break things and why he won't ship a binary version of his scuba tool. If the binary breakages start with the darn C lib, you're gonna have recurring problems all the way up the stack on a regular basis I feel.
> If the binary breakages start with the darn C lib, you're gonna have recurring problems all the way up the stack on a regular basis I feel.
In general glibc maintains an extremely stable ABI[1]. Forwards compatibility in both glibc and libstdc++ is something many companies depend on for their mission critical applications, and it's the entire reason Red Hat pays developers to maintain those projects.
This has been the case for 10 years with games on Steam. The worst being libstdc++ ABI issues, still around because many devs forget to statically link their C++ libs with -static-libstdc++. Because on Windows, ABI stability is really good, and devs are used to that.
?? On Windows you always have to ship the libc/libc++ along with your app, as part of the VS redistributables. That's pretty much the same as static linking; the symbols are just not in the same binary.
The Windows standard C and C++ ABI has been stable since 2017. So the last 3 releases of the Visual C compiler and C runtime haven't changed the ABI.
However Windows also has a lower-level system API / ABI, i.e. Win32, that has stayed stable all the way back to Win 95. WinSxS sometimes helps for certain legacy apps too. This allows apps using different C libraries to work together. Win32 contains everything about the OS interface: creating windows, drawing things, basic text, allocating pages from the OS, querying users, getting info about files is all stable. Win32 sits below the C library. There is also a better separation of the functions into different DLLs: Win32 has multiple DLLs with different, mostly orthogonal jobs. Kernel32 contains the core, almost-system-call stuff; User32 handles creating windows, message boxes and other UI; Shell32 covers interactions with the OS shell (ShellExecute, file associations, shell folders).
The surrounding libraries like DirectX / DirectPlay / DirectWrite that games also use are part of the system libs too, starting with Windows 7. They are also stable.
In the Linux world there is no separation between system-level libraries and normal user libraries. Glibc is just another library that one depends on. It contains the entire lower-level interface to the kernel, all the network stuff and the POSIX user-access stuff. It also contains a component that has no business being in a C library: the dynamic executable loader. Unlike on Windows, on Unix systems the C library is at the lowest level, and glibc being the default libc for Linux and making itself the default provider of the dynamic executable loader makes a stable ABI almost impossible. Since other libraries depend on glibc and executables depend on glibc's dynamic loader, everything is affected by the domino effect.
The ELF loader should be extracted from glibc... but there is a price to pay, and that is where those guys could do their manic ABI breaking again.
Some very low-level interfaces would have to be defined between the external ELF loader and the POSIX/C runtime. For the moment those interfaces are private; just look at how intimate they are about threading and TLS.
Windows solves this issue by not making language runtime part of the OS libraries in a way that pollutes other libraries.
That means that you can have your application linked with library A version X, and load another library that is linked with library A version Y, and so long as certain practices are followed, everything works because you're not getting cross-contamination of symbols.
Meanwhile on Linux the defaults are quite different, and I can't load an OpenGL ICD driver that depends on a GLIBC_2.38 symbol in an application that was loaded with glibc 2.37. What's more, a lot of APIs will use malloc() and free() instead of a language-independent allocator, unless the allocation comes from the kernel. And no, you can't mix and match those.
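The usual C convention that sidesteps the malloc()/free() half of this, on any platform, is for a library to export a matching free function so the caller never releases memory with a different runtime's allocator. A minimal sketch with hypothetical names:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical library API: the library both allocates and frees its own
 * buffers, so the caller never calls free() from a different runtime on
 * memory that this module's malloc() handed out. */
typedef struct mylib_buf { char *data; size_t len; } mylib_buf;

mylib_buf *mylib_buf_new(const char *text)
{
    mylib_buf *b = malloc(sizeof *b);        /* this module's allocator */
    if (!b) return NULL;
    b->len = strlen(text);
    b->data = malloc(b->len + 1);
    if (!b->data) { free(b); return NULL; }
    memcpy(b->data, text, b->len + 1);
    return b;
}

void mylib_buf_free(mylib_buf *b)            /* paired free, same runtime */
{
    if (!b) return;
    free(b->data);
    free(b);
}
```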
This what the "pressure-vessel" container from collabora (used by valve on dota2/cs2), is trying to solve... but now the issue is the container itself as it does _NOT_ follow fully the elf loading rules for all elf binaries and does ignore many configuration parameters of many software packages (data files location, pertinent environment variables, etc), basically presuming "ubuntu" to be there.
Basically, I have my Vulkan driver requiring fstat from 2.33, but the "pressure-vessel" (sniper version) has a glibc 2.31; it then partially parses my global ELF loading configuration and the configuration of some packages to import that driver and all its dependencies... including my glibc ELF loader... ooof! That level of convolution will have a very high price all over the board.
I often have arguments with one of the "pressure-vessel" devs because of some shortcuts they took which break my distro (which is really basic and very vanilla, but not "ubuntu"). It took weeks to get the fixes in (I guess all of them are in... until the glibc devs manage to do something that will wreak havoc on "pressure-vessel"). Oh... that makes me think I need to warn them about that super new and recent ELF relocation type they will have to parse and detect in host drivers...
On my side, I am investigating an excruciatingly simple modern file format which should help a lot in fighting those issues. There will be technical trade-offs obviously (but it would run-ish on "old" kernels anyway, with ELF "capsules" and an orthogonal runtime). Like JSON is to XML, but for ELF/PE.
It seems only the Linux ABI can be trusted... as long as Linus T. is able to hold the line.
The ABI is not stable. Google has to do extra work monitoring for ABI breakages to make sure that pushing out an update of an LTS branch of the kernel does not break people's drivers.
The module ABI is not stable, but that's not the ABI typically referred to when someone says "Linux ABI". That would be the userland ABI, which is stable.
The OS-provided memory allocators: HeapAlloc; its wrappers GlobalAlloc and LocalAlloc (remnants of the 16-bit era); VirtualAlloc (page granularity, comparable to mmap() with MAP_ANONYMOUS); and CoTaskMemAlloc, which can be shared across COM processes - plus their respective "free" functions. malloc() is explicitly called out in the documentation as "runtime dependent".
Similarly other OSes used to have memory allocation services that weren't linked in any way to a language runtime - VMS has several calls, all language independent, ranging from low-level equivalents of mmap() to malloc-alternative.
And yes, a big chunk of Windows portability is that various APIs use those language-independent calls internally, and so can developers in order to avoid creating issues - and the documentation promotes those language-independent methods.
At no point with the Windows API do you get into a situation where you call free() on memory allocated by malloc() from a different libc.
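For reference, a small Windows-only sketch of what that looks like at the OS level (the allocation path involves no C runtime allocator; printf is only there for output):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Allocate from the default process heap: no C runtime involved, so any
     * module can free the block with HeapFree on the same heap handle. */
    HANDLE heap = GetProcessHeap();
    char *buf = HeapAlloc(heap, HEAP_ZERO_MEMORY, 256);
    if (!buf) return 1;

    /* Reserve and commit whole pages directly from the OS, roughly the
     * counterpart of mmap() with MAP_ANONYMOUS. */
    void *pages = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

    printf("heap block %p, page block %p\n", (void *)buf, pages);

    VirtualFree(pages, 0, MEM_RELEASE);
    HeapFree(heap, 0, buf);
    return 0;
}
```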
In comparison, the available language-independent APIs on Linux, without using any special libraries, are to directly call mmap() and sbrk() through inline assembly (beware the glibc wrappers!).
HeapAlloc and VirtualAlloc do not bring the whole language runtime with them, like glibc does, nor are they specified as part of a specific language runtime only (the Unix primitives for allocating memory are sbrk() and mmap(), not malloc()).
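A minimal sketch of the Linux-side language-independent path, using only the kernel's anonymous-mapping primitive (shown via the libc wrapper for brevity; the raw syscall avoids even that):

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the kernel directly for anonymous, zero-filled pages; no
     * language runtime or heap implementation is involved. */
    size_t len = 4096;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "allocated without malloc");
    puts((char *)p);

    munmap(p, len);
    return 0;
}
```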
And with how applications are linked by default on Linux, whoever loads glibc first sets the stage for every other module in the program, which makes for "fun" errors when you get a library that happens to be compiled against a newer (or older) version. And even if you force Windows-style linking (good luck, lots of infrastructure needed), you end up dealing with modules passing each other malloc()ed blocks and possibly trying to call free() from a different allocator on them.
Anyway, for all practical purposes, glibc has no stable ABI at all.
I suppose they meant a "libc independent" allocator, e.g. jemalloc/tcmalloc/mimalloc/etc. Although, using such allocators comes with complications of their own: https://lwn.net/Articles/761502/
I think there are very few C++ ABI versions on Windows, and it is easy to select the one you want as long as it is installed; they can be there side by side.
If they're going to put the game on Steam then they should be using the Steam Runtime which is available here[1]. Otherwise they're shooting themselves in the foot.
> If it is already that hard for game binaries, good luck with apps.
Video games are some of the most complex and most difficult pieces of software both to build and to get right. Modern games are coded so poorly that driver makers have to release patches for specific games.
It's the gamedev world that can't get its shit together. Valve settled on an Arch-based distro for the Deck. The specific distro doesn't matter since Steam already establishes its system requirements and comes with a runtime.
Beyond that, I really don't see the issues you're talking about. Generally, any issues you have are fixed by symlinking the correct .so files. This is a side effect of targeting a specific version of software instead of keeping to its more base features. That's on the dev, not the user or distro.
You act like Windows has never had ABI breakage or versioning problems. I'd like to see the specific issues you ran into; maybe there's an easy fix like a symlink.
> Video games are some of the most complex and most difficult pieces of software both to build and to get right.
Actually as far as binary distribution goes, video games are some of the easier to build software as they tend to need limited operating system integration besides creating a (possibly fullscreen) window, getting user input, driving a graphics card to display the result and output audio somewhere. Only completely headless programs like command-line tools or servers have it easier. Contrary to the popular meme, creating Linux binaries for games which will run on all (current) distros and will keep running in the future is not rocket science. Not completely trivial, but something that any competent developer should be able to manage.
Where things get really hard is desktop applications which users will expect to have a much tighter integration with the rest of the environment. Neither Qt nor GTK have a long-term stable ABI and shipping them with your program just means you have a million more unstable ABIs you depend on.
> But you have engines really not ready, for instance electron based games: you don't have the right version of the GTK+ toolkit installed on your enlightenment/Qt/raw X11/wayland/etc distro? Nah, won't run.
Hm, not sure what you mean here.
At least with the nw.js toolkit (from which Electron was forked IIRC) I've never gotten a report of a distro where it would refuse to run because of an incompatible GTK version.
More reasonably, binary GUI apps should not expect more than the window system; on elf/linux that means Wayland with a legacy fallback to X11. In other words, the GFX toolkit is an application choice on top of the windowing system. Binary GUI apps have to distribute it... they have to distribute their own version which will not conflict with the version installed on the user's system... if any...
> Culprits are mostly the glibc devs with their manic abuse of version names (and very recently, a GENIUS who added a new ELF relocation type): this is a pain for game developers providing binaries which have to span a reasonable set of distros over time. Basic game devs install one of the latest mainstream and massive distros, build there, and throw the binaries on Steam... but that recent distro had glibc 2.36, and now their binaries have version names requiring at least glibc 2.36 (often, rarely not). They have to force the right version names with the binutils gas symver directive (see the binutils manual) until all their version names are compatible with a reasonably "old" glibc (I would go for a reasonable 5 years... maybe 7/8 years?):
Or ... just build against the oldest glibc they want to support.
Distributing source code is not an obstacle to selling games as for all but the most trivial games the meat is in the assets and scripts. You can open-source game (engines) while continuing to sell the rest - which is what e.g. allows you to run Doom on any device imaginable.
If I could have that spread of frameworks that macOS offers on Linux, I’d be targeting Linux with my side projects yesterday. Having such a wide selection of tools to readily reach for with zero consideration about how long it’ll be supported, which fork is best, how well it meshes with a pile of other third party libraries, etc is amazing. It reduces friction massively and you just build stuff.
The KDE Qt ecosystem and its GNOME/GTK analogue are closest but still aren’t quite there.
> If I could have that spread of frameworks that macOS offers on Linux, I’d be targeting Linux with my side projects yesterday.
I doubt it. More likely, you would be complaining that you need to spend effort rewriting your code to use Linux frameworks for a tiny userbase.
That's why most cross-platform software ends up using OS frameworks only where necessary and ships its own version of most stuff. Coincidentally, that approach matches which stable ABIs are available on Linux.
Yep, have been aware of it for a long time. It’s great in concept but as far as I’m aware a good deal behind current macOS — last I knew it was compatible with OS X 10.6 (released 2009) with no support for Swift or for any of the advancements in Objective-C made since then.
I’m also not sure how well GNUStep apps would fit into a modern GTK or Qt-based desktop, e.g. if they’d theme controls to match.
Stuff like Flatpak and Snap exists because the framework side kinda is built-out. Isolation technology had matured, newer desktops emphasized per-window security and needed APIs to portal in and out of each instance. The desktops needed a packaging/infrastructure solution to tie that together and make it presentable to the user.
Yet I still can't compile an app on some arbitrary release of some arbitrary distro and just run the darn exe on another and be 100% sure it will work.
On other platforms people don't even try to support anything but one "distro".
You could make an AppImage or Snap that would work across pretty much any mainstream non-hobbyist-oriented distro, and Snaps/AppImages are pretty much the Linux equivalent of an EXE.
Raspberry Pi OS just moved to NetworkManager and they have PipeWire, which was the last reason I had to deal with less common software stacks, so it seems like stuff is getting more standardized.
AppImages are just a fancy self-mounting disk image. If you do it right and include all required contents then it will work ~anywhere but you can just as well do that in a .tar.{gz,xz,zstd}/whatever archive.
I installed VSCode as a snap on my Fedora install and it worked fine as far as I could tell. This was in Mar 2021, and I chose the snap because code.visualstudio.com/docs/setup/linux said that "Due to the manual signing process and the system we use to publish, the yum repo may lag behind and not get the latest version of VS Code immediately. . . . Updates are automatic and run in the background for the Snap package."
(I don't have an opinion on Snap as compared to Flatpak or AppImage or none of the 3.)
+1 AppImages. And the beauty is you can make them portable by creating a folder with the same name as the AppImage and appending .home to the end, and boom: portable software. For example:
image-editor.AppImage
image-editor.AppImage.home (folder, all settings are stored there)
On a large enough timescale I don't think you can reasonably expect this on any of the big 3 OSes. From a less macro perspective, I think tools like AppImage and Flatpak will fill that role.
On a long enough time scale we are all dead, and Linux doesn't exist. That doesn't mean that in the decades that computer software has to be useful to actual people that things have to suck this badly right now.
The "actual people" using Linux on the desktop are not running .so files off the internet. I get what you're saying, but you know it's facetious to pretend that packaging is simple on Mac and Windows too.
Packaging is fairly simple on Mac and Windows if you use the dev tools for the platform and are not doing things like installing services or drivers or changing OS configuration.
Your packaged app will almost always work too. There are not 50 distributions of these OSes.
The trouble is you're trying to distribute it as a binary yourself. There are two traditional ways of distributing software for Linux:
1) The system package manager. It will download a binary from the repository which is the right one for that system.
2) make && make install. This is mostly for software in development that hasn't made it into the package manager yet. It will compile from source and produce a binary for the target system.
All the problems are from people trying to do something other than this.
That's the problem. The Linux way of distributing apps is wrong for trying to compete with the other consumer platforms. The ChromeOS or electron style of doing things is another story though.
If you have a stable widely used application suitable for being installed by unsophisticated consumers, have the distributions put it in their package managers. This is hardly any different than consumer mobile platforms that require you to use an app store, except that it isn't actually required, just the thing you ought to do absent a good reason to do otherwise.
If you're distributing something sufficiently esoteric or experimental that the package managers won't touch it, your correspondingly sophisticated or adventurous users can compile it from source.
We don't need some malware-facilitating norm that encourages users to install opaque binaries from random websites.
Each distribution is a different operating system. You cannot package the "same" app for Windows and mac, so why should you be able to package the same application for Debian and Arch, which, even though they run a lot of the same code, have different underlying layers and assumptions?
And on a lower level, `fgetwc()` deliberately aborts in `libc` when applied to a stream created with `fopencookie()`. It seems incredible, but Linux `libc` is not Unicode-capable in 2023.
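A minimal sketch of the combination, if you want to try it against your own glibc (the read callback is a trivial stand-in):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <wchar.h>
#include <locale.h>

/* Trivial read callback that serves a short ASCII string once. */
static ssize_t my_read(void *cookie, char *buf, size_t size)
{
    static const char src[] = "hello";
    static size_t off;
    size_t n = sizeof(src) - 1 - off;
    if (n > size) n = size;
    memcpy(buf, src + off, n);
    off += n;
    return (ssize_t)n;
}

int main(void)
{
    setlocale(LC_ALL, "");                       /* pick up the user's locale */
    cookie_io_functions_t io = { .read = my_read };
    FILE *f = fopencookie(NULL, "r", io);        /* custom cookie-backed stream */
    if (!f) return 1;

    /* Per the claim above, the wide-character read on this custom stream
     * aborts inside glibc instead of returning a character or WEOF. */
    wint_t wc = fgetwc(f);
    if (wc != WEOF)
        wprintf(L"got %lc\n", wc);
    fclose(f);
    return 0;
}
```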
Ensuring an app looks and feels the same across various distributions seems quite challenging when it’s not only different flavours of the OS but also different desktop environments.
At the same time, the OS flavours don’t seem to offer a unified way for handling payments, subscriptions and in-app purchases which is a significant burden to implement from scratch by every app developer.
Because Linux distributions are used all over the world. Stripe and PayPal are good, but insufficient as a choice - one can't be expected to pick SDKs that work equally well in all regions, preferring local payment options, registering with local tax authorities, etc.
Also, Stripe and PayPal don't offer tools to check if an app is running with a valid license/subscription, it's not pirated etc. (the equivalent of the App Store receipt signature).
The "feel" of the app is identical across distributions because the app controls everything inside its window and the look only varies slightly in terms of colorscheme and window decoration if that.
There is absolutely no legit purpose for in app purchases on a desktop OS. It's not hard to pay for substantial software suites, and most of what would be an app on platforms that at one point had an anemic browser experience is simply a website on platforms where fast CPUs and 14-28" screens are normal.
Nobody needs a bunch of adware apps you can pay $3 to decrapify or games that are a slog if you don't buy fake potions for real money.
There are enough good free basic apps for virtually any use case, and the more complex use cases need up-front investment from users, not in app purchases.
You can't stop people from making crap apps, they exist even today.
> The "feel" of the app is identical across distributions because the app controls everything inside its window
To make an app feel at home, it needs to "blend" with the rest of the OS. To achieve this, one can use OS-components to build an interface. Just a simple example, GTK apps are so different from KDE apps in their look and feel - you can always tell if an app is native GNOME or KDE app. Now consider all the other desktop environments - it's just way too many to account for in one's code and testing.
Then comes the question of system integrations - how do you offer a unified photo picker experience, how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs - all these should be provided from the OS so they don't confuse the user and prevent abuse by naughty apps.
> no legit purpose for in app purchases on a desktop OS
I strongly disagree - what if you want to offer a "try before you buy", or you'd allow users to purchase additional content/credits for a service bundled with your app? What if you want to adapt your pricing for a specific event or holiday period...etc, the sheer scope of possibilities is not practical to include in one comment.
> how do you offer a unified photo picker experience, how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs
TLDR: Release GTK/Qt apps with menubars and xdg-desktop-portal on Flathub and use Stripe to implement any and all desirable business models. You can do this today and your app won't look any more out of place than the browsers and office suites. IAP are the STDs of business models.
> You can't stop people from making crap apps, they exist even today.
The particular crapware that exists on Android is notably absent from built in software management GUIs so it looks like you CAN do this.
> To make an app feel at home, it needs to "blend" with the rest of the OS
68% of desktops are GTK-based, and 26% are KDE, which themes GTK apps the same as KDE apps. Superficially, apps blend well. Looking deeper you will notice many similarities: many common shortcuts and the same common UI paradigms and idioms. Look a little closer and you'll note differences, especially GNOME with its client-side decorations which cram a toolbar + window controls into a single line with extra spacing, and Qt apps with the more traditional menubars and certain shortcuts in common. Then there are incredibly common apps that are obviously not fully consistent with any: Firefox, Chrome, GIMP, LibreOffice, Thunderbird.
What one ought to realize shortly is that there is no singular Linux desktop to blend into and it works fine as is.
If you make a GTK app with a traditional menubar you will look reasonably at home on virtually all desktops. At least as at home as most of the most popular apps listed above.
> Then comes the question of system integrations - how do you offer a unified photo picker experience
xdg-desktop-portal
> how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs
In native apps none of this is controlled at all. Don't install things you think might look at you through the camera and upload your nudes to the cloud. In flatpak the permission to do so is front loaded into the permissions required by the app. Don't install things that require camera and network permission if you think they might upload your nudes to the cloud. Flatseal provides a way to modify this permission after the fact if you want to install something and modify what permissions it gets.
> I strongly disagree - what if you want to offer a "try before you buy",
The most obvious thing to do is time- or feature-limit your usage and "unlock" your app by opening a URL, handling payment on your website with Stripe, and then having the user copy a code and/or open a link to communicate it to the app. This is relatively easy AND lets you keep 100% of the money, and no capricious app store rejection can keep your users from using your app. If for any reason you were ever to have a challenge distributing your work on the official source, both Flatpak and traditional package management have the concept of multiple sources. The customer adds your source, and your apps and official apps are displayed in the same integrated app store interface.
Flathub is supposed to introduce one-off paid apps/subscriptions. I'm not clear what the progress on that feature is. I do not believe there are any plans for in-app purchases however. Probably because 99.9% of the use case is dominated by porn, shitty games, and adware. Asking why non-gross environments don't implement IAP is like going to the whorehouse and shouting "where all the gonorrhea at!"
It is better from the user perspective if payments/subscriptions are either managed on the website for the service or, better yet, in a singular store interface where customers can make an intelligent decision, rather than having the dev lowball them and then ride the sunk-cost fallacy and FOMO to a fuckin' payday.
> On competing platforms there are way more frameworks out of the box (CoreImage, CoreAudio, CoreML, SceneKit, AppKit, etc) and they don't break as often.
I would not use macOS as an example of stability; each version breaks and deprecates a massive amount of API, and old apps will require changes.
Windows is the only OS with serious backward compatibility.
People like to shit on tools like Electron, but there's a reason they're popular. If you need to reach a broad audience with a native tool, using heavy-handed web-based plumbing is a bigger win for Linux users than supporting only Windows and macOS, where like 97% of desktop users are.
Hold on mate, isn't that what Java was supposed to solve? I remember before the days of Electron, when I was a wee lad in the 2000s, all cross-platform apps were Java.
Look at Ghidra, it's a Java app for Windows, Linux and Mac. The "holy trinity" of operating systems, covered with one language and framework.
So what happened? Did devs forget Java exists and feel like reinventing the wheel, but worse this time?
Java simply has a much higher barrier of entry. Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.
Electron isn't just cross platform, it is cross platform based on technologies (html, css and javascript) that also by a huge margin have the largest amount of developers available.
> Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.
What external dependencies does Java need that's not in the JDK itself? I have an app with Mac and Windows installers (and thus bundles JDKs); it also runs on Linux (via a fat jar). I tested it on Ubuntu, but for the life of me I couldn't figure out how to package it properly. It was more complicated than I cared to invest in at the time.
As for the barrier to entry, I feel the same way about the web. I find the JS and Web ecosystem to be utterly overwhelming, one reason I stuck with Java over something like Electron, and the installers/footprint of the app are smaller to boot.
For Linux, I'm using jpackage to package my Java software to .deb (x64 architecture) file. For all the other Linux variants, I've a .tgz file that contains the jar file, libraries and icons of the applications.
The problem I have with Linux is named at the end of the website: "Sharing your creation". It's pages and pages of documentation that is not relevant to the packaging of your application, where you can spend hours of work without finding what you want, or finding out that it doesn't work for you because, for example, it's not on GitHub.
Hopefully jpackage was able to fix it for the .deb format.
Instead of working on more documentation, working on better and easier to use packaging tool would help.
The JRE itself is an external dependency that you need to bundle because it is not part of most Linux distributions. And even if there is a JRE installed, it is not guaranteed to be able to run your Java application.
> What external dependencies does Java need that's not in the JDK itself?
I mean that it doesn't come with Java itself, but you as a developer need to pick a UI framework and not all of them actually work all that well cross platform or will get you an actual modern interface.
Edit: I should also note that the threshold for entry I am talking about is for people just generally starting out. There simply are way more resources available for web related development than there are for java.
Also, when you start bundling your JDKs I am not sure you can talk about a smaller footprint anymore.
Well, Swing is still bundled with Java. Netbeans uses the "Flat Look and Feel" and looks ok to me. I find Swing a lot more work compared to FX.
JavaFX used to be bundled with Java, but was removed. Some JDK distributions bundle FX like it was before, and adding FX to a new project is simple and straightforward. Maven packages it nicely, and it includes the platform-specific binary parts. If you can use Log4j, you can use JavaFX. Onboarding onto FX is not a high bar.
I can not speak to SWT.
There's several examples of "modern" UIs in FX, I can't speak to any of them, I don't pay much attention to that space. It's imperfect compared to the web, but not impossible.
It was. Even before it was more "bundled" with the JDK than "part of Java".
But, to be honest, that's a real nit. It's a standalone dependency, it's 4 lines in a POM file, it doesn't drag the internet with it, and it only relies on the JDK. So, while it's a large subsystem, it's a "low impact" dependency in terms of side effects and complexity.
> it's a "low impact" dependency in terms of side affects and complexity.
I wish that were true in my experience. But we have struggled to support {macOS, Windows, Linux} x {x86_64, arm64} with JavaFX and one .jar for our application.
My point about 4 line dependency is to point out that the barrier to entry into FX is low. What you are doing I would consider unconventional, as demonstrated by all of the hoops you're jumping through to achieve it. Packaging, yes, is still a bit arcane at this point.
My project, https://github.com/willhartung/planet packages macOS and Windows installers, and can be run as a fat jar on a Linux machine (tested on Ubuntu). You can look in there to see my POM file, and my build scripts. They're much simpler than what you're doing. I don't have a package for Linux, as I mentioned earlier, it was just a bit to confusing to figure out Linux packaging for my tastes, so I punted. If there was crushing demand for it, I'd look into it deeper.
None of those artifacts are "cross platform". It's not a single artifact for all platforms; they are platform-specific. I build the Mac one on my machine, and the Windows and Linux versions on VMs. Currently, the vision for Java distribution is to bundle the runtime with the application. Use jlink and the module system to narrow down your JRE, and jpackage to combine them into an appropriate platform-specific artifact. jpackage has to be run on each host OS. I do not have ARM versions of any of my code yet.
If you want to ship a cross platform jar, then it's probably worth your time to require a JDK with FX already installed. Azul does this, I think there are others. Then the FX, and it's platform specific binaries, are no longer your applications problem.
Also, there is a project, https://jdeploy.com that offers tooling and infrastructure to distribute native FX bundles, it even offers automatic updates. It will install its own JDK in its own directory structure to run your applications. If you have multiple applications, it will share the JDKs among them. It's quite clever, and perhaps worth considering depending on your requirements. I chose to not do that just to make my projects as simple as practical for the end user and myself.
I'll be fair, getting to this point was not drag and drop. jpackage and jlink can be fiddly to get started with. Documentation can always be better.
> What you are doing I would consider unconventional
It wasn't before JavaFX was removed from the Oracle JRE. That is my point. JavaFX used to be a trivial dependency, but now it is quite painful in otherwise identical configurations, definitely not "low-impact."
> If you want to ship a cross platform jar
We do. Isn't that the point of Java, "write once run anywhere"?
This program is also used as a library in autograders. We do not want to distribute 5 versions of each autograder for 2-4 assignments. The autograder should be distributed as 1 jar. Undergrad TAs are creating that jar and may not have knowledge of complex CI pipelines etc.
> then it's probably worth your time to require a JDK with FX already installed.
That is not appropriate here. This is an educational tool, and students are enrolled in other courses that use Java frequently. We should be able to use the same JRE that students already have installed — it is unreasonable to require installing a different third-party JRE to run a digital logic simulator. It also adds another hurdle for freshmen/sophomores who may not have a natural ability for juggling different JRE installations. (Source: We tried requiring Azul and it was painful for everyone.)
> I do not have ARM versions of any of my code yet.
We have >900 students in this class, so it is necessary to support M1/M2; in fact, a large portion of our students had M1/M2 laptops. It sounds to me like you could just provide a fat jar in your case, actually. Supporting aarch64 is where we hit problems with our fat jar[1], since the aarch64 native libraries have the same name as the x86_64 libraries.
To summarize my point: yes you can make the build/install process more convoluted and avoid this problem. But we have an installation flow that has been battle-tested by thousands of students for 13 years (download circuit simulator .jar and run it) we have no good reason to abandon. The combination of the arrival of M1/M2 and JavaFX getting yanked from the JRE has made supporting our existing (extremely reasonable) flow nothing close to "low-impact."
Makes sense. I worked a bit with Java years ago, but never with GUI stuff. Most of what I remember about it was drowning in boilerplate and being really good for coordinating a lot of developers around a big well-organized codebase. I probably couldn't write hello world from scratch without reference if I was being held at gunpoint.
> If you need to create a "just works without dependency b.s." experience in Java, you use the correct tooling for that, jlink.
At which point you are including a similar footprint as Electron does by shipping chrome. I mean, you must have realized I was talking about the inclusion of the JRE and whatever else is needed to make a java application run on a system as a standalone application.
So I am honestly not sure what you are arguing besides semantics.
when i see a java application i think, hmm, this is likely going to be bloated (but not necessarily) but for sure it's going to run.
if i want to create a cross platform application where i don't even have to think about testing on multiple operating systems, then java is going to be a serious contender.
and if i have to choose between an app written in java or electron, i'd probably pick the one in java.
so yeah, i don't understand what happened here either.
Java is great for making huge well-organized codebases with a lot of developers, especially if you've got good tooling support or a rich ecosystem of existing code to work with. Outside of that... If it was a good development ecosystem for native gui-based apps targeted at end users, why wouldn't the preponderance of native user-facing apps be written in Java, anyway? Ask nearly any experienced mobile app developer if they're more productive in Java on Android or Swift on iOS-- it's not even close. Sure, some of that is the OS itself, but a whole lot of it isn't. On the desktop, the one time I tried to make something with Swing I wanted to Fling my computer out the window. Clunky.
It's about branding. Swing and JavaFX look like other desktop apps (aka not cool to a lot of designers). And it has a high barrier of entry (ever tried Qt, AppKit or Win32?). Electron is easy, but it's shoehorning a document process into software interfaces.
Yeah the architecture for electron is absurd, but it's important to not relegate UI flexibility to mere aesthetics. For most of my career, I was a back-end web developer, but more recently I've done a lot of interface design after getting some formal education in it. The overwhelming majority of developers I've worked and interacted with conflate aesthetics and interface usability. Heck, even I did before I really started digging into it professionally. I think it's because the companies whose applications have experienced designers making good, usable interfaces will also likely hire visual designers to do aesthetic/branding work, and especially in waterfall environments, developers get it all handed to them as a "design." And for many reasons I will (uncharacteristically) not rehash here, FOSS lacks both.
However, a good interface and a pretty interface are not the same thing-- both are communication mediums that communicate through the software interface, but visual/branding/identity designers communicate things to the user about the brand as a marketing device, and interface designers figure out how to communicate the software's features, status, output, etc. with the greatest efficiency and reduce unnecessary cognitive overhead. Branding and identity is a very specialized form of design that's usually done by design houses-- even huge companies with big teams of designers often contract this work out to specialists. They might go so far as to recommend certain animations for interaction, but you don't want them designing your interface. In small companies, the designer will probably have to implement their design to conform to the design document, but they're using tools like gestalt, alignment, color grouping and type to create information hierarchies, existing expectations for layout and functionality, etc. that tell the user what they need to know as effectively as possible, and how to act on that in the ways they need to.
A good example of the power of interface design is in many dark patterns. You can simply have a plain system-standard dialog box asking if a user consents to some creepy analytics that nobody really wants, but instead of "OK" and "Cancel" in their normal spots, put "Off" in bold letters where "OK" would normally be, and "Consent" in non-bold letters where "Cancel" would normally be, and I'll bet you at least 60% of users would choose "Consent" having only skimmed the familiar pattern. That experience isn't branded or styled in any way-- it solely uses juxtaposition, pattern expectations, and context to influence users' behavior.
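To make that concrete, here's a minimal Swing sketch of that kind of dialog (purely illustrative: the class name, copy, and labels are made up, and exact button placement and the default button vary by look-and-feel and platform):

```java
import javax.swing.JOptionPane;

public class ConsentDialogSketch {
    public static void main(String[] args) {
        // Custom button labels replace the stock "OK"/"Cancel" pair; the opt-out sits
        // where the affirmative button usually does, and "Consent" takes the spot users
        // associate with "Cancel"/"make this go away". Swing buttons accept basic HTML,
        // so the opt-out can be bolded.
        Object[] options = {"<html><b>Off</b></html>", "Consent"};
        int choice = JOptionPane.showOptionDialog(
                null,                                              // no parent window
                "We'd like to collect anonymous usage analytics.",
                "Analytics",
                JOptionPane.YES_NO_OPTION,
                JOptionPane.PLAIN_MESSAGE,
                null,                                              // no icon
                options,
                options[0]);                                       // preselected option
        System.out.println(choice == 1 ? "consented" : "declined");
    }
}
```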
When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does. This is very counterintuitive for most developers because of the curse of expertise. We have a working mental model of how software works on the back end and consider the interface a tool to expose that functionality to the user. To users, the interface is the software, and if your design is more informed by the way the software works under the hood than the way a nontechnical user thinks about solving the problem they're trying to solve, it's going to be very frustrating for everyone who isn't a developer. Developers like to talk about marketing as the primary reason commercial software is king, and it's definitely a factor, but developers aren't magically immune to marketing, and you can't get much more compelling than "Free." To most users, the frustration of dealing with interfaces designed by and (inadvertently) for developers is worse than paying for software-- hence FOSS nearly exclusively being adopted by technical people.
There are many factors influencing adoption, including prior experience (I'm using it at work) and network effects (that's what my friends use). What native controls offer is seeing the whole OS as one thing. But with the advent of branding in software interfaces, people are expected to relearn what a control is for each piece of software (Spotify vs Music).
> When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does.
I have not encountered a UI kit that does not expose the 2D context to create a custom UI. But designers always want to redo native controls instead of properly using them, creating only the necessary ones. I don't believe anyone can argue that Slack UI can't be better.
Common practices do not equate to widespread approval. Most developers don't like Electron-- even the ones that build with it much of the time-- but Electron apps are everywhere. Designers' opinions are no more generalizable than developers' opinions.
As someone who's studied and professionally practiced interface design, I can assure you that there's nothing magical about system UI elements to the vast majority of users. Developers often focus on that because managing them is such an important part of developing interfaces, and developing with them is way easier... but in design, it's a small slice of the components that make a real difference. A lot about usability, as is the case with any other communication medium, is extremely nuanced, and native UI kits suck for creating that nuance. It's usually possible, but once again, especially now that HTML/CSS/JS isn't the accessibility catastrophe that it used to be, the extra effort to get polished results using native stuff just doesn't pay off.
As a long time developer before I became a designer, I am intimately familiar with the sort of blind spots and misconceptions developers have about interface design. Having a working mental model of software in your head significantly shifts the way someone works with computers. Developers see interfaces as a way to expose application state, data and functionality to end users, but to nontechnical end users, the interface is the application. That is not a trivial distinction. Many things most end users prefer chafe most developers. Most things that developers prefer are absolutely unusable to most non-technical end users. And most importantly, most developers assume that their technical understanding makes them better at knowing how interfaces should be designed, when in my significant experience, it makes us worse at it. The curse of expertise obviously shows up in documentation and education-- they're the two most obvious communication mediums in software. Most developers don't even consider that the interface is the most visible and consequential communication medium in any GUI application, and going based on your gut instinct about what that should be works as well as going based on your gut instinct about making a tutorial for nontechnical users. It doesn't.
I'm not saying that native controls are better because they are native, or that Electron suffers from some defect that impairs usability. With equal time and effort, software built with native controls will be more usable. A random user will not be able to distinguish which is which, but I dare say that the native one would feel better if the only difference is what was used to build the interface.
When designing with native controls and using common patterns of the OS, you considerably lessen the effort required for a user of that platform to learn the application. Most non-technical users only use one platform. Creating the same interface for two or more platforms impairs users on every one of those platforms. And I include the web as a platform.
The JRE itself is an external dependency that you need to bundle because it is not part of most Linux distributions. And even if there is a JRE installed it is not guaranteed to be able to run your Java application.
So yeah, if you redefine your problem to "run on systems with the right JRE" then Java makes things "easy" (your program will still stick out like an unpolished turd). But if you can just require stuff like that then you can also require the right dependency versions for native programs.
Java is objectively terrible for writing good apps on modern personal computers. The one platform that did adopt it (android) had to practically rework the entire byte code and VM as well as the set of APIs for writing apps to make it work.
Well, so I can only tell you as much as I know and understand. Some of this pulls in some outdated information too.
So, JVMs and languages that abstract the underlying machine are always going to have overhead. The original interpreted stack-based JVM model is really bad for performance because you can't do great optimizations on the code: you don't have a great view of the operands that are being defined and then subsequently used, and on top of that you have to either JIT or interpret code, which also has overhead. This is why Android's Dalvik VM originally started by converting the Sun byte code format to a register based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize native code: that means code caches, and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate it). Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience, even though it's platform-agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.
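For a feel of the stack-vs-register distinction, here's a trivial Java method along with rough, from-memory sketches of the two bytecode shapes (the listings in the comments are approximate and only meant to show the difference in form, not exact compiler output):

```java
public class AddExample {
    static int add(int a, int b) {
        return a + b;
        // JVM bytecode (stack machine), roughly:
        //   iload_0   // push a
        //   iload_1   // push b
        //   iadd      // pop both, push a + b
        //   ireturn   // pop the sum and return it
        //
        // Dalvik (register machine), roughly:
        //   add-int v0, p0, p1   // v0 = a + b
        //   return v0
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```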
I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.
Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was, so there's that too.
I think it's hard to beat the tide that is the web as a content and app delivery system. The web is also getting all the billions in investment from every massive faang.
> So, JVMs and languages that abstract the underlying machine are always going to have overhead.
Well, so JavaScript and WebAssembly aren't that great either in the end?
> The original interpreted stack-based JVM model is really bad for performance because you can't do great optimizations on the code because you can't have a great view of the operands that are being defined and then subsequently used, on top of that you have to either JIT or interpret code which also has overhead.
What a paragraph. But it's kinda false.
WebAssembly, you know, is also a stack-based virtual machine.
Javascript might not be a stack-based virtual machine, but you're interpreting it every time you run it for the first time. How is that faster than bytecode? It isn't.
In fact, modern Javascript is fast specifically because it copies the workflow of the Java HotSpot JIT optimizer: detect hot spots, compile them to native code, and run that instead of interpreting VM code.
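As a rough illustration of that detect-then-compile workflow on the JVM side, here's a tiny warm-up demo (not a proper benchmark -- use JMH for real measurements, and the exact numbers and compilation thresholds vary by JVM and flags):

```java
public class WarmupDemo {
    // A small, purely CPU-bound method the JIT can optimize once it becomes "hot".
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    // Times a batch of calls and returns the elapsed milliseconds.
    static double timeBatch(int calls) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) sink += sumOfSquares(10_000);
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println("unreachable"); // keep the work from being optimized away
        return elapsed / 1e6;
    }

    public static void main(String[] args) {
        // Early rounds typically run interpreted or lightly compiled; later rounds
        // usually run the optimized native code HotSpot produced in the background.
        for (int round = 1; round <= 5; round++) {
            System.out.printf("round %d: %.2f ms%n", round, timeBatch(1_000));
        }
    }
}
```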
> This is why Android's original Dalvik VM originally started by converting the Sun byte code format to a register based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize for native code: that means code-caches and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate).
Nope, that is totally not the reason. Dalvik was done because it was believed that you needed something that starts faster, not something that runs faster.
Those are 2 different optimization targets.
It was pretty well known since the start of Dalvik that Dalvik had very poor throughput performance, from 10x to 2x worse than HotSpot.
The reason why we don't have Dalvik anymore on Android is that it also didn't start that much faster either.
That of course is not because register machines are worse either, but because nowhere near enough optimization work was done for register type VMs compared to stack type VMs in general.
> Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience even though its platform agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.
Ok, but does your favorite, non-web GUI framework use the GPU, and use the GPU correctly at all?
Even on the web it's easy to "accidentally" put some extremely expensive CSS transformations and animations and waste a whole bunch of GPU power on little things.
> I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.
You're mixing things here. "Sun products" were very expensive UNIX workstations and servers. Not things for your average Joe. Those very expensive Sun workstations and servers ran Java fine.
Java itself is a very weird "Commoditize Your Complement" ( https://gwern.net/complement ) attempt to commoditize this exact very expensive hardware that Sun was selling.
From Sun. Marketed at very high expense by Sun. A self-inflicted self-own. No wonder Sun no longer exists.
> Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was so theres that too.
C# not having a nice GUI is another story, that of Windows-land never having had anything above the pure Graphics Device Interface stay stable for long.
You're living in the past. Applets and Flash lost against the HTML/JS/CSS stack and Oracle owned up to it. Applets are terminally deprecated now.
Edit: admittedly, one of the reasons for that was that the sandbox was indeed prone to security holes. Also, the developer ergonomics of the SecurityManager were unsatisfying for both JDK and app developers. Good riddance.
Golang's only consistent advantage over Java is lower latency on compilation, startup, and GC. OpenJDK will eventually level the playing field with Project Valhalla. In terms of FFI and language features Java has already caught up. And faster startup can be achieved with CRaC.
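On the FFI point, for anyone who hasn't looked recently: a minimal sketch of calling into libc through the finalized Foreign Function & Memory API (assumes JDK 22 or newer; the symbol and descriptor below are just an illustration, not a recommendation):

```java
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.SymbolLookup;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class FfiSketch {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        SymbolLookup stdlib = linker.defaultLookup();            // symbols from the C runtime
        MethodHandle abs = linker.downcallHandle(
                stdlib.find("abs").orElseThrow(),                // int abs(int)
                FunctionDescriptor.of(ValueLayout.JAVA_INT, ValueLayout.JAVA_INT));
        System.out.println((int) abs.invoke(-42));               // prints 42
    }
}
```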
The crucial difference is that these technologies are embedded differently. Java Applets had access to dangerous APIs that had to be restricted by the SecurityManager. Also, the JVM was installed externally to the browser, turning it into an uncontrollable component which made the browser vulnerable in turn.
The newer technologies were designed from the beginning with a well-defined security boundary and are based on a language that was designed from the beginning to be embedded. Everything is implemented within the browser and can be updated together with it.
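For anyone who never touched that era, here's a minimal sketch of the SecurityManager model being described: powerful platform APIs gated by a runtime policy check. (SecurityManager is deprecated for removal in recent JDKs, and on newer releases you may need -Djava.security.manager=allow for this to run at all; it's shown only to illustrate the mechanism, and the policy below is far more permissive than a real applet sandbox was.)

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FilePermission;
import java.security.Permission;

public class SandboxSketch {
    public static void main(String[] args) {
        // Install a policy that denies file reads but lets everything else through.
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkPermission(Permission perm) {
                if (perm instanceof FilePermission && perm.getActions().contains("read")) {
                    throw new SecurityException("sandbox: file read denied");
                }
            }
        });

        try {
            new FileInputStream("/etc/passwd"); // triggers checkRead -> checkPermission
            System.out.println("read allowed (policy not enforced)");
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        } catch (FileNotFoundException e) {
            System.out.println("file not found");
        }
    }
}
```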
As someone who uses Linux as a daily driver, I can recognize these gargantuan apps a mile away and stay away from them. They are absolute hogs of system resources, and for something simple like Etcher there's no excuse.
Things like Electron are good for devs but bad for users. We have more computation power than ever and yet programs still run slow.
Oh, it gets better. Even the default Weather app shipping with Windows 11 is also an Electron pile of trash that uses ~520 MB of RAM. Just let that sink in. 500MB of RAM just to show you the weather forecast for the day and week. That was my entire system RAM of my Windows XP gaming rig.
Same for the Widgets app, it's not only bad because it shows you news and ads when you open it, it's worse because it's also, you guessed it, an Electron app.
Some VP in Redmond must be off their meds.
I assume Microsoft just can't find devs to write C#, their own damn programming language for their own OS, and one of the dozens of frameworks they have for Windows GUIs, that they need to resort to using Electron for what are just Windows-only apps.
The Weather app in Windows 11 is a UWP .NET Native wrapper around WebView2 controls. It's exceptionally silly that it's basically just a web browser with predefined tabs and that it uses so much RAM, but it's not Electron.
Good lord, that's crazy, haha. You'd think with all of their different frameworks, one would have been more suitable than starting from scratch with a browser tab, jeez.
I recently upgraded to 10 because of Steam requiring it in a few weeks, and it's been an adventure. Lots of crashes and restarts that I didn't ask for. I really don't know who exactly modern Windows is for, because I'm a gamer and programmer and it's not been good for either of those tasks...
Windows 7 was solid and I almost never had issues out of it. It booted and got out of the way.
Worse for users than nothing? It shouldn't be a default, but if it's that or nothing-- as it often is when it comes down to limited resources-- I think it's better than nothing. If you're looking to make a useful tool for a broad audience that must run locally, you have to support Windows because that's where 80% of the users are. You should support OSX because that's where 15% of the users are. That's two codebases with dramatically diminishing returns. You need a damn good reason to justify adding ANOTHER codebase on there to scoop up the remaining handful of users on Linux.
Also, aside from startup time, I don't have any trouble with electron apps running slow on my machines. I think many developers are conceptually annoyed with the absurd, bloated architectural underpinnings rather than the experience that translates into when using them. Perception means a lot when judging performance, and I'll bet with most end users using, say, slack, the speed of their internet connection affects the speed of their work more than the speed of the application.
I'm really not swayed by "we must use this turd because the alternative is nothing". "Nothing", to me, is a technical challenge and a sign I should probably start writing the thing myself.
Yes, not everyone has the skill or time to do that, but it's also no reason to accept half-baked solutions that don't take the user's system resources into account. Compute may be cheap but it's still a resource we need to use wisely. Not everyone is running a system like the developer's Macbook Pros on 5GHz wifi hooked up to fiber.
If you only care about the best technical solution, and don't care about economics, then you almost, by definition, only care about what existing FOSS users are doing, and I don't find that scope limitation useful in any way. I love FOSS. I've been a regular contributor to FOSS for decades. But the impact user-facing FOSS apps have on the overwhelming majority of users is miniscule, as is the comparative number of regular FOSS users. Server apps? Apps developers use to make the apps everyone else uses? Absolutely. A music player? A chat app? Nope. Software that makes a visible impression on users is commercial. That's just reality. And companies that don't consider ROI on the products they create aren't companies very long.
The most popular as-is FOSS app for users is probably Firefox, with a browser market share neck-and-neck with Opera and Samsung Internet, and everything less popular might as well not exist among probably 99% of users. Why? It's certainly not performance, I assure you. It's because they're poorly designed and users find them infuriating to use. Sure, you can find people complaining about their bloated Slack client being slow on their machine. You think that's bad, find a professional photographer and ask them about the one time they tried to use Gimp.
I spend a lot of time talking about how FOSS could be a lot more usable to end users, and technical supremacy isn't it. If you showed your average end user an electron app with an intuitive, professional design that gets the job done well enough, and then you show them the blazing fast linux native version with a typically awkward homespun interface, I will eat my hat if they don't choose the electron version. Sure, in a perfect world, all tools would be forged specifically for their intended purpose. In reality, you are in a minuscule percentage of people that would rather have nothing than something which doesn't perform optimally because of its bonkers architecture. But if you actually want to maximize the usability of any given tool, the only reason developers automatically go to performance is because to a hammer, everything looks like a nail.
> It shouldn't be a default, but if it's that or nothing-- as it often is when it comes down to limited resources-- I think it's better than nothing.
Sometimes it is. Sometimes it's not. It's certainly an option that's very efficient for dev resources, which is often the primary limiting factor. It's certainly the only real option if you've already got a team of web developers, which is very common.
The current state of commercial software supporting linux with native apps is a pretty good indicator of how companies are viewing this equation. The amount of resources it takes to make a native java app is vastly different than the amount of resources it takes to make a native electron app. If you don't understand how that would be something that would open the possibility of supporting linux in many cases, I'm not sure what to tell you.
Look, business is about maximizing profits and minimizing costs. Business should absolutely not be looked to as an example of an entity that makes sane or suitable tech decisions over the long term. Their goals are different from everyday users, and both are different from power users and programmers.
Why should normies suffer through worse software than those that know what they're doing? Web-based "native apps" are that worse thing.
Reread what I wrote. I'm not saying they're sane technical decisions and I'm definitely not saying that electron is an objectively good architecture for user-facing apps, and I'm definitely not saying that profit is a good way to determine an ideal engineering strategy. But trying to discuss the viability of an option in real-word software creation without acknowledging that profits are often the driving factor in these decisions means you're not really discussing it.
Part of that is because profit should not take precedence when making a technical decision unless you are a business.
The other part is that if we accept only things that lead to easy profit, we'll avoid all sorts of things that take more initiative but become better products. Short-term stock price chasing is not a way to make tech decisions. It's a way to make profit decisions.
I seem to be on a website where nobody can picture doing anything without taking money from someone else.
> I seem to be on a website where nobody can picture doing anything without taking money from someone else.
And by the way, I'm not remotely capitalist, but I don't have the privilege of expecting the rest of the world to operate like I wish it would. In the US, unless you're independently wealthy, you've got to work to eat and feed your family, as I do, living together in a tiny apartment in a modestly priced city. Unless you're well-supported enough to spend a ton of time volunteering, which I am currently not, you've got to pull in money for what you do far more often than you give it away. Ignoring the realities of resource distribution in our society means you're more interested in patting yourself on the back for political and ethical purity than actually making a difference in people's lives. I've met a lot of people like that in political circles: their parents supported them and they were more interested in 'cred' and feeling cool than progress.
After years of hoping volunteer-driven FOSS would revolutionize our world, I now realize that the preponderance of developers are way more interested in coding as an intellectual exercise than solving real people's problems with their code. Think I'm wrong? Try putting in a PR, or heck, even a comment on an issue proposing some way to make something easier for non-technical users, and if anybody even responds, it will be to flame it into outer space while essentially saying "they should just RTFM," or reflexively bikeshed it into oblivion. But what about all of those amazing well-loved heavily used FOSS apps popular among non-technical users, you might ask? Blender? Signal? Tally up the ones that aren't grant-funded and managed by people tasked with making the biggest impact possible with the resources they've got. I'd be pretty surprised if any of them had the luxury of choosing their tech stack completely independent of the resources required to develop with it.
I'm betting you haven't actually had to make decisions like this in a role where you were responsible for getting the most out of a limited set of resources. It doesn't even have to be commercial: even in well-funded non-profits that I worked with, maximizing the impact of your work is more important than maximizing the quality. If we were talking about shaving 5% off labor costs to put the extra trim on the shareholders' Lamborghinis, that's one thing. If good-enough quality will make triple the impact per grant dollar spent, the correct answer is pretty clear. If technical purity means the project isn't feasible with the allotted budget, it's even clearer. It's about how much you can do with the amount you have. Nobody who gives you money to promote childhood STEM education is going to care how much more technically correct your solution is if you run out of money before you can ship, and they certainly aren't going to give their new cancer-research grant a haircut because you wanted to save on end-users' ram usage. Using the existing giant labor pool of web developers to make an electron application isn't just a little less resource intensive than making a Java application for nearly any purpose-- it's vastly less resource intensive. That's just plain old reality. I'm not saying it's good, or worth praising; it's just the way our world works outside of volunteer-driven FOSS.
When I got paid by a nonprofit to work on MIT-Licensed FOSS full-time for years, these questions were as important as anywhere else where resources aren't free. If I had my choice, I'd have chosen Elixir and Phoenix to do many of our web projects because BEAM had built-in tools to solve a lot of the odd architectural problems we had. But if you suddenly had to hire a few Elixir developers to do something that would work fine with Node.js, just not as elegantly, that grant isn't going to get any bigger just because I'm making sound technical decisions. At the end of the day, your software is valuable for what it does for people-- not what it is.
And really, how much does using electron limit the actual utility of the tool itself among people with limited compute resources? I don't mean what's the difference in the memory footprint, I mean how often are people unable to solve their problem using an electron app because their computer can't run it? You obviously don't need bleeding edge hardware to run an electron application as hyperbolically suggested-- it's not much different from using, well, Chrome. Projects that work with really really under-resourced populations, such as the houseless, don't make desktop applications anyway-- one that I can think of off the top of my head used SMS because it was so much more readily available than a computer you can install an app on.
Developers like to focus on things like memory and compute resources because they know how to address them. Hammers looking for nails. But if you really dig into users' biggest frustrations with software, as I have in my interface design work, performance is almost never mentioned. When it is, they're almost exclusively dealing with shitty internet access, not slow local application execution. If FOSS projects wanted to do more good in the world rather than having a hobby project to nerd out about, they'd actively solicit designers to figure out what problems real users are actually running into trying to solve their problems rather than just assuming the solution is more technical correctness. They could be one of the few volunteer-driven FOSS projects that isn't solely usable by people who already have a working mental model of software development.
I think that anyone who recommends Electron to a young developer interested in learning native application GUI programming should be slapped. It's the wrong tool for a specific job. That's a pretty niche use case to judge overall worthiness against, though.
I suspect the biggest issue with electron is that it leads to lots of devs packaging various V8 versions individually with their app. On Windows they have been trying to get devs to switch to something called WebView2, where the OS provides an Electron-compatible Chromium whose resources, unlike Electron's, are centrally managed by the OS.
Targeting only a handful of frozen releases is not ABI stability. ABI stability is when any app can be compiled off any release and run on any future release (and ideally older releases too).
> ABI stability is when any app can be compiled off any release and run on any future release (and ideally older releases too).
Hang on, if that's the standard why is MacOS getting a pass? I'd believe that Windows meets that bar, but I see posts on a routine enough basis about Apple forcing rewrites, unless I really misunderstood something there.
Apple's actually quite good at this, but they do break things on purpose from time to time for reasons which they announce pretty publicly at WWDC when they do (32->64bit, deprecating GL, etc).
So, for example a dev can target an app at iOS 8 and it still works fine on iOS 17. That's almost a decade of OS updates that didn't affect an app. Here's an example:
Similarly, it’s possible to compile a Mac app that targets PowerPC, x86, and ARM supporting all of the versions of macOS implied by that spread of CPUs.
X Lossless Decoder[0] is one such app, supporting all three archs and Mac OS X 10.4 up through macOS 14.x (current) in a single binary. It’ll work just as well on that circa 2000 400Mhz G3 iMac you picked up from a local yard sale as it will on a brand new M3 Pro MBP.
The Apple ecosystem of platforms takes steps forward all the time, and they do a pretty good job of keeping binary compatibility across releases while they are at it. They partly do this by only shipping C, ObjC and Swift platform frameworks though.
Maybe I'm just too ignorant to know the pattern that determines whether I need to specify 'dev' and 'version' on a package or some random trailing '1' or '0', but the first linux distro with a sensible and consistent naming scheme for packages is the one that wins my heart.
The number is a version for when they provide more than one that can be installed at once (excepting where it's part of the library name, presumably like xcb1).
I.e. at some point you could probably install libgtk-3-0 and libgtk-2-$something at the same time. They likely leave it that way when they get rid of libgtk-2 so that existing tutorials that reference libgtk-3-0 don't fail because the package is now libgtk.
The libwayland-server0 one does get me, I don't know why there's a 0 at the end. I do know that in /var/lib there is often the same library with various .$number endings, but I've never looked into what that accomplishes.
Except something as fundamental as libwayland should not have an ABI version because the goal should be to never break the ABI. Well, it should also be called libX11.so.6 and extend that ABI instead of making a new ABI.
Separating runtime libraries from development files is a Debian convention. If you're just using a library, the non-dev package is sufficient. If you're developing with that library, you'll also want -dev, which will install the headers.
Nothing special about the trailing 1 or 0; that's just part of the package name/version.
Arch and its derivatives (mostly) don't have any of that.
Though -dev means stuff only needed when developing against it.
Number means major version number (so compatibility number). This way you can easily install multiple major versions as dependencies for different packages without clashing or breaking.
You might but most people don't want OpenSSL headers cluttering up their system just because they installed something that needs to make TLS connections.
If you are building a program, install the -dev package. Numbered -dev packages are usually not relevant; there should be an unnumbered one that forwards if needed.
During the build of a program, you record which numbered suffix (= ABI version; it should be the same as the .so.N suffix, but that's not how you look it up) is used. This allows you to install multiple incompatible copies of the library and have them be used by the appropriate programs, though distros usually prune the old ones after each major release. This should be automatic. (If there's more than one number, all but the last are part of the library name itself. There might be exceptions for BSD-style libraries?)
When installing, you should be automatically using the dependency you recorded during the build.
This is one of the biggest issues holding back the Linux platform as a whole. It's often cited as a strength, but I don't see it as such. Many times, the plethora of choice is pushed forward by devs as an advantage of Linux ("you have so many choices to pick from!") but for many users this presents the paradox of choice - and pushes many people back to the platform they came from.
You get less choice on macOS and Windows, but the choices available are much more polished and less fragmented.
i.e. How many different tiling window managers do we need? Can't we take the best tiling wm's and start developing better docks and applets for them? How about apps and launchers that integrate with the tiling WM paradigm? Instead we end up with 10 different varieties of tiling wms and half-baked half-assed workarounds and programs for them.
People don't make choices by exhaustive analysis; they consider a small number of options and, by popularity or proximity, select one and run with it. Successful ecosystems make it reasonable to pick a highly visible option and come out with something good enough. The proliferation of Linux distros is largely small differences that don't meaningfully decrease fitness, because only a tiny minority, who themselves had little problem finding a distro, think it's an issue.
By contrast, issues like Wayland, wherein your plumbing and hardware suddenly matter and you have to become at least somewhat familiar with the plumbing in order to select a usable set of options, DO matter, because they increase the chance of things crapping out when you metaphorically stroll down the aisle and pick a box, instead of ending up somewhat satisfied.
It honestly makes little sense to compare Mac or Windows to Linux. In the eyes of 95% of consumers an OS is either an immutable characteristic of a computer or a feature thereof, and they aren't interested in it as a separate product. They might well buy a computer with that OS if the overall package makes sense, but they sure as hell won't install it on their Windows or Mac machine. Those who are technically literate enough to be satisfied are unlikely to be confused if people add 3 more Ubuntu derivatives and 2 more Arch ones.
But writing a tiling wm is fun. Writing software collaboratively is the point in itself, not just the means of getting some kind of artifact in the end.
Being an open source developer means having agency over your goals and means to get to them, which you often don’t have when doing commercial software development.
You sure end up with rough edges, but the IKEA effect kicks in and you like it anyway.
Right, and this holds the platform back. I'm all for hobby development- I've been a supporter of open source software for over 2 decades. But nothing much has changed since that time in terms of UX and user-friendliness. (Yes I am aware a lot of things have changed, but as much as things have changed - much of it is still the same.)
It doesn't hold the platform back, because if the hobbyist didn't scratch his own itch he was never going to spend 1000x as much development effort creating a Photoshop competitor, or even join the effort to triage Gnome bugs. We would almost certainly just be short their contribution, for no benefit. This whole faulty analysis relies on treating open source developers as a fungible resource, like employees of a firm that ought to be tasked with something different. It's not so.
Agreed. If I couldn't write my own packages for my OS and scratch my own itch, I'd never bother contributing to the big stuff.
The appeal behind libre software is the power to go your own way. If the only way to access it was to interact with groups and get your things added through a political process, I'd have found some other thing. Too much of our world is wrapped up in processes and busywork and other empty interaction, it's a waste of time.
Give me the code, the license, a text editor, and a nice cup of coffee, and I'm good. Hold the interpersonal drama. :P
This is a problem that is meant to be solved by distros. People who don't want many choices should be using a distro that makes a bunch of choices for them, instead of pushing to remove choice from the ecosystem altogether.
Everybody using the same software creates a monoculture which is more prone to attack than a diverse ecosystem. There's a reason Windows was targeted so much by viruses: not only was the target BIG, but it was also consistent. If you broke into one normie's Windows machine, you could probably break into the rest. You're going to have a harder time of that if you target Linux.
Also, it's libre software. Unless you pay someone to do something for you, most of it is volunteer activity. And who are we to tell someone what they should do with their volunteer time?
It's also an indictment of the dominant social structures if there are all these groups and nobody wants to join them. They could start by getting rid of Codes of Conduct that give them permission to punish you for out-of-project behavior. Try that.
> People who don't want many choices should be using a distro that makes a bunch of choices for them, instead of pushing to remove choice from the ecosystem altogether.
You don't need to remove choice to push better UX and software. It's not a zero-sum game here; we can foster better conversations with people in the Linux community and start talking about what would make the experience better for the average user, not just technically proficient power-users.
> Everybody using the same software creates a monoculture which is more prone to attack than a diverse ecosystem.
That's wrong - you're using a sort of strawman argument. What makes Windows a monoculture? There's a much larger, thriving developer community on that platform with many more developers making free and/or open-source software.
> You're going to have a harder time of that if you target Linux.
"Achkshully..." Linux has had some serious bugs, but the damage was relatively low to end users because of it's low adoption rates. It has very little to do with "Linux is more secure because we're diverse" and everything to do with "1% of end users use Linux, so the target is less likely to be worth my time." And furthermore, that 1% - the majority will be more technically proficient and less likely to be tricked into running arbitrary software.
> one normie's Windows machine
This. Is. The. Problem. Framing an end-user as a mentally deficient 'normie'. Why? What's the point and what's to be gained from this?
> And who are we to tell someone what they should do with their volunteer time?
No one is suggesting we tell hobbyists / volunteers what to do with their time. What people are suggesting is that we spend a little time talking about tackling common problems together, rather than re-inventing the wheel every single time.
> This. Is. The. Problem. Framing an end-user as a mentally deficient 'normie'. Why? What's the point and what's to be gained from this?
Most end-users are, okay? If anything unusual happens at the computer they come running for the nearest nerd, poorly explain the problem, waste the nerd's time when it turns out it was simple, and the most important part, they don't learn from the experience.
It also serves as a useful distinction between types of users. Normies browse the web, maybe use an e-mail client, some office software, maybe Steam or Zoom, etc.
Power users want to explore the fullest capabilities of their computer. They're going to want different software and different experiences from the normies.
Developers will want to explore how much they can do and control. That's another type of user with different needs and software preferences.
Frankly, I'm not interested in talking about the needs of average users, because more average users will cause communities to believe their software doesn't need advanced features because there are greater numbers of simple users. Flooding Linux with normies won't make the software any better. It will create an Eternal September problem where you'll get loads of criticism and bug reports and problems, but almost nobody from that crowd will have the insight necessary to help devs decide on solutions.
There really isn't any third option. Everyman software ends up toddler-fied, and bespoke software overloads the normies. Both groups cannot be served by the same software. I challenge you to find any piece of software that adequately serves both groups.
I would say there are even more launchers and docks than tiling WMs.
Also I don't know what would need to be better about them. Something like Ulauncher or Albert is much more powerful than what is available on other platforms, and I don't know how one would improve their UI.
Maybe the work needs to go into developing better UX, not so much UI or what the various tools can do. Look at PySimpleGUI (which I was reminded of by a recent post here) for a good example of a nice project for the masses - we need better software for users.
More software? Wonderful! I'm all for it, but before starting something from scratch why not contribute to something that already exists, or possibly pick some project that was abandoned or simply needs to be worked on for example to be built with newer compilers, run on newer hardware, etc. Which makes me wonder if there is somewhere a database of dormant/dead projects that would deserve to be resurrected.
That something that already exists may not be built in the way you are aiming for.
That other something might be led by a dickhead that you can't get along with, or a community that's not receptive to your suggestions or patches.
Social processes and blockages can lead to losing motivation to contribute. Not everyone is cut out to be this happy-go-lucky team player that has no personality.
As for myself, I'd rather start from scratch because 1) I'll control every variable, 2) I don't have to mess around with some existing social and tech infra, 3) I don't have to talk to anyone and convince them my ideas are good, 4) who wants to spend more time arguing or discussing than writing code?
The solodev experience is just better. You're not hamstrung by politics and social process. Being a team player has only led to misery for me.
Yes, I would echo this. In most categories there are loads of subpar efforts when putting all that effort behind the top few would yield a handful of excellent apps.
Collaboration can seem daunting and hard work, but it's work that will yield results which is time better spent than that devoted to projects that end up abandoned.
I'm not convinced these efforts to put everyone's ideas in one place and one software are good. Often, design disagreements arise which are incompatible. Under this model of "everybody love everybody" lameshit, if you can't get the group to accept your idea, it doesn't happen.
You don't run into this problem when you write alone.
I wonder if any of the major Linux package repos collect statistics that let you search for apps that are both (a) widely used and (b) unmaintained/feature-incomplete.
There's a lot of complaints that there isn't much in the way of tooling to create cross-OS compatible apps but I disagree. Just looking at solutions which aren't Electron:
- Telegram uses Qt and ships a performant native app across all three OSes
- Flutter compiles down to native code across all three (and mobile)
- Kirigami is a QtQuick framework that will give you an executable app across all mobile and desktop targets
Go and build your app. There's no reason this needs to be an excuse to flame on Linux.
Except Flutter is a subpar experience on every platform because of just how slightly different it behaves and Kirigami / Qt Quick is just not a viable thing, there's barely any community and/or "serious" project on Windows, macOS or mobile. I'm not trying to discredit the KDE folks, but that is reality. Also, Qt Quick licensing is not so easy (or very expensive if you go commercial) for iOS.
Whereas Electron has a) big major corporations betting on it and shipping extremely polished, production-ready tools, and b) a gigantic community and library ecosystem, plus no obscure programming language (such as Dart) to learn.
Telegram really goes the extra mile, though. Their client is amazing.
> Except Flutter is a subpar experience on every platform because of just how slightly different it behaves
I'm not sure this is true. If you check the pulse of social media, most Flutter devs think Flutter is great for Android, iOS and desktop but needs help on web. Canonical at least rewrote their installer in Flutter and anything new they develop is in Flutter. Devs are hoping that the WASM support currently shipping in `master` [1] will solve for web.
Same as you, most of the people using iPhones and Macs that I know.
Additionally, the official Telegram site points to the Swift client for its Mac download (the Qt client is marked as “Telegram for PC/Linux”) and the Swift client is the first result on the Mac App Store, which would make it seem likely that’s the client most Mac users are going to have installed.
The problem is OSS software not even trying to compete with the market. People using OSS software take it for granted that the UX is going to be subpar, and it really is. Regular proprietary software faces the risk of its users not paying, and therefore adapts to make the end user experience great. OSS usually doesn't have that risk. Open source needs to be exposed to risk from end users.
I tried to change that with Notes[1] but I find it hard to live solely on ads. I tried to incorporate paying for some premium features (like Kanban) but the app is still fully FOSS, therefore everyone can compile it from source easily.
I think I'll close-source my next app[2] before it launches. I just can't risk not being paid for my hard work. I also believe the Linux community will benefit from that, since getting paid will allow me to invest more in making UX-focused apps on Linux. I might open source some parts of it tho (or maybe all of it in the far future).
There is a large intersection of Linux users and people who cherish their freedoms. For many people, using non-open source note taking software is a non starter because of the vendor lock-in and privacy concerns. Thus, you'll probably lose out more by not open sourcing the program than by open sourcing it and risking that some people don't pay. I wager a lot more people are willing to pay than to use proprietary software. Also note that open sourcing doesn't require you to distribute the code to everyone; you can give source code only to paying users if you are charging them money, though they will have the right to modify and redistribute. If they don't care about open source, they might as well just use Notion or Evernote. If someone is looking for an open source state of the art note taking program, I recommend Logseq.
I'm a big believer in open source software. I've had wonderful contributors and I'm using quite a few open source libraries/components in my own apps. That's why I open sourced my own app.
I've tried many different options so I could make a living with it (ads, donations, premium features). There are definitely more ways to explore. But for now, I need to ensure that my financial situation is stable before anything else. Then, maybe in the future I'll be able to afford tinkering with a sustainable way of open sourcing.
I've also started to notice many VCs starting to invest in open source software (just today saw Godot featured on HN). So that can also be a potential route.
It's great that you generally believe in FOSS. You deserve to profit from your work. I wouldn't be opposed to paying (the app looks interesting). But I'd definitely want the security that, should you abandon the project, someone will be able to perform some maintenance. This is why I'm fairly okay with releasing the code late or licensed as "FOSS if unmaintained". Ardour's model of charging for binaries doesn't seem too bad either.
There are some programs that sell binaries but also give away the code.
The premier pixel art editor Aseprite is free (source available, not open source) if you're willing to build it from source yourself, but most users buy the prebuilt binaries. I think this way you satisfy both the privacy conscious / cheapskates, and also the developer's economic needs. RMS doesn't approve I'm sure, but you can't please them all.
if the license is GPL you can give away the source and sell binaries. you can even restrict giving the source to those that buy your binaries.
red hat is doing that.
the only limitation is that you can't prevent anyone who has the source from building it themselves and giving away those binaries. but most people who do pay for the binaries won't do that. so whether this strategy works depends on who is your target audience.
if you are targeting developers you probably won't sell much. but other audiences may be fine. i don't know.
I think you're making the right decision. Open-source is extremely difficult to make even a living wage off of - the number of people who do is orders of magnitude smaller than those who make a living off of commercial software.
It's definitely possible to make a decent living with FOSS software. One of the main contributors to my note-taking app is @bjorn[1] who developed the awesome Tiled editor[2] and managed to figure out a sustainable way to earn a living from donations/sponsorships. There are probably more instances of that. I tried to go the same route, but other than a $5 monthly payment on Patreon (thank you awesome contributor!) I couldn't figure it out.
I hate serving shitty Google ads. They are also not a stable source of income and the revenue is low in my case. It's time for me to move on.
Oh, yeah, it's totally possible - just do a HN search and you'll find people who are doing it (e.g. Andrew Kelley, the Zig designer - and he's making a programming language, which nobody will pay for in this day and age!).
However, like being a social media "influencer" (e.g. Twitch streamer), it requires a lot of both hard work and luck, has a limited amount of "capacity", and isn't a viable source of income for most people.
>Open-source is extremely difficult to make even a living wage off of…
So are you saying it’s even possible? I wish to learn at least one way to do it. A few would be even better. How can you realistically make a living this way?
OK, not a living, just partial income.
OK, not even partial. Some income?
I am not making fun of it, I am honestly looking for some way.
I understand that if you maintain 10 years some essential part of essential software project then people eventually come to you for consulting or will pay you to shift this part toward their requirements.
But other than that? Like, can you write a useful free software program and really make some money from it?
The main options that I'm aware of are: selling support (RedHat), donations (I can't recall examples but they do exist), open-core (GitLab), and paid hosting (WordPress). In the past, there was also the option of selling physical media (which the FSF did for GNU projects, for instance) but that's been killed by the Internet.
All of these models are very hard to get working (e.g. the donations model requires that you have a very large number of people interested in your work who are generous with their money, kind of like Twitch streaming), but they can work. They're not an option for the majority of programmers, though.
I understand your frustration, but I have a couple of counter-examples, and I don't offer them in any derogatory way.
1) Notes looks a lot like Standard Notes. If you built Notes using copyleft OSS, it's hard to complain unless you pay "royalties" to all the programmers whose work was incorporated into yours.
2) I am probably atypical. I have used Linux for over 20 years, but I can't write so much as a Bash script or a Perl one-liner. Yet I have spent a whole lot more money on software than I would have (or most individuals do) on other platforms. Because I can't code, and I appreciate the work of others and the freedom it gives me, I support the following (not exclusive):
* Kaisen (my distro) and Debian (the distro Kaisen is built on) both
* the FSF for both core utilities and Emacs
* My DE (KDE), and even if I prefer the UI of other applications I mostly try to use those provided by KDE, the FSF, Debian, or Kaisen (e.g. Akregator over RSSGuard) unless I donate, like:
* Mozilla (for Firefox), but
* Betterbird for my email client
* Syncthing
* LaGrange (gopher/gemini)
* Joplin
* ClipTo
* and more, including services (e.g. SoulSeek, envs.net)
Some annually, some monthly, some one-time.
In fact, when Standard Notes first came out I paid for a seven year subscription, and then ended up using it for about 2 months before becoming dissatisfied. There are some, like Alacritty, that I use extensively but which I have found no way to donate to, but I try to keep it to a minimum.
There are some amazing people that valued what I'm doing. One person even donated $1000 years ago. But that doesn't make it sustainable.
I think it's time for me to compete on the market. I believe Plume is going to be a good addition to the pool of productivity apps today, but I'll let people judge that when it's released. I hope you will sign up to get an update[2] and let me know what you think of it when it is released.
BTW, are you suggesting I copied Standard Notes? Not really, I don't like their design. Notes has been in active development since 2014 (first commit on GitHub in 2015); back then I didn't even know about Standard Notes, and the core design has barely changed.
Personally I think KDE is light years ahead of macOS and Windows.
In fact I find macOS to have one of the worst window managers of all the popular platforms (sure it’s pretty and easy to use, but trying to do anything beyond the basics requires magical incantations that are impossible to discover organically).
Well, I'm a Qt programmer (and love it!), so I know a bit about this ecosystem. Unfortunately in terms of UX and aesthetics, KDE apps don't come even close to the macOS ecosystem.
Can you share some of this ‘love’ with me? Because my experience with Qt so far has been: it does ‘who knows what’, ‘who knows when’ and ‘who knows why’.
And if you wish it to work the best tactic is “do not change anything, it will surprise you and there will be no way to go back” :)
Maybe I am missing something important in my understanding? I was trying to tweak some projects for my needs. I successfully did it but it was a nightmare.
If you learn how to separate your logic in C++ and your views (GUI) in QML you can achieve the best of both worlds. C++ is fast and I love programming with it. QML is easy and powerful; you can create slick looking apps with beautiful (and easy!) animations.
I bought the Udemy course of Bryan[1] and learned QML in one day. The next day I already had a prototype for a Kanban[2] that is based on Markdown.
Thanks! I know, it's totally possible to create beautiful apps using Qt. The problems are:
1. Most people in OSS don't care about UX and aesthetics (most of the people using Qt)
2. The Qt Company's examples are absolutely ugly (with the exception of one). If that's what you show developers they would either not believe it's possible to create beautiful apps with it or that it's too hard.
KDE is the definition of all UI and no UX. Like there's no one actually planning out how a user would/should actually use the thing. That's why I generally prefer Gnome despite its faults. Even if I don't agree with every decision and lack of customization, at least it feels like there's some vision on how it should be used.
It is very naive to believe that professional equipment do not benefit from well-thought design and UX.
Design does not have to be fancy colours and flashy animations. Design does not mean Fisher-Price UI and massive amounts of whitespace. Ask Dieter Rams.
The problem is that most design and UX is not "well-thought" but just cargo-culting and bad intuition. Almost never is there any real user study to show the design improves things in the real world.
> It is very naive to believe that professional equipment do not benefit from well-thought design and UX.
I know. I never claimed otherwise.
> Design does not have to be fancy colours and flashy animations. Design does not mean Fisher-Price UI and massive amounts of whitespace. Ask Dieter Rams
I never said it did. You’ve completely misinterpreted my comment.
My point was that commercial operating systems have design specialists and the end result is they suck for a great many professionals. Whereas KDE, despite “not having a design specialist” (I don’t actually know if that’s true, just quoting the GP) feels like it has the same well thought out design that specialist technical hardware does.
I’ve worked closely with designers (because I have zero design skills myself) and seen first hand how an arguably less beautiful design can actually be more intuitive for users
So at no point am I claiming that design is just about fancy graphics. If anything, I’m making the same point you were.
Can you elaborate more? What features are you talking about?
I can criticise Mac OS endlessly and was even actively downvoted here when I reported facts about my personal experience with bugs on a newly purchased M1 (you can dig through the history if you wish). I was shocked at how much, in my opinion, the quality had degraded.
People here were even claiming that my specific machine was faulty and to blame. This ‘faulty’ machine, by the way, works to this very day without any hardware problems, except for flickering of the monitor, which in my opinion is an idiotic design decision (using PWM).
So as you can see I am very critical of macOS and find using Windows after macOS completely unbearable. And yet your comment has puzzled me.
When I was learning macOS after years with Windows I remember that certain things were like you say “impossible to discover”. Yet with all that being said I find other GUI implementations in GNU/Linux not any better.
Perhaps I’ve missed something in KDE? What did I miss? What does “doing anything beyond the basics” mean in your usage? What kinds of tasks/workflows are those?
How is KDE better at those?
1. I've never heard of your app, and I follow this space closely. It's extremely competitive.
2. I clicked to check how much you're asking for premium. It's reasonable on an annual basis, but I only saw an option for a monthly subscription. That's not happening.
3. I'd be more likely to give you a donation than to pay for a premium version for an app like this. That model has worked well for Obsidian. I don't think getting early access to features is worth much, but a lot of people have paid them to support their work.
4. My observation is that a closed core product with a large open source ecosystem around it gives folks a reason to pay.
> but I only saw an option for a monthly subscription.
What do you mean? The annual subscription doesn't work? Or something else?
> I'd be more likely to give you a donation than to pay for a premium version for an app like this.
Well, for my Notes app, I can understand. Hopefully you will find my new app a breath of fresh air in this space. Sign up to get an update, please! I would love to hear what you think when the app is out.
> My observation is that a closed core product with a large open source ecosystem around it gives folks a reason to pay.
Overall, I think people will pay if an app gives them enough value.
There are definitely some challenges with getting people to pay for OSS, but I wouldn't be so quick to blame that all on what happened in this situation. I think you'd be experiencing the same (or worse) if your app was closed source for reasons I'll mention in a minute.
I'm the exact type of person who would be in your target market and likely to pay, but aside from having never heard of your app (which may be your biggest problem), these are the reasons I'm not buying. I'm only sharing this in the hopes that it helps you, not trying to make you feel bad or anything like that. Overall I think you're awesome for making your app open source, and I have a mountain of respect for you for doing that.
1. You're targeting a very crowded and competitive market where top-quality options are 100% free. This would be a huge challenge regardless of whether you're open source or not.
2. Your app is still very young/new. Winning in such a saturated area is going to take some time to even build awareness, let alone get mature/feature complete enough to get people to change.
3. With something like notes, you're looking at people with substantial pre-existing sources. Migration is not trivial either, so there's a big barrier to entry there for people to use your software.
I wish you the best, but I think you'd only be harming yourself by closing your next app. You definitely lose people like me for whom my notes are incredibly important and I won't risk losing them to a proprietary tool. Open is mandatory for me to even consider it.
I'm pretty happy (maybe even in love) with Logseq, but the performance is definitely a downside and the idea of a C++ app is highly appealing to me, so I'm going to give your app a try.
Side note: If your GUI was compatible with logseq's on-disk format, that would make your app quite compelling to me
Notes has been quite a success for me. With over 1,300,000 downloads[1], and being featured in awesome tech websites[2]. Just on Ubuntu alone there are 7000 weekly active users (that have downloaded it from the Ubuntu Store).
My new app Plume is going to be a big improvement over Notes. First, your data is and will always be as simple as Markdown/plaintext. My new block editor is based completely on plain text while still offering advanced views (with a minimalistic syntax). It's also faster than any other in its category, even faster than native apps on macOS! I'll soon put benchmarks on the website.
All I can say is, sign up to get an email update and check it out when it's ready. Most of the features will be free, with only some advanced features paid.
BTW, what do you mean by "logseq's on-disk format"? That it's local-first? If so, Plume is also.
Well dang! I didn't realize it was that big. That is impressive and definitely changes my opinion a bit. It's quite unfortunate to have such low paid numbers with that many downloads.
By Logseq's on-disk format I mean that it is local-first, but it's a stricter subset of Markdown with a handful of extensions. The strictness is because Logseq parses all the Markdown into an in-memory graph database, and in order to make that work with Markdown, the top level has to be a list and needs some IDs and other identifiers in order to properly link/embed blocks and pages and such.
Thanks! With Plume I use a tree data structure (just with parent child relationships), although in the future I would probably use Boost::ptree.
The thing is, the underlying data structure is just plaintext, so having an ID for each block will make the plaintext look out of place (say, if you wanna move your files to another app).
But I'll add some special syntax to allow this feature for specialized blocks.
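To illustrate the idea (purely a hypothetical sketch, not Plume's actual code): a block tree with plain parent/child ownership can round-trip to ordinary indented bullets, which is exactly why per-block IDs feel out of place in the file.

    // Hypothetical sketch of a parent/child block tree that serializes to
    // plain indented bullets -- no IDs, so the file stays readable anywhere.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Block {
        std::string text;
        std::vector<std::unique_ptr<Block>> children;

        void write(std::ostream& out, int depth = 0) const {
            out << std::string(depth * 2, ' ') << "- " << text << '\n';
            for (const auto& child : children)
                child->write(out, depth + 1);
        }
    };

    int main() {
        auto root = std::make_unique<Block>();
        root->text = "Parts of a Neuron";
        for (const char* part : {"Dendrite", "Axon"}) {
            auto child = std::make_unique<Block>();
            child->text = part;
            root->children.push_back(std::move(child));
        }
        root->write(std::cout);   // prints the same structure as nested bullets
    }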
> so having an ID for each block will make the plaintext look out of place (say, if you wanna move your files to another app).
Indeed, this is definitely a problem for Logseq. For example, here are some of my recent Logseq notes. This is one page where you can see an "ID" (represented by `id:: <guid>`) being created. Then helpful images/diagrams are displayed. This is then referenced later:
- Parts of a [[Neuron]]
id:: 64f4bdc3-29de-4b6c-bc41-20a5f052badf
- Dendrite: https://en.m.wikipedia.org/wiki/Dendrite
- {:height 411, :width 718}
- Axon: https://en.m.wikipedia.org/wiki/Axon
- {:height 471, :width 718}
Then, in a different page, this is referenced by ID. In this case it is "embedded", but you can also reference it, among other possibilities:
- #vocabulary [[Brain Myths Exploded]] #Notes
- {{embed ((64f4bdc3-29de-4b6c-bc41-20a5f052badf))}}
- Encephalization Quotient (not a good measurement)
That's actually not too bad, and similar to what I plan to do. They only create an ID for a block/bullet-point when necessary.
I just noticed Logseq doesn't support selecting and editing text across multiple blocks. It's indeed not easy to do, but Plume does support this, I believe it's an essential feature for a text editor. Notion took a long time until they supported this[1]. I implemented it in less than a month using C++ and QML, a testament more to the strengths of Qt than to my own skills as a developer.
> People using OSS software taking it for granted that the UX is going to be subpar...
After using the abomination known as Windows 11, I'd argue that the UX of Linux is well beyond par, especially when par for Windows 11 is shoving Microsoft ads at you in almost every application.
I am in an interim CTO position, and the year before that, the role I acted in had "Principal" in its title. I still write CommonMark files to render them to HTML and PDF with Pandoc.
There are dozens of Markdown note taking apps out there. There are half a dozen Notion clones out there. I know which one will overwhelmingly win mindshare due to a special feature but I won't give away why (the feature itself is more valuable than a Notion clone or even Notion itself).
You might be able to get paid for providing something valuable to others. Sometimes hard work is involved in providing value to others, but performing hard work doesn't necessarily mean you have provided value to others.
> Open source needs to be exposed to risk from end users.
Why do open source applications need that? You said yourself that users of open source software accept the subpar UX, so what problem are you trying to solve? Making open source development somehow 'riskier' for programmers without simultaneously providing more reward would only drive programmers away.
Users of OSS software accepting the subpar experience is the problem. You don't think they look over at their macOS counterparts and wish these polished apps were available on their Linux machines? I used Linux as my primary device for 2.5 years, then switched to macOS and never looked back. Now I want to bring the same polished user experience to other operating systems using Qt. I believe my latest attempt [1] is the closest to getting there without the awful sacrifices of Electron apps.
I really like the functionality of your notes app, but I don't think emulating Apple UI design is the solution to subpar UI on Linux.
Linux users expect basic desktop conventions like toolbars and dropdown menus, not mobile design concepts like cramming everything into a hamburger menu.
Have you even tried using my app? It doesn't have any of the "mobile design concepts" that you're talking about.
> cramming everything into a hamburger menu
- Import/Export
- Check for updates
- Start automatically
- Hide to tray
- Change database path
- About Notes
- Quit
Is that EVERYTHING?
No. It's not. I don't like using hamburger menus for everything myself. This is why the options are minimalistic. Look at other Gnome apps using hamburger menus; no need to go as far as "Apple UI" (which actually discourages hamburger menus for desktop apps).
Plume's editor is a block editor I wrote completely from scratch using C++ (model) and QML (view). This allows me to create an advanced editor that can have complex elements (like Kanban, columns, advanced image components, etc.) within the text. The performance using Qt C++ compared to Electron is a massive improvement over current web apps (like Notion and the like), and actually, it's even faster than native apps on macOS (Craft, Bike) and apps made in Flutter (AppFlowy).
How's it different from Obsidian or any other note-taking app that has a larger community? I'm sure OP put in lots of hard work and it looks good, but it's not apparent to me why I should abandon my existing platform (stickiness) for another one. There are a lot of note apps out there already, which probably makes it hard to sell.
Thanks for sharing. Appreciate your fighting the good fight, and sorry for the results so far. Maybe apply for a Futo scholarship for one? But definitely close source for Plume.
No need to be sorry, really. I'm just learning how to navigate this world. I've been pretty lucky so far. Notes is one of the top results in Google (for the keyword "notes"). So I get a lot of traffic and a nice passive income. But it's not something I can fully live on (plus it's unstable and I hate serving ads).
On your ‘.com’ site mentioned on the GitHub page, the download button does not offer (nearby) paid options. Why? Why are they only in the separate Pricing link? Why not present the same choice at download time?
I think it could be beneficial, because it would be easier for those who wish to pay. They would not have to take an extra step looking for that opportunity.
I personally do not like it (to put it mildly) when a button says one thing and delivers something else. So I think renaming Download to ‘Download options’, where you give the choice to download for ‘free’, ‘donate’, ‘pay for pro’ etc. as you like, is acceptable and feels good enough. I do not see why that would reduce the number of downloads, but every prediction is always just a prediction and nothing is better than a real-life test.
Personally I think Joplin looks better and this looks like a poor macOS clone that lacks many of the features like markdown support or synchronizing across devices including mobile. IMO if Joplin’s problem is the UI, or even just something OP is passionate about, why not contribute and open a PR? Why is the pattern “I care about this one thing so I reimplemented it in a new app from scratch”?
In principle maybe, but all present implementations fall a bit short. Why can't I use my mouse to position the cursor in any shell? Placing the cursor with the mouse is something people commonly do in terminal editors like vim or emacs; the efficiency arguments against it fall flat because this is entirely optional and besides, many people use laptops with either a track-point directly in the middle of the home row keys or with a touchpad directly below the keyboard.
And there's no technical reason it couldn't be done; years ago I once did it as a proof of concept. It just hasn't made it into any major shell yet... besides eshell, anyway. It does work in eshell.
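If anyone's curious, the mechanism is just standard xterm mouse reporting, nothing exotic. A rough sketch (a fresh illustration, not the original proof of concept; assumes a POSIX system and an xterm-compatible terminal) that grabs one click and reports the column a line editor would jump to:

    // Rough sketch: enable SGR mouse reporting, read one click, print where it
    // landed. A shell's line editor could use the column to reposition its cursor.
    #include <cstdio>
    #include <termios.h>
    #include <unistd.h>

    int main() {
        termios saved{};
        tcgetattr(STDIN_FILENO, &saved);
        termios raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);          // read bytes as they arrive, no echo
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        std::printf("\x1b[?1000h\x1b[?1006h");    // turn on click reporting (SGR mode)
        std::fflush(stdout);

        // A press arrives as: ESC [ < button ; column ; row M
        int button = 0, col = 0, row = 0;
        if (std::scanf("\x1b[<%d;%d;%dM", &button, &col, &row) == 3)
            std::printf("\r\nclick at column %d, row %d\r\n", col, row);

        std::printf("\x1b[?1006l\x1b[?1000l");    // turn reporting back off
        std::fflush(stdout);
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    }

Mapping that column back onto the edit buffer is the fiddly part, but there's nothing about a readline-style editor that makes it impossible.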
Anyway, it's not really a big deal, not important enough for me to pursue, but I do think it counts as a counterexample to disprove the "peak UX" claim. Command line interfaces in principle are great, but in practice we're doing it by emulating hardware that went obsolete decades ago and piling kludges and hacks on top of that to make it feel a bit more modern.
Another example: there are a few ways to inline images in a terminal emulator, but they're all kind of shit in their own ways. Graphical thumbnails have clear utility when listing some directories; every serious file manager supports them, but doing this in a terminal emulator means kludgy hacks.
> That’s only the last line, not quite what they stated
Then I wasn't clear, I am talking about using the mouse to position the cursor in the current line being edited. If I am writing a long command and realize I made a typo 50 characters earlier in the second argument, I should be able to click in the second argument to reposition my cursor. And yes, I do use vi-mode and know how to do this the vi-way with just the keyboard, but Vim also supports using the mouse for this and there's no reason shells shouldn't as well.
> The kind of person that would use it doesn’t use a terminal.
Nonsense. A shell is not a full-screen ncurses app like the others you've mentioned, and you probably wouldn't want it to be, since it has to carry the scrollback buffer.
You're the only person to implement this slower/problematic/nifty design in five decades, on a FLOSS operating system. Just a guess but maybe your attitude is also part of the problem.
Utter nonsense. The CLI lets you do complex custom things that you sometimes can't do with a GUI. But it is pretty awful from a UX point of view. Terrible discoverability, terrible UI, very unfriendly, no contextual help, etc. etc. I can't think of a worse interface from a UX perspective.
The manual exists for a reason. I find it much easier to type `man program` then `/whatever` until I find what I'm looking for rather than scanning the screen for the correct icon. There are also some conventions and after a while you get to a point where it's intuitive to use any new program. Just like with a GUI and icons, what does the wand do?
I'm sure there are better articles if you search more. If you have time for a book, I highly recommend "The design of everyday things". If you don't agree with me that book will change your mind.
You get used to it as in "used to how CLI programs are usually designed". Just as you get used to what the select icon in a GUI does. Without any prior experience in either, both seem like alien technology, but coming from GUI to CLI you have the bias of already knowing one when comparing.
People claim macOS is intuitive and everything works the same, but I'm handicapped in it. I just call it inexperience, not bad UX.
Regarding the manual, do GUI programs have a unified way to access guides and documentation? Can I scroll to the bottom with G and find the configuration files and example uses?
The difference is, you can teach a user who's never seen a computer before to work with a GUI in an afternoon, maybe a weekend -- and if the GUI is well-designed, the skills they learn on one program will transfer well to others.
Meanwhile, there are XKCDs about how even experts can't remember how to work tar: https://xkcd.com/1168/
Before 1984, computer UX was, universally, sucky. As Alan Kay put it, the Mac was the first computer worth criticizing.
Ease of use isn't the only metric for "peak UX" though, composability and how the programs interact should also be taken into consideration. Selecting a filter on instagram is very easy, with the caveat that you're limited to those filters alone. The subject is more complex than clicking buttons vs typing commands.
That assumes that you can already articulate the action you are looking for in the program's vocabulary. For people who think more pictorially, they may have a sense of what they want but benefit from visual cues.
I don't target specific options, but different words of what I'm trying to achieve, crop/cut/extract as a pretty bad example. It takes me to the description of the option I'm looking for where it will also have useful extra options to combine.
I can't say the same for GUI icons, many times hidden behind dropdowns. If I hover it will say select, then I have to find how to crop that selection, behind another dropdown, probably a few levels deep. And none of them searchable. If I need help there's no one place to find a guide, my best bet is google or the website.
For both an intuition has to be learned from use, but GUI people act like they were born with computer literacy. I used to think the CLI was stupid as well, but now that I've seen the light I can't imagine going back.
A shell is a REPL. Though I do agree there's something to using one with good features (e.g. I would completely understand someone using exclusively zsh just for the better tab-completion).
That's what people who gained their technical competence in a CL environment always think: we're always most productive in the interfaces we are familiar with. I'll agree that most technically competent people probably prefer using the command line for many tasks, but correlation != causation. It's really easy to mistake our own preferences for being objectively good, generally... which is why designers exist, why most user-facing commercial software doesn't require reading a single line of documentation, and why most FOSS projects-- almost entirely indifferent to or hostile towards deliberate design-- are only useful to people who have a working mental model of what's happening under the hood.
I was strictly a vim guy for a decade and a half... my coworkers' occasional snickers merely steeled my resolve, because I knew I was doing the pure and efficient and technically correct thing by sticking to barebones vim. Lots of vim stuff and ex commands are beneficial to my workflow, but after a coworker convinced me to try out some more modern options with vim modes, I realized that my attachment to vim as an editor was driven by my emotional investment in how many times I struggled with vim's clunkiness. I wore it as a badge of honor. I thought it gave me cred. In reality it just showed how utterly inflexible I was. The JetBrains editors, for example, offered more functionality out of the box than I could practically cram into my vim setup, yet were completely configurable, and the features were generally intuitive and discoverable. I didn't have to look up how to do some relatively obscure thing in ex-- there was probably a menu option for it, so I just didn't have to keep it in my head. Sure, vim is and always will be my default editor for light editing, and there are obviously people whose use cases are perfectly satisfied by using it. However, assuming for years that I was more technically competent for doing intensive professional work in complex codebases using that clunky old editor was plain old self-indulgent arrogance, like most other nerd badge-of-honor things are.
I've been using command line environments daily for decades, and usually head there first to do many tasks, but many of these archaic tools only persist because of a reverse eternal-September problem. By the time someone gets technically advanced enough to start influencing how our technical environments work, the curse of expertise obfuscates their downsides and they mistake their own comfort with something for it being objectively good.
I'm not aware of anyone who is making a distro who should instead be making an application. Maybe LinuxCNC if you squint?? But that has very specific kernel requirements which are arguably best served by a custom distro.
Any desktop environment is effectively just an application that's trivial to install via one or two clicks or commands. It never made sense to me why there are derivatives of derivatives that just ship a different DE.
Stop listening to people who say to stop listening to people trying to tell you what you should do or not. It is not all relative, there are good and bad choices.
Not hacking on a distro because someone thinks there are too many is one of those bad choices.
If you actually manage to get a distribution rolling with a viable community around it, it means enough people have decided that there weren't too many.
I recently switched from Mac to Linux so that I can use beefy refurbished machines as my daily driver. I've installed Linux on lots of different machines previously, but this time it's not just a hobby.
Ubuntu has great support for my hardware and peripherals, and pretty much everything works as expected, but the app store feels unfinished and forced.
I'm interested in checking out Mint (also Debian) and Arch, but it feels like there's a lot of software written with Ubuntu in mind, so I'm wary.
Linux Mint is based on Ubuntu. You can use all Ubuntu packages just fine. It is really just a strictly better Ubuntu.
There is a Debian-based Mint version as well, but that is more of a safety net so that they are not fatally dependent on the Ubuntu people.
Though these days, even using exotic distros isn't that problematic, as you can always use AppImage/Flatpak/Snap if your native package manager doesn't have the specific app you need.
Mint looks awesome! It seems like it was designed by people that actually compute. My main gripe with Ubuntu was the Snap store, but embracing Flatpak/Flathub seems like a highlight of Mint.
There is almost nothing written for ubuntu that won't work on arch or really most current distros. When I ran arch I did have to get familiar with assumptions people make due to ubuntu, but this would never be more than patching a config file or configuring some environment variable. And those are useful and transferable skills.
hmm. thanks for sharing. that's the kind of stuff I don't want to do though. i already have my own software to write and use a wide array of tools. i can't afford to lose a morning on that kind of stuff. plus i want to be able to tear down and rebuild without a ton of custom config
I used AWS Workspaces for a while and that ran Fedora. It was nice, but I wasn't blown away. I seem to remember not being able to find some yum packages where there were apt counterparts.
> Too often they fall into the trap of creating more Linux distributions. We don’t need more Linux distributions. Stop making Linux distributions, make applications instead.
I mean, kind of? But "shipping the developer's computer into production" implies a lot of things that really shouldn't be happening even with containers - the images you ship in prod should always come from git (or such) by way of CI, and shouldn't include e.g. compilers. So I'd argue we managed to mostly get the upsides of just shipping the dev's machine but without the downsides.
There are plenty of blog posts about how containers in Linux are still dependent on external resources, unless care was taken when creating those images.
It's not an attractive scene for GUI developers on Linux. Gnome, Qt, and Electron all have some unappealing aspects. Many people who get the itch to make a Linux GUI app will immediately be discouraged by having to choose between them.
What's wrong with Qt? It works well enough if you want to write C++ or Python. Electron works on Linux the same way it does on Windows. GTK has some more issues on other platforms, but mostly works on Linux too.
I should have said "Many people" instead of "Anyone" in my original comment -- edited.
I would use it myself, but a lot of developers don't want to use C++ (too complicated) or Python (too slow, and its dynamic nature becomes unwieldy for large projects). Languages in between - like Swift and C# - are the sweet spot for desktop GUI apps.
Imo, Flutter is one of the best ways to make native Linux applications at the moment. It can look just like a GTK, macOS or Windows app, and you have a more enjoyable experience.
Don't know it, but from the website it looks mobile-first and intended for simple apps like Spotify. Don't get the impression that I could build something like Blender or Kdenlive with it. Maybe my impression is wrong, but I would hesitate to use a framework for a purpose that the developers consider fringe.
Have you tried WebUI? It's like an ultra-minimalist version of Electron. It uses available browsers instead of shipping its own. You simply write your app in HTML/JS, and everything beyond that (e.g. file access) is delegated to C with a simple callback mechanism.
I'm definitely for more guided resources on making Linux apps from idea -> release, which I wish TFA was more of.
I think there's a common misconception that if the developers of ten similar apps would just pool their resources, then one great app would result. Fragmentation certainly can be a problem but I feel like this is some kind of corollary to the mythical man-month.
Segmentation has been tough for a Linux newbie like myself. I hope there can be greater unification some day so that applications have one install type, and come with greater predictability.
It's still annoying even if you're experienced with Linux. Especially if there's multiple install methods for the same app with different levels of support.
Flatpak is the closest thing thus far, but certainly hasn't "won" yet.
This is actually a solvable problem. You provide a single store interface where you install flatpaks and traditional packages. Linux Mint does a pretty good job of this.
With Arch you can basically rely on system packages and the AUR for 99.9% of everything.
Either way, wrapping an interface around one or more methods is a fairly trivial matter.
It doesn't solve the multiple install methods issue.
Fedora, for instance, already has this via Gnome Software, but provides a drop-down if multiple sources are available for an application, so it's still fragmented. Even if you removed that and had some sort of hierarchy, one app might have a better experience via Flatpak whilst another might be better via RPM.
Until developers consolidate it's really not solved, IMO.
Somebody I know (not in tech) wanted to switch from Windows to Linux and I said:
"Which one?"
It's the same as Mastodon.
"Which instance?"
The answer is usually the biggest / most popular instance or distro, but not many applications the masses use are on Linux, just as there are not many interesting people on Mastodon for people to switch over to.
Applications are the barrier to mass adoption here for Linux unfortunately, and the paralysis of choice is the double edged sword for the massive amount of Linux distros.
Thousands of years of effort spread across many distributions that not many people would use.
The Linux ecosystem isn't as splintered as people think it is. Yes, there are technically lots of distros, but most of them are really just derivatives that don't add much.
The vast majority of Desktop users just use Debian or Debian-based distros like Ubuntu, Linux Mint and so on.
So in the end, it is just a matter of whatever pre-installed software you get. It used to be that vanilla Debian was a bit hard to use but I don't think that is true anymore. (Though I still recommend Linux Mint for the absolute "just works" experience.)
For 90% of users, they are not going wrong using Debian or one of the popular Debian-based distros. After that, it is basically just a matter of specialized needs.
Want to make administering your system a hobby and part of your personality? Have some Arch.
Need professional support? OpenSuse.
Need reproducible builds? Nix/Guix.
Need a minimalist distro that runs on potatoes? Puppy Linux and whatever else.
Wayland vs X11? Systemd? If you know these words, you are already way too deep in the rabbit hole.
This is true for the user, but not good enough for developers, who unfortunately need to stay on top of all of this to build functioning software. If users report bugs you can’t repro, you may have to dive into all of these.
And that’s the point of the article really, to try and get more app developers onboard.
Developers don't need to test in every distro, or even all the most popular distros. All they need to do is test in one distro. It will probably work in most of the rest, and linux users expect that if something breaks they'll be the ones tweaking things to make it work. Sometimes you'll get bug reports from those users saying "I had to do X on distro Y for reason Z" and you can act on those reports, or not. It doesn't matter.
Honestly, linux users don't expect more. There's tons of proprietary software out there supporting linux in this manner and it works out fine. Linux users are easy to please. The hardest people to please are the developers with Windows/Mac backgrounds who mistakenly believe they're expected to test on every distro and rightly balk at that.
I'd personally say "use mastodon.social", but your advice is just as good, simply because it is solid advice, offers a single option and points people at where to start.
Ubuntu is good to start¹, and if people say "but it's bad at thingy foo" or "it has default bazoombas! default! on!": no-one said it's where you must now commit yourself forever. Distros aren't religions (and even religions can be switched); they are just stuff that sits on your computer. Your next re-install or next machine might be anything else.
¹edit: to be clear: it's also good for experienced people or grumpy bearded linux-veterans like myself (24 years in and counting). I use Ubuntu. It's hardly configured even, just the defaults (well, my shell and vim have some 20+ years of config tweaks).
Because they already have an opinion on which Linux or Fediverse thing to use.
Pay it no mind. People are allowed to be wishy-washy with their opinions. Maybe it's just a conversation starter, or maybe they enjoy debate / feeling things out through words and discussion.
> Thousands of years of effort spread across many distributions that not many people would use.
That's a lot more effort towards improving the commons than we'd have gotten if doing it your own way were discouraged. Effort is not fungible, when you squash somebody's creativity it stays squashed for a while.
Unfortunately it still takes a lot of effort to actually switch and not just poke around to see what's what. Most things do work these days but inevitably you will have to open the dreaded terminal (or be forced into runlevel 3 because 4 now fails as your new TV doesn't play well with your gpu)
It’s because the answer is “just pick one”. Many people have already made the decision before you, and unless you have special requirements, you’ll probably be fine.
There's also wails.io, which imo is a much better alternative to Electron and uses native APIs for OS functionality. Added benefit: Wails is also cross-platform.
This does ignore the fact that there's not one UI toolkit that you can target and that will work on all distros, and you certainly can't target Qt, GTK, ETK, etc. all at the same time. I've been making a cross platform GUI library and it's been a pain point on Linux (even though I use Linux!)
I've got an Electron app in the works which I'm planning to sell to Windows and Mac users (probably through the platform's app stores, using in-app purchases).
Is such a thing possible with the Linux market? Can I sell my software through stores? Is IAP a thing?
Saying that you should target all Linux distros while showing a picture with 30 different Linux distributions, most of them used by at most a few hundred people, is so ironic.
I recently tried to create an executable from a Python script. Mac and Windows worked wonderfully; on Linux I had massive problems when I tried to run it on a different system than the one it was packaged on. Linux caused the most problems, and in the end I decided it was not worth it; 99% of users are on Windows or Mac anyway.
I like it, but I don’t think it speaks to the audience it wants.
Why not show a 12-15 line python app using customtkinter, wxwindows, qt, etc? It currently says something like:
—
To target all platforms, here is the first step! Choose gnome or KDE!
You can write apps for all distributions as a beginner if you choose the right cup!
Step 2: ???
Step 3: target all platforms by targeting elementary OS or touch!
—
Alternatively, a true and recent experience I had:
—
The other night, I thought, “I wonder if I could get a fonzie-like cartoon to popup as a taskbar-buddy, like adware in 2001.”
I looked up some examples, then had copilot do a sketch of the relevant pyqt5 api calls.
Ayyy… about 30 minutes later I had a PyQT5 app running in XFCE, and had extra time to make `fonz-ai` play “Aiiii.ogg”, and play the jump the shark music, when dismissed.
Testimonial: I didn’t think it would be that easy, esp without VB6/warez ;)
^This is the kind of app we need to believe in again^ :P
(PS: it worked in plasma and gnome too, because standards. Also archlinux/arcolinux.)
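(PPS: for comparison, the C++/Qt Widgets version of that kind of tray buddy is about the same amount of code. A hedged sketch only; the theme icon and strings below are placeholders, and the real thing was of course PyQt5:)

    // Sketch of a minimal tray buddy with Qt Widgets; icon and strings are placeholders.
    #include <QAction>
    #include <QApplication>
    #include <QIcon>
    #include <QMenu>
    #include <QMessageBox>
    #include <QSystemTrayIcon>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        app.setQuitOnLastWindowClosed(false);                // live in the tray only

        QSystemTrayIcon tray(QIcon::fromTheme("face-cool")); // stand-in for the cartoon
        QMenu menu;
        QObject::connect(menu.addAction("Ayyy"), &QAction::triggered, [] {
            QMessageBox::information(nullptr, "fonz-ai", "Aiiii.");
        });
        QObject::connect(menu.addAction("Quit"), &QAction::triggered,
                         &app, &QCoreApplication::quit);
        tray.setContextMenu(&menu);
        tray.show();

        return app.exec();
    }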
If you are going through the trouble of targeting all Linux distributions, why not just target all platforms by using a framework like Flutter or React?
I've seen few apps these days whose performance is bounded by the choice of UI framework.
If they're slow, it's usually because of design choices, like making common user interactions wait to retrieve data.
Yes, games are an exception. But for the majority of software that is concerned with displaying text, it doesn't matter a hill of beans if you use a cross platform framework. We're far past the days of slow Java AWT apps.
> Unlike other platforms, Linux is a very diverse target. There are hundreds of Linux distributions, some more popular than others. Once published though, applications can generally work everywhere.
> There are well documented software packaging and distribution systems which enable developers to get their applications into the hands of users.
> Each developer framework and Linux distribution will have their own recommended route to users. When you’re ready to share your creation, the development documentation will signpost their suggested packaging guides.
“Your apps will generally work everywhere” and “there are hundreds of distros and you have to do work to support each one individually” are essentially contradictory, no?
Most of them work the same way though. In reality there are like 3 big distributions to target: Arch, Debian and Fedora, and from there it will trickle down. Linux and the *BSDs generally work the same: put the executable in the $PATH and you're good. Where exactly that is may differ, but not by much, and these days the FHS is adopted pretty much everywhere I know of.