Hold on mate, isn't that what Java was supposed to solve? I remember before the days of Electron, when I was a wee lad in the 2000s, all cross platform apps were Java.
Look at Ghidra, it's a Java app for Windows, Linux and Mac. The "holy trinity" of operating systems, covered with one language and framework.
So what happened? Did devs forget Java exists and feel like reinventing the wheel, but worse this time?
Java simply has a much higher barrier of entry. Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.
Electron isn't just cross platform, it is cross platform based on technologies (HTML, CSS and JavaScript) that also, by a huge margin, have the largest number of developers available.
> Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.
What external dependencies does Java need that's not in the JDK itself? I have an app with Mac and Windows installers (and thus bundled JDKs), and it also runs on Linux (via a fat jar); I tested it on Ubuntu, but for the life of me I couldn't figure out how to package it properly. It was more complicated than I cared to invest in at the time.
As for the barrier to entry, I feel the same way about the web. I find the JS and web ecosystem utterly overwhelming, which is one reason I stuck with Java over something like Electron, and the installers/footprint of the app are smaller to boot.
For Linux, I'm using jpackage to package my Java software into a .deb (x64 architecture) file. For all the other Linux variants, I have a .tgz file that contains the jar file, libraries and icons of the application.
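Roughly, the invocation looks like this (the app name, paths and version here are just placeholders, not my real setup):

    jpackage --type deb \
        --name MyApp \
        --app-version 1.0.0 \
        --input target/lib \
        --main-jar myapp.jar \
        --main-class com.example.Main \
        --icon packaging/myapp.png \
        --dest dist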
The problem I have with Linux is named at the end of the website: "Sharing your creation". It's pages and pages of documentation that are not relevant to packaging your application, where you can spend hours of work without finding what you want, or finding out that it doesn't work for you because, for example, it's not on GitHub.
Fortunately, jpackage was able to fix that for the .deb format.
Instead of working on more documentation, working on a better and easier-to-use packaging tool would help.
The JRE itself is an external dependency that you need to bundle because it is not part of most Linux distributions. And even if there is a JRE installed, it is not guaranteed to be able to run your Java application.
> What external dependencies does Java need that's not in the JDK itself?
I mean that it doesn't come with Java itself; you as a developer need to pick a UI framework, and not all of them actually work all that well cross platform or will get you an actual modern interface.
Edit: I should also note that the threshold for entry I am talking about is for people just generally starting out. There simply are way more resources available for web-related development than there are for Java.
Also, when you start bundling your JDKs I am not sure you can talk about a smaller footprint anymore.
Well, Swing is still bundled with Java. Netbeans uses the "Flat Look and Feel" and looks ok to me. I find Swing a lot more work compared to FX.
JavaFX used to be bundled with Java, but was removed. Some JDK distributions bundle FX like it was before, and adding FX to a new project is simple and straightforward. Maven packages it nicely, and it includes the platform specific binary parts. If you can use Log4j, you can use JavaFX. Onboarding onto FX is not a high bar.
I can not speak to SWT.
There's several examples of "modern" UIs in FX, I can't speak to any of them, I don't pay much attention to that space. It's imperfect compared to the web, but not impossible.
It was. Even before it was more "bundled" with the JDK than "part of Java".
But, to be honest, that's a real nit. It's a standalone dependency, it's 4 lines in a POM file, it doesn't drag the internet with it, and it only relies on the JDK. So, while it's a large subsystem, it's a "low impact" dependency in terms of side effects and complexity.
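Something along these lines (the version is just an example; Maven pulls in the right platform classifier for the machine doing the build):

    <dependency>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-controls</artifactId>
        <version>21.0.1</version>
    </dependency>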
> it's a "low impact" dependency in terms of side affects and complexity.
I wish that were true in my experience. But we have struggled to support {macOS, Windows, Linux} x {x86_64, arm64} with JavaFX and one .jar for our application.
My point about 4 line dependency is to point out that the barrier to entry into FX is low. What you are doing I would consider unconventional, as demonstrated by all of the hoops you're jumping through to achieve it. Packaging, yes, is still a bit arcane at this point.
My project, https://github.com/willhartung/planet packages macOS and Windows installers, and can be run as a fat jar on a Linux machine (tested on Ubuntu). You can look in there to see my POM file and my build scripts. They're much simpler than what you're doing. I don't have a package for Linux; as I mentioned earlier, it was just a bit too confusing to figure out Linux packaging for my tastes, so I punted. If there was crushing demand for it, I'd look into it deeper.
None of those artifacts are "cross platform". It's not a single artifact for all platforms; they are platform specific. I build the Mac one on my machine, and the Windows and Linux versions on VMs. Currently, the vision for Java distribution is to bundle the runtime with the application: use jlink and the module system to narrow down your JRE, and jpackage to combine them into an appropriate platform artifact. jpackage needs to be run on each target OS. I do not have ARM versions of any of my code yet.
If you want to ship a cross platform jar, then it's probably worth your time to require a JDK with FX already installed. Azul does this, I think there are others. Then FX, and its platform specific binaries, are no longer your application's problem.
Also, there is a project, https://jdeploy.com that offers tooling and infrastructure to distribute native FX bundles, it even offers automatic updates. It will install its own JDK in its own directory structure to run your applications. If you have multiple applications, it will share the JDKs among them. It's quite clever, and perhaps worth considering depending on your requirements. I chose to not do that just to make my projects as simple as practical for the end user and myself.
I'll be fair, getting to this point was not drag and drop. jpackage and jlink can be fiddly to get started with. Documentation can always be better.
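For anyone who hasn't touched them, a minimal jlink + jpackage pass looks roughly like this (module names and paths are placeholders; if you use FX, the JavaFX jmods also have to be on the jlink module path):

    jlink --module-path $JAVAFX_JMODS \
        --add-modules java.base,java.desktop,javafx.controls \
        --strip-debug --no-header-files --no-man-pages \
        --output build/runtime

    jpackage --type dmg \
        --name MyApp \
        --input target/lib \
        --main-jar myapp.jar \
        --runtime-image build/runtime \
        --dest dist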
> What you are doing I would consider unconventional
It wasn't before JavaFX was removed from the Oracle JRE. That is my point. JavaFX used to be a trivial dependency, but now it is quite painful in otherwise identical configurations, definitely not "low-impact."
> If you want to ship a cross platform jar
We do. Isn't that the point of Java, "write once run anywhere"?
This program is also used as a library in autograders. We do not want to distribute 5 versions of each autograder for 2-4 assignments. The autograder should be distributed as 1 jar. Undergrad TAs are creating that jar and may not have knowledge of complex CI pipelines etc.
> then it's probably worth your time to require a JDK with FX already installed.
That is not appropriate here. This is an educational tool, and students are enrolled in other courses that use Java frequently. We should be able to use the same JRE that students already have installed — it is unreasonable to require installing a different third-party JRE to run a digital logic simulator. It also adds another hurdle for freshmen/sophomores who may not have a natural ability for juggling different JRE installations. (Source: We tried requiring Azul and it was painful for everyone.)
> I do not have ARM versions of any of my code yet.
We have >900 students in this class, so it is necessary to support M1/M2; in fact, a large portion of our students had M1/M2 laptops. It sounds to me like you could just provide a fat jar in your case, actually. Supporting aarch64 is where we hit problems with our fat jar[1], since the aarch64 native libraries have the same name as the x86_64 libraries.
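Concretely (exact library names approximate), the collision looks something like this: the x64 and aarch64 OpenJFX graphics jars both carry natives under identical file names, so a shaded fat jar can only keep one copy of each.

    javafx-graphics-21-mac.jar          contains libglass.dylib, libprism_es2.dylib, ...
    javafx-graphics-21-mac-aarch64.jar  contains libglass.dylib, libprism_es2.dylib, ...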
To summarize my point: yes, you can make the build/install process more convoluted and avoid this problem. But we have an installation flow that has been battle-tested by thousands of students for 13 years (download the circuit simulator .jar and run it) and that we have no good reason to abandon. The combination of the arrival of M1/M2 and JavaFX getting yanked from the JRE has made supporting our existing (extremely reasonable) flow nothing close to "low-impact."
Makes sense. I worked a bit with Java years ago, but never with GUI stuff. Most of what I remember about it was drowning in boilerplate and being really good for coordinating a lot of developers around a big well-organized codebase. I probably couldn't write hello world from scratch without reference if I was being held at gunpoint.
> If you need to create a "just works without dependency b.s." experience in Java, you use the correct tooling for that, jlink.
At which point you are including a footprint similar to what Electron does by shipping Chrome. I mean, you must have realized I was talking about the inclusion of the JRE and whatever else is needed to make a Java application run on a system as a standalone application.
So I am honestly not sure what you are arguing besides semantics.
when i see a java application i think, hmm, this is likely going to be bloated (but not necessarily) but for sure it's going to run.
if i want to create a cross platform application where i don't even have to think about testing on multiple operating systems, then java is going to be a serious contender.
and if i have to choose between an app written in java or electron, i'd probably pick the one in java.
so yeah, i don't understand what happened here either.
Java is great for making huge well-organized codebases with a lot of developers, especially if you've got good tooling support or a rich ecosystem of existing code to work with. Outside of that... If it was a good development ecosystem for native gui-based apps targeted at end users, why wouldn't the preponderance of native user-facing apps be written in Java, anyway? Ask nearly any experienced mobile app developer if they're more productive in Java on Android or Swift on iOS-- it's not even close. Sure, some of that is the OS itself, but a whole lot of it isn't. On the desktop, the one time I tried to make something with Swing I wanted to Fling my computer out the window. Clunky.
It's about branding. Swing and JavaFX look like other desktop apps (aka not cool to a lot of designers). And native desktop development has a high barrier of entry (ever tried Qt, AppKit or Win32?). Electron is easy, but it's shoehorning a document model into software interfaces.
Yeah, the architecture of electron is absurd, but it's important to not relegate UI flexibility to mere aesthetics. For most of my career, I was a back-end web developer, but more recently I've done a lot of interface design after getting some formal education in it. The overwhelming majority of developers I've worked and interacted with conflate aesthetics and interface usability. Heck, even I did before I really started digging into it professionally. I think it's because applications that have experienced designers making good, usable interfaces will also likely hire visual designers to do aesthetic/branding work, and especially in waterfall environments, developers get it all handed to them as a "design." And for many reasons I will (uncharacteristically) not rehash here, FOSS lacks both.
However, a good interface and a pretty interface are not the same thing-- both are communication mediums that communicate through the software interface, but visual/branding/identity designers communicate things to the user about the brand as a marketing device, and interface designers figure out how to communicate the software's features, status, output, etc. with the greatest efficiency and reduce unnecessary cognitive overhead. Branding and identity is a very specialized form of design that's usually done by design houses-- even huge companies with big teams of designers often contract this work out to specialists. They might go so far as to recommend certain animations for interaction, but you don't want them designing your interface. In small companies, the designer will probably have to implement their design to conform to the design document, but they're using tools like gestalt, alignment, color grouping and type to create information hierarchies, existing expectations for layout and functionality, etc. that tell the user what they need to know as effectively as possible, and how to act on that in the ways they need to.
A good example of the power of interface design is in many dark patterns. You can simply have a plain system-standard dialog box asking if a user consents to some creepy analytics that nobody really wants, but instead of "OK" and "Cancel" in their normal spots, put "Off" in bold letters where "Ok" would normally be, and "Consent" in non-bold letters where "Cancel" would normally be, and I'll bet you at least 60% of users would choose "Consent" having only skimmed the familiar pattern. That experience isn't branded or styled in any way-- it solely uses juxtaposition, pattern expectations, and context to influence users behavior.
When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does. This is very counterintuitive for most developers because of the curse of expertise. We have a working mental model of how software works on the back end and consider the interface a tool to expose that functionality to the user. To users, the interface is the software, and if your design is more informed by the way the software works under the hood than the way a nontechnical user thinks about solving the problem they're trying to solve, it's going to be very frustrating for everyone who isn't a developer. Developers like to talk about marketing as the primary reason commercial software is king, and it's definitely a factor, but developers aren't magically immune to marketing, and you can't get much more compelling than "Free." To most users, the frustration of dealing with interfaces designed by and (inadvertently) for developers is worse than paying for software-- hence FOSS nearly exclusively being adopted by technical people.
There are many factors influencing adoption, including prior experience (I'm using it at work) and network effects (that's what my friends use). What native controls offer is seeing the whole OS as one thing. But with the advent of branding in software interfaces, people are expected to relearn what a control is for each piece of software (Spotify vs Music).
> When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does.
I have not encountered a UI kit that does not expose the 2D context to create a custom UI. But designers always want to redo native controls instead of properly using them, creating only the necessary ones. I don't believe anyone can argue that Slack UI can't be better.
Common practices do not equate to widespread approval. Most developers don't like electron-- even the ones that build with it much of the time-- but it's everywhere. Designers' opinions are no more generalizable than developers' opinions.
As someone who's studied and professionally practiced interface design, I can assure you that there's nothing magical about system UI elements to the vast majority of users. Developers often focus on that because managing them is such an important part of developing interfaces, and developing with them is way easier... but in design, it's a small slice of the components that make a real difference. A lot about usability, as is the case with any other communication medium, is extremely nuanced, and native UI kits suck for creating that nuance. It's usually possible, but once again, especially now that HTML/CSS/JS isn't the accessibility catastrophe that it used to be, the extra effort to get polished results using native stuff just doesn't pay off.
As a long time developer before I became a designer, I am intimately familiar with the sort of blind spots and misconceptions developers have about interface design. Having a working mental model of software in your head significantly shifts the way you work with computers. Developers see interfaces as a way to expose application state, data and functionality to end users, but to nontechnical end users, the interface is the application. That is not a trivial distinction. Many things most end users prefer chafe most developers. Most things that developers prefer are absolutely unusable to most non-technical end users. And most importantly, most developers assume that their technical understanding makes them better at knowing how interfaces should be designed, when in my significant experience, it makes us worse at it. The curse of expertise obviously shows up in documentation and education-- they're the two most obvious communication mediums in software. Most developers don't even consider that the interface is the most visible and consequential communication medium in any GUI application, and going based on your gut instinct about what that should be works as well as going based on your gut instinct about making a tutorial for nontechnical users. It doesn't.
I'm not saying that native controls are better just because they are native, or that electron suffers from some defect that impairs usability. With equal time and effort, software built with native controls will be more usable. A random user will not be able to distinguish which is which, but I dare say that the native one would feel better if the only difference is what was used to build the interface.
When designing with native controls and using common patterns of the OS, you considerably lessen the amount of effort required for users of the platform to learn that application. Most non-technical users only use one platform. Creating the same interface for two or more platforms impairs users on every platform. And I include the web as a platform.
The JRE itself is an external dependency that you need to bundle because it is not part of most Linux distributions. And even if there is a JRE installed, it is not guaranteed to be able to run your Java application.
So yeah, if you redefine your problem to "run on systems with the right JRE" then Java makes things "easy" (your program will still stick out like an unpolished turd). But if you can just require stuff like that, then you can also require the right dependency versions for native programs.
Java is objectively terrible for writing good apps on modern personal computers. The one platform that did adopt it (android) had to practically rework the entire byte code and VM as well as the set of APIs for writing apps to make it work.
Well, so I can only tell you as much as I know and understand. Some of this pulls in some outdated information too.
So, JVMs and languages that abstract the underlying machine are always going to have overhead. The original interpreted, stack-based JVM model is really bad for performance because you can't do great optimizations on the code without a great view of the operands that are being defined and then subsequently used; on top of that, you have to either JIT or interpret code, which also has overhead. This is why Android's original Dalvik VM started by converting the Sun byte code format to a register based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize native code: that means code caches, and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate it). Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience, even though it's platform agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.
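As a rough illustration (register numbers arbitrary), the same statement comes out very differently under the two designs:

    // Java source
    int c = a + b;

    // Classic JVM bytecode: a stack machine, operands are implicit
    iload_1    // push a
    iload_2    // push b
    iadd       // pop both, push a + b
    istore_3   // pop the sum into c

    // Dalvik: a register machine, operands are named directly
    add-int v3, v1, v2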
I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.
Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was, so there's that too.
I think it's hard to beat the tide that is the web as a content and app delivery system. The web is also getting all the billions in investment from every massive faang.
> So, JVMs and languages that abstract the underlying machine are always going to have overhead.
Well, so JavaScript and WebAssembly aren't that great either in the end?
> The original interpreted, stack-based JVM model is really bad for performance because you can't do great optimizations on the code without a great view of the operands that are being defined and then subsequently used; on top of that, you have to either JIT or interpret code, which also has overhead.
What a paragraph. But it's kinda false.
WebAssembly, you know, is also a stack-based virtual machine.
Javascript might not be a stack-based virtual machine, but you're interpreting it every time you run it for the first time. How is that faster than bytecode? It isn't.
In fact, modern Javascript is fast specifically because it copies the same workflow as the Java HotSpot JIT optimizer - detect code hot spots, compile them to native code, and run that instead of VM code.
> This is why Android's original Dalvik VM started by converting the Sun byte code format to a register based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize native code: that means code caches, and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate it).
Nope, that is totally not the reason. Dalvik was done because it was believed that you needed something that starts faster, not something that runs faster.
Those are 2 different optimization targets.
It was pretty well known since the start of Dalvik that Dalvik had very poor throughput performance, from 10x to 2x worse than HotSpot.
The reason why we don't have Dalvik anymore on Android is that it also didn't start that much faster either.
That of course is not because register machines are worse either, but because nowhere near enough optimization work was done for register type VMs compared to stack type VMs in general.
> Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience, even though it's platform agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.
Ok, but does your favorite, non-web GUI framework use the GPU, and use the GPU correctly at all?
Even on the web it's easy to "accidentally" put some extremely expensive CSS transformations and animations and waste a whole bunch of GPU power on little things.
> I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.
You're mixing things here. "Sun products" were very expensive UNIX workstations and servers. Not things for your average Joe. Those very expensive Sun workstations and servers ran Java fine.
Java itself is a very weird "Commoditize Your Complement" ( https://gwern.net/complement ) attempt to commoditize this exact very expensive hardware that Sun was selling.
From Sun. Marketed at very high expense by Sun. A self-inflicted self-own. No wonder Sun no longer exists.
> Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was, so there's that too.
C# not having a nice GUI is another story, that of Windows-land never having had anything above the pure Graphics Device Interface that stayed stable for long.
You're living in the past. Applets and Flash lost against the HTML/JS/CSS stack and Oracle owned up to it. Applets are terminally deprecated now.
Edit: admittedly, one of the reasons for that was that the sandbox was indeed prone to security holes. Also, the developer ergonomics of the SecurityManager were unsatisfying for both JDK and app developers. Good riddance.
Golang's only consistent advantage over Java is lower latency on compilation, startup, and GC. OpenJDK will eventually level the playing field with Project Valhalla. In terms of FFI and language features Java has already caught up. And faster startup can be achieved with CRaC.
The crucial difference is that these technologies are embedded differently. Java Applets had access to dangerous APIs that had to be restricted by the SecurityManager. Also, the JVM was installed externally to the browser, turning it into an uncontrollable component which made the browser vulnerable in turn.
The newer technologies were designed from the beginning with a well-defined security boundary and are based on a language that was designed from the beginning to be embedded. Everything is implemented within the browser and can be updated together with it.