However... "Low CPU and memory footprint (...) memory usage is under 20 MB for a Hello World program": am I the only one who still thinks this is huge? (-> https://tonsky.me/blog/disenchantment )
That blog post does speak to me, but 20MB for a program with a truly functional, modern UI doesn't outrage me that much.
IMO there's a tradeoff: we could be writing all our programs in C, still. But it would be enormously difficult and there'd be way more bugs. On the other end of the spectrum we can be lazy, use web tech everywhere and never optimise our ballooning JS codebases. This feels like it's at least somewhere in the middle.
OT but a bone to pick with that article:
> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT.
That isn't what my text editor is doing, though. It's doing autocomplete suggestions, linting code as I type... all sorts of things we never had a couple of decades ago and are huge productivity boosters. Sometimes I feel like people forget that.
However, most of that should happen in the background; it shouldn't affect your actual text input speed. Auto-formatting is more along the lines of something that could legitimately block, but we're not dealing with LaTeX-scale problems for that to be significant.
Even putting that aside, just showing text on a screen is much slower on modern PCs than on older ones, since you have a far more complicated graphics stack, with latency added at several steps.
I remember an article a couple of years ago where someone rigged up a camera to measure key press to screen update on different machines and the results were eye opening.
Google Maps is a truly horrendous interface. It's pretty, it's well built, but it's fundamentally a horrible UX.
I find myself frequently bamboozled until I stop and try and determine which mode I'm in. Navigation behaves differently to browsing, which behaves differently to searching, which behaves differently to viewing an individual result. I'll be thrown from one mode to another and never feel in control of the app.
I hate it with a passion. Clicking a photo of a café in the main view is horrendous, because it opens neither the actual photo nor the café's overview screen. You have to be on the photos screen for the photo to open full screen.
Pressing on some place while in any mode other than the top-level one.
The back button experience.
And the bottom drawer: no idea when I should pull it up or down, or whether I'm currently in it.
One thing that's illuminating is to go to chrome://settings/content/all and sort by "data stored" to see how much local storage websites use. Stuff like vice.com needing 100 MB of space on your hard drive for who knows what purpose.
Wow, thanks for the tip, it never occurred to me to do that.
There were websites I had never heard of using hundreds of megs. Also acehardware.com for some reason using hundreds of megs.
Also GitHub "community" forums and Travis "community" forums (I don't use Travis anymore) using hundreds of megs. Are some websites just caching the entirety of every page you look at in local storage? How rude.
Another fascinating look into text on computer screens is this article: https://gankra.github.io/blah/text-hates-you/. Just goes to show that text rendering is actually absurdly complex, unless you drastically restrict the problem space.
This sometimes makes me think we're still in the skeuomorphism phase of text. It gets ever more complex to render realistic-looking UIs with proper illumination and textured faux leather, until you just admit you're rendering a UI on a screen, and then you're back to colored rectangles. We're still trying to render ideas and words resembling handwriting and printing-press characters, until we embrace screens and render Arial, or even monospaced fonts, which are perfectly readable (I do that all day long in my code editor).
I’m pretty sure that none of these latency benchmarks are showing that text input speed is affected. You can definitely input more than 1 character every 40ms in modern “slow” text editors.
I wrote 3D visualization apps in 1998 using FLTK and OpenGL and the (statically linked) binary was less than 20 MB. I think my desktop had all of 64MB of RAM. It was snappier than a modern TODO app on Electron and far, far easier to write.
Among other things we've done to ourselves, we've got higher-resolution displays, and we have GUI apps that can scale to different resolutions seamlessly. We have support for high-DPI displays. We have fonts with sub-pixel rendering for sharper, easier-to-read text. Speaking of font rendering, we have Unicode and internationalization support, so that people who read and write Arabic, Chinese, Japanese and other languages that don't use the Latin alphabet can use their native language in file names, dialog boxes and anywhere else they might want to. We have better support for screen readers for the blind. For people who aren't fully blind but have vision problems, we have the ability to make text and UI features larger dynamically. We have better multitasking support, including process isolation to keep a badly behaved application from crashing the entire computer. We have better security at the OS level to prevent malicious applications from taking over the whole machine.
That's a big part of what we've done to ourselves. And this makes computers better for a whole lot of people.
How much time and expertise did it take? Did it automatically work on all OSes? Could a person without a tech background slap something like this together in a weekend? Did it have accessibility built in? Could you reuse it on the web?
Yes, in 1998 there were full-fledged IDEs which allowed for GUI development: Delphi, Visual C++, Visual Basic, PowerBuilder, NeXT Objects, ... I would actually say that it was easier to develop apps in 1998 than it is now.
As mentioned in the other comment, emacs does those things, and so does Vim (with plugins of course).
I moved from sublime to atom to VS code, but eventually settled on Vim because I was able to get the same features (that I used) while getting almost instant response. A feeling that has completely changed how much I enjoy writing any sort of text.
Hey, thanks for this. I'm a long-time Vim user, but I've never gotten around to adding in some slick IDE-like features (I've made half an attempt at getting code completion working, but often lose interest if it doesn't work the first time).
The intro looks great; I will definitely check this out.
This thread has been specifically about responsiveness. Latency, not throughput. Also, the bit about no plugins was a little white lie. I really meant "no plugins for IDE-like functionality (language server, etc)". While many IDEs offer basic vim keybindings, I don't know any that would let me import my .vimrc wholesale and work exactly the same. I'd love an IDE that embeds neovim as the text editor.
Anyway, if you don't care about all that, you can get a similar effect by turning off intellisense (or equivalent) in your IDE while you write, then turn it back on at the end to get what you just wrote to compile. I do this sometimes in Android Studio.
There is an IDE that embeds neovim as a text editor, kind of.
The VSCode Neovim extension makes Neovim run as its backend, while giving you all the IntelliSense etc. of VSCode. I can't tell you exactly how it affects responsiveness as I only toy around with it, but it does feel noticeably better in some aspects... yet maybe occasionally glitchy?
Anyway, it's pretty interesting, especially if you're already using Neovim.
The VsVim plugin for Visual Studio makes an attempt at supporting everything in the .vimrc (or _vimrc) file. Compatibility is not 100%, so certain things just fail, but it's a lot more than just basic key bindings.
No. These things should happen concurrently, probably on different threads. If I quickly type "foo", I don't need to run autocomplete and code analysis between each character. That would be horrible.
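A minimal sketch of that split, in Python (all names here are illustrative, not any real editor's API): keystrokes mutate the buffer synchronously, while analysis is debounced onto a background thread so it never runs between two characters.

```python
import threading
import time

class Editor:
    """Toy model: typing never blocks; analysis runs off the input path."""

    def __init__(self, debounce_s=0.05):
        self.buffer = []        # updated synchronously on every keystroke
        self.analysis = None    # last completed background analysis
        self.debounce_s = debounce_s
        self._timer = None

    def type_char(self, ch):
        # Fast path: append and return immediately.
        self.buffer.append(ch)
        if self._timer is not None:
            self._timer.cancel()            # typing again resets the window
        self._timer = threading.Timer(self.debounce_s, self._analyze)
        self._timer.start()

    def _analyze(self):
        # Stand-in for linting/autocomplete: fires once typing pauses.
        text = "".join(self.buffer)
        self.analysis = f"{len(text)} chars"

ed = Editor(debounce_s=0.05)
for ch in "foo":
    ed.type_char(ch)    # three instant appends; no analysis run in between
time.sleep(0.3)         # idle long enough for the debounced analysis to fire
print("".join(ed.buffer), "/", ed.analysis)   # -> foo / 3 chars
```

Real editors use fancier machinery (incremental parsers, out-of-process language servers), but the principle is the same: the input path never waits on analysis.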
Interesting how what was seen as slow and bloated back then has become the opposite today. "Eight Megabytes And Constantly Swapping": yep, back then, 8 MB was unthinkably large for a text editor...
Another example is the Enlightenment window manager. It was considered a little heavy, but good looking. But because there was a large hiatus in development, it got "stuck in the past" and now, it is one of the lightest there is.
Well, with vim, it probably depends more on your terminal and tmux, while Emacs renders its own graphical frames.
But those wars are long over anyway, these days. One might just as well fight over whether the monolith on Earth's moon is better than the one on Europa, or vice versa.
I might sound like those weird language evangelists, but...
> we could be writing all our programs in C, still.
You don't need to use C, there are other languages. For example a hello world in Free Pascal[0] (a natively compiled language with no runtime or other dependencies, which supports object oriented programming and has RTTI rich enough to implement automatic object serialization, semi-automatic memory management, strings that know about their encoding, etc) is just 32KB.
Some time ago I wrote Fowl[1], a mostly complete recreation of the OWL toolkit that came with Turbo Pascal for Windows; its demo program is around 80 KB.
Of course for a more realistic (and MUCH easier to use and develop with) approach, you'd need something like Lazarus[2]. A minimal application in Lazarus is 2.18 MB. This might sound too big... and TBH it is, but the size doesn't grow too quickly from there. For example, a profiler I wrote recently for Free Pascal applications is... 2.16 MB (yes, smaller. Why? Because I replaced the stupidly huge default icon with a smaller one :-P and without the default icon a minimal application is 2.05 MB, so the profiler added around 100 KB of additional "stuff").
> It's doing autocomplete suggestions, linting code as I type... all sorts of things we never had a couple of decades ago and are huge productivity boosters
FWIW we had those: Visual Basic (or even QBasic) would format your code as you typed it, Visual Basic 6 and Visual C++ 6 would provide auto-completion (VB6 even for dynamic stuff), etc. The only issue with C++ was that sometimes it wouldn't work around complex macros.
But modern editors do a bit more, still no excuse for being that sluggish. Lazarus does pretty much everything you'd expect from an IDE with smart code completion (e.g. things like declaring variables automatically, filling method bodies, etc) and code suggestions yet it runs on an original Raspberry Pi.
Now I'm not saying that you should stop using whatever you are using, or that you should code on a Raspberry Pi, or even switch to Free Pascal / Lazarus (which honestly is far from free of issues), but I think you're overestimating what tools do nowadays, and many people are so used to running slow and bloated software that they take it for granted that things should be like that and cannot even imagine things being better.
Visual Assist Tomato wouldn't have existed if Visual C++ did what you said. I use Rider (mostly) and I don't even know how I'd program without all the features it adds: auto-import, code cleanup, code optimizations, memory allocation and boxing highlights, assembly decompilation, Unity engine integration. I remember the days of using Visual C++ and banging away trying to get Qt to not look ugly. I don't miss anything about the development process from 15 years ago.
Visual Assist improves on what was already there; I never claimed that the functionality was the best it could have been (if anything, I wrote the opposite), only that it existed.
But it's also an interesting thing to mention, because in the last two C++ jobs where Visual Assist came preinstalled on my machine, I always disabled it: it was slowing down Visual Studio too much, and the functionality VS provides is more than enough. VA does provide a bit more, but for me it wasn't worth the slowdown.
Last time I used VAT was in 2013, and it wasn't an issue on my i5 with an SSD. I would rather put my money into faster hardware to keep up with the demands of modern tools than live without them.
If you're using Visual Studio for C++, I'd highly recommend ReSharper C++. If you develop for Unreal Engine, Rider for Unreal C++ is literally unreal; it makes me not hate writing Unreal C++ code.
I have used C++ since 1993, moved to Visual C++ around version 6.0, and never used Visual Assist or any of the JetBrains products that slow Visual Studio down to IntelliJ levels of performance.
One of my key learnings with alternative languages is to always use the SDK tools from the platform vendor, everything else comes and goes, while playing catch up all the time.
I don't understand the reasoning against modern tools on the grounds that they're slow. If you can type out a class in a tenth of the time but your IDE is 40% slower (as a hypothetical impact), that is a net gain in output. In reality it's nowhere near a 40% slowdown to use these features on computers made in the last 5 years. Anecdotally, I am a slowish typist (40 wpm), and because of this writing code used to be a long process for me. With modern tools I can produce a monstrous amount of code in a short amount of time.
Visual Studio has been pretty modern, especially when compared against traditional UNIX offerings.
Anyone measuring typing speed as productivity measurement is doing it wrong.
Writing code is around 50% of daily activities.
Visual Assist does nothing for me when I have to write documentation, draw architecture diagrams, attend meetings to decide roadmap items, give demos at customer review meetings, ...
On top of that, none of the OS SDK replacements offer better UI or debugging capabilities across the platform tooling, they just play "catch-me if you can" with what I can get on day 0 of each OS SDK release.
JetBrains wants to be Borland, yet they don't sell any of the platforms, or languages.
I guess the Kotlin and Android marriage will help them, as they are trying to make it their "Delphi"; let's see how it plays out if Fuchsia ever happens.
I don’t think JetBrains is going anywhere soon; been using their products for almost a decade.
I don’t measure my productivity by how much code I can write, that was just an example.
The way I work designing systems and architecture, I have already made the solution in my head, and the "coding" part is basically just trying to get that info out as fast as possible. I have something similar to eidetic memory, but I am so ADHD that what gets remembered can be random or have stuff missing. I remember all code I've ever written, seen, or thought about, and tools that let me basically brain-dump this info greatly improve my production, leadership, confidence, and architectural designs.
It is about the feedback-response loop. If I type a character and it doesn't appear (what feels) instantaneously, I start to feel physically sick. I have built up some tolerance, but when I tried Julia with Atom three years ago, I gave up after 15 minutes (Atom had too much latency, if I remember correctly, and Julia as well).
Fermi guess: 60 wpm, that's one 5 character word per second, it probably doesn't make sense to give a new prediction more often than once per character -- that's 200ms to chew on autocomplete and linting and fancy animations. Meanwhile, please render the specific character out of your lookup table in 16ms, thank you.
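Spelling out that Fermi estimate (the 60 wpm and 16 ms figures come from the comment above; the rest follows):

```python
# Time budget between keystrokes for a fast-ish typist.
wpm = 60                     # words per minute
chars_per_word = 5           # conventional "word" used for wpm figures
chars_per_sec = wpm * chars_per_word / 60    # 5.0 characters per second
ms_per_char = 1000 / chars_per_sec           # 200.0 ms between keystrokes
frames_of_slack = ms_per_char / 16           # 12.5 display frames at ~60 Hz
print(ms_per_char, frames_of_slack)          # -> 200.0 12.5
```

So even for a fast typist, the editor has roughly a dozen whole display frames of slack per keystroke for background work, on top of the one frame it needs to paint the character itself.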
Do modern graphics drivers for X or Wayland hand off font rendering to the GPU? They probably should -- it's maybe 100 or so small textures per font-selection, 150KB or less prerendered with 3-bit alpha, and maybe 100 loaded up into graphics RAM at a time -- 10MB is nothing, really.
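Rough math on that guess (the glyph count, alpha depth, and resident-font count are from the comment above; the 64x64-pixel glyph bitmap size is my own assumption, chosen so the per-font figure lands near 150 KB):

```python
glyphs_per_font = 100        # small textures cached per font selection
px = 64                      # assumed 64x64-pixel glyph bitmaps
alpha_bits = 3               # 3-bit alpha per pixel
bytes_per_glyph = px * px * alpha_bits // 8          # 1536 bytes
kb_per_font = glyphs_per_font * bytes_per_glyph / 1024   # 150.0 KB
fonts_resident = 100         # font selections kept in GPU RAM at once
mb_total = fonts_resident * kb_per_font / 1024       # ~14.6 MB
print(kb_per_font, round(mb_total, 1))               # -> 150.0 14.6
```

Even with generous assumptions, a hundred resident font atlases stay in the low tens of megabytes of GPU RAM, which supports the "10 MB is nothing, really" order of magnitude.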
They certainly could. Games can do 16 ms frames while calculating full physics simulations.
It's just that modern text editors are written by people who prefer not to write them that way, mainly because it's quite hard, the modern OS doesn't usually fit well into this rendering framework for multitasking, and it takes more effort too.
Much easier to rely on a UI framework which adds overhead. The expectation is that the user probably won't care, and will prefer that the software be more feature-rich.
Help me build Cosmopolitan Libc. We're using modern compilers to build programs that are tinier and more portable than anything developers even as far back as the '70s or '80s were able to produce: https://justine.lol/cosmopolitan/howfat.html I built a LISP interpreter too, which makes Altair BASIC look bloated by comparison: https://github.com/jart/sectorlisp
I will say that 20 MB isn't too shabby if we judge the OP's project by Electron standards. If NodeGUI were pruned a bit more, it wouldn't be too far off from where Go is right now for Hello World on the console. Although one does have to take into consideration that it assumes external dependencies are available, such as V8, I assume? Does it statically build? One thing I'm curious about: I looked at the yarn lock file and couldn't find Qt, so I have no idea where it comes from.
8-bit BASICs were extremely tightly coded and packed a lot of functionality. They had working garbage collection for strings, with useful string processing built on it. Programs could be interrupted to safely return to the prompt, and errors which terminated the program informed the user of the line number, e.g. "illegal quantity error in 210". The internal, tokenized representation of programs could be interactively edited to update the program, which could be saved to disk. BASICs also had floating-point support, with functions like LOG, EXP, SIN, COS, TAN, ATAN, ... Also arrays, including multi-dimensional ones.
I see there is a metacircular evaluator in the repository (lisp.lisp), but I don't see where that is pulled into the image.
In the BASIC heyday, there existed compilers for BASIC (some of them written in BASIC). These didn't ship in the ROM images either, they were third-party apps.
I can't remember the precise mechanism, but IIRC the base NodeGUI package downloads and builds a minimal Qt, and then links into it using NodeGUI/qode, which hooks up the Node and Qt event loops.
If we're only interested in one of those platforms (rather than the full Mac/Win/FreeBSD set), how small could we get programs down to? Is forgoing certain platforms supported?
Also, there was a really fantastic question asked in the last Cosmopolitan thread, but it wasn't answered; what's the answer to it?
If you uncheck the boxes on the "how fat" page I linked above, then static binaries for a single platform usually end up being 4 KB. Most of that 4 KB is padding NOPs to page size. The question you linked is now answered.
Off topic, but what did you use for emulating and visualizing your boot sector (blinkenlights.com) here [0]? It looks much more convenient than QEMU, which is what I used last time when doing ring-0 stuff on x86.
@tonsky is right about the huge memory footprint (in terms of RAM), but 20 MB for a Hello World GUI program is totally fine today [0]. In my humble experience it is really hard to fit below 10 MB with a GUI app that only displays "Hello World" text, while the next 10 MB are often consumed by extra resources like fonts, rendering caches or custom textures.
It might be a bit rude to point this out, but the same person saying the aforementioned words about huge memory usage promotes Skija [1], a Java-based GUI toolkit, which is even worse in memory consumption than Electron (per the jb-compose example apps, even in release builds).
That actually sounds kind of impressive, considering [0] says a minimal standard Qt hello world app is ~5 MB and stock Node.js is ~100 MB.
I'm curious what kind of magic is happening where Node.js is in the picture yet the executable comes out at ~20 MB. Is it simply a bundle of wrappers for most of Qt, with Node.js required to be installed separately? Is it purely UPX compression?
Poked a bit more. It seems the 20 MB claim is indeed mostly about runtime memory, and doesn't translate to equivalent disk usage.
The entry point in their starter kit calls qode (a fork of Node), which talks to Qt via N-API (Node's API for C++ bindings). It can be distributed as binaries via the *-deployqt toolchains.
So for distributable binary size on disk, we're probably looking at something in the ~100 MB range with no UPX shenanigans.
So, tl;dr not as magical as I initially assumed. The trade-off between "bloat" vs ability to use web paradigms feels like a reasonable one. Overall pretty cool stuff.
A couple of years ago we switched an Electron/Cordova application to the native OS Webview (WKWebView on macOS and iOS, etc). The UI was written with Inferno + MobX and the backend in Swift/C#/Kotlin depending on the platform.
One thing is, as someone who develops cross-platform software, you really want font rendering and Unicode support to be exactly the same across platforms, so that users don't have a save file that looks one way when opened on a Mac and another way when opened on Windows.
This means that you can't use the OS-provided facilities (as they all have different metrics) and have to bundle good font rendering engines (if you care about non-US-ASCII users); that already means ~7-8 megabytes at the bare minimum.
20 MB is less than 0.25% of my desktop machine's memory, and 2% of a 2010-era netbook's. 20 MB, while much larger than it has to be, is tiny even by the standards of decade-old computers.
RAM is cheap and plentiful. If you don't use it, its value is almost zero (the "almost" comes from OS-level caching of files and CPU-level cache misses of code).
This is ridiculous. 20 MB is so far from "all of it" on any modern computer that it makes me question if you're using something from the 90's. How are you even on Hacker News?
The reduction of the complex trade-off between memory use, CPU, disk, development environment, ease of deployment, and the dozen other variables that go into a choice like picking what toolkit to use to "don't be a dick" is so absurdly simplistic. It's a trade-off - not a single-axis "good or bad" decision.
Moreover, 20 MB of memory usage is going to be an acceptable trade-off for the majority of HN users, who skew webdev, not embedded.
> This is ridiculous. 20 MB is so far from "all of it" on any modern computer that it makes me question if you're using something from the 90's.
If I look at today's top-selling computers on Amazon for my country (France, the world's 6th-largest economy), the top two models both come with 4 GB of RAM (a Chromebook and a Win10 machine).
The windows one will already use ~2gigs just for the OS. That leaves 2 gigs of RAM for your apps.
And that's for computers being sold today - they will still be in use in five years.
Meanwhile, you can fit 100 20MB apps into 2GB. I've never seen a normal desktop user use more than 10 graphical applications at once. I've never used more than 20 at once myself.
It's pretty clear that 20 MB of RAM as baseline memory consumption for a graphical application (we're not talking about a runtime that might be used for a bunch of background processes; that would be an issue) is a non-issue for the vast majority of users of such programs.
But if a browser like Chrome already uses 1.5 gigs, there will be a big difference between the app that uses 500 MB (your average Electron app) and the app that uses 50 MB (your average Qt, GTK, wx, FLTK... app). One will swap and make the whole system slow; the other won't.
Conversely, to the end user: if they could pick a bit more resource usage (knowing they can close your app, or heaven forbid uninstall it) versus an extra feature, what should the dev team prioritise?
> RAM is cheap and plentiful. If you don't use it, its value is almost zero (the "almost" comes from OS-level caching of files and CPU-level cache misses of code).
I have fond memories of my DOS days and the simplicity inherent in a single-tasking environment. However, nowadays operating systems allow more than a single application to run at the same time, and each application should play nice with system resources. So even if RAM is cheap and plentiful, it doesn't automatically mean that every application should feel entitled to it.
(and also even during the DOS days we had TSRs which had to be RAM conscious too)
>and each application should play nice with the system resources, so even if RAM is cheap and plentiful it doesn't automatically mean that every application should feel entitled to it.
Yes, you shouldn't be a bad neighbor, but the OS will generally move things around to accommodate you as necessary. RAM is an afterthought for most of these applications for a reason.
This attitude is like buying a sports car and never redlining it.
> Yes, you shouldn't be a bad neighbor, but the OS will generally move things around to accommodate you as necessary.
I'm not sure what you mean by that. The OS will not "move things around" to the point where the resource abuse won't be noticeable; all it can do is swap stuff to disk, perhaps compress some RAM and maybe unload any cold code (though code doesn't take that much RAM), and all of that takes time, slowing down the system.
> RAM is an afterthought for most of these applications for a reason.
Yes and that reason is disinterest from the application developers for RAM usage.
> This attitude is like buying a sports car and never redlining it.
Sports cars have nothing to do with this; I do not see the relevance.
At least on Linux, swap is a great tool for shuffling out things you statistically probably won't use again, while retaining the ability to transparently recall them if it turns out your system guessed wrong.
If you find yourself in a situation where your OS is actually shuffling things around for your system to function, you will find your performance and desktop experience have gone to absolute dog shit. It's entirely likely that the user will hard-reboot the machine because they conclude it has frozen.
The only way to have a nice desktop experience on Linux is to ensure you have enough RAM for all the things you intend to run at once, which means having at least 8-16 GB of RAM, and don't run too many apps at once from people who think unused RAM is wasted RAM.
> don't run too many apps at once from people who think unused RAM is wasted RAM
The baseline memory usage of Svelte NodeGUI is 20 MB. 400 instances of that can fit into 8 GB of RAM. Don't you even try to tell me that you've run 400 separate GUI applications at once.
Let me repeat it again: unused RAM is wasted RAM. This is a fact. It does nothing when neither you nor the OS is using it - and the value of the OS using a byte of RAM for caching is tiny compared to the value of you using it for an application you care about.
The above also has nothing to do with wasting RAM. If you've spent any significant amount of time developing programs for actual users (read: not programmers), you'll know that development is a complex, multi-variable tradeoff - and one of the biggest trade-offs is RAM usage for performance, so if you solely optimize for minimal RAM usage, you'll always (except for the most trivial of programs written specifically as a counterexample to this claim) end up sacrificing performance.
The wastefulness of 1 GB of RAM usage varies wildly depending on whether you're running a video editing program on a large file (hey, that's not that bad!) or a simple textual chat application. 20 MB for a graphical tool is an acceptable tradeoff in the vast majority of use-cases.
Any time you actually write something like that, delete the sentence if you want anyone to read what you are saying. I made no assertions specifically about NodeGUI. The idea I was responding to is:
> the OS will generally move things around to accommodate you as necessary
Because this isn't accurate: performance goes to hell when applications contend for RAM. If you haven't noticed it, you probably have enough RAM not to have that issue, not because your OS "moved stuff around", at least on Windows/Linux; I have never owned a Mac.
> Most users have enough RAM unless they're running on some absurdly low <4 GB device, which is just nowhere near as common these days.
Citation needed. My experience with people outside the tech bubble is that they don’t know what RAM is, and will not consider it when purchasing a computer. Most of these people now also live primarily on a phone or tablet too, because their computers are too slow.
In case this discussion is still about a 20 MB RAM NodeGUI application: I don't think this is a comparable situation, because DOS was really not about developing cross-platform applications with a Turing-complete styling language. It was about building text-mode tools that ran on x86 and nothing else. I swear even the "bloated" Windows 10 will have absolutely no trouble running a Win32 console application at < 1 MB RAM consumption, but that's not what this is about.
My reference to DOS was tongue-in-cheek; I made it because DOS only allowed a single program to run (ignoring TSRs) at any time, so programs using all the memory wasn't much of a problem.
Those don't seem contradictory to me. RAM is cheap and plentiful, and getting 64GB of RAM is easy, but Apple just doesn't want to sell that config right now. You can still get a 64GB config on another system.
But the value is also zero if you're using extra for no benefit. Actually it's negative, because it prevents other programs from using it.
You should try to use all your RAM, yes, _but in ways that are actually useful_. The OS can always use leftover RAM for caching frequently used files if you don't have a better use for it.
It's probably to be put against Electron's Hello World resource usage. If this project manages to cut it in half, that's a good step. Maybe competition will cut it in half once more. Considering the number of Electron-based apps around, it could save quite a few GB worldwide :)
Aren't you also listing all the non-Qt libs here? X11, libinput, PulseAudio, D-Bus, GStreamer, OpenSSL... E.g. here on Linux, for a static build of Qt (which links against both the X11 and Wayland platform plugins, and uses embedded libpng, HarfBuzz, FreeType, etc.), a QWidget hello world comes out at 14 megabytes.
> It's like comparing Java application without taking JVM into account.
No, it's not. The "equivalent" of the JVM would be the libQt5{Core,Gui,Widgets...}.so files; but here we are talking about operating-system-provided libraries, so it is like comparing with a Java application including the JVM, plus for instance all the MS Windows system libraries, or all the macOS frameworks like CoreFoundation, etc.
OK, so in this context the app is X MB. If all distros ship those Qt core and UI libs, then for all intents and purposes it takes X MB. If some don't, then let's discuss it (but if you have to link against GStreamer and OpenSSL to build a hello world, the issue is not entirely on the distros' shoulders).
The thing is nobody actually uses hello world programs, by the time you build a real application the difference in RAM usage between that and a good Electron app for example blurs significantly.
What some people actually build with Electron doesn't really say much about what people _can_ build with Electron.
You can't just compare Emacs with Spotify here. I can't even scroll a list in Spotify without seeing it disappear on me momentarily; that says more about Spotify's engineers or project managers than it says about Electron.
Teams and Slack address roughly the same problem, and I'm seeing wildly varying numbers reported by you. Without knowing anything about how you are using those apps, those numbers are meaningless in my opinion; and if you think they are meaningful, then clearly you can achieve different results despite addressing the same use case with the same technology stack.
Also, there's "chat" and there's "chat": there's a reason ~nobody uses IRC anymore compared to Slack. They are not the same thing, and it's not just that Slack is easier to use.
> What some people actually build with Electron doesn't really say much about what people _can_ build with Electron.
No, it does. Anyone can build incredible apps on any tech, given infinite budget and time. What matters is how the average app behaves, and the average Electron app is much worse than the average Qt app, for instance.
That argument makes no sense to me, do you think you'd change your mind on Electron if access to it were restricted only to very intelligent and motivated people who made very good apps then?
The average Qt app would probably be close to the average Electron app if Qt attracted the same kinds of people, i.e. Electron is basically just easier to use and/or the developers picking it think they are getting more value out of it.
> That argument makes no sense to me, do you think you'd change your mind on Electron if access to it were restricted only to very intelligent and motivated people who made very good apps then?
but that's not the world we live in - everything has to be considered in that context and not in the abstract, in order to make any sense. Consider musical instruments - you can technically make great music with literally anything. But if, say, 80% of what people are doing with a given instrument ends up sucking, the problem lies more in the instrument than in the people, even if a very talented (and dedicated) 20% is able to make symphonies with it.
Just opened it up and it's... ok. I have 4 files open (all from the same project):
- VSCode itself: 453 MB
- cpptools (the language server under the hood): 573 MB
In the meantime, I've been doing most of my work all day in Emacs, and it has bloated up to 126 MB :).
Edit: I will admit that the cpptools stuff is very nice. I do most of my C++ work in Emacs, but when I'm dealing with weird template type stuff I'll switch over to VSCode for the nice affordances it offers.
Edit 2: Just tried using VS Code to debug something that I can't quite grok.
IntelliSense process crash detected.
IntelliSense process crash detected.
IntelliSense process crash detected.
I can't speak to the LSP, other than that that sounds about right to me; maybe you have a lot of free RAM, or some extensions installed? I would expect the memory usage to be lower than that otherwise.
You can't really compare Emacs with VSCode here; among other things, you can basically get VSCode to run in the browser without many problems, etc. It's like saying that one can edit text with nano, which is 150 KB: sure, but that's only part of what VSCode provides you.
Heh the embarrassing thing for Teams in this case is that I’m signed into one team that doesn’t really use it for anything but conference calls, while I’m also signed into 8 or 9 heavily used Slack groups.
> Electron does not provide that much more desired features than apps from 25 years ago.
That's true, but advanced GUI features aren't Electron's selling point. It's used because it offers easy portability and the ability to leverage web-dev skills.
SDL is intended for gaming and similar applications, it isn't intended for general purpose GUI development. That's what toolkits like Qt and GTK are for. These toolkits do compete with Electron.
With the disclaimer that I don't know a lot about this: an Electron-style web codebase can be reused across both desktop and mobile targets. I don't think Qt's mobile support is as good, but I might be mistaken.
Accessibility would be my immediate thought, although I'm not massively experienced in terms of GUI programming, so I could be off the mark here.
With something like SDL2, I would have to interface with each operating system's accessibility API directly, in such a way that's not easily portable, unless I bring in a separate library. Even something like GTK has this problem on platforms other than Linux. With Electron, I know that my program will be accessible wherever Chromium's rendering of HTML is accessible, which is a lot more places than anything I'm going to be able to bodge together.
You can make a very decent Electron app with roughly that amount of RAM too (roughly = ~2x or less) if it's written well, and for that you get pretty much one codebase you can use everywhere, which on its own is _massive_.
I've noticed Signal Desktop only uses 100 MB. That is quite good for an Electron app. A simple server-side Node process will use at minimum ~20 MB, which obviously does not include a browser.
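That ~20 MB baseline claim is easy to check on your own machine; here's a minimal sketch (nothing from the thread, just a bare Node process reporting its own resident set size):

```javascript
// Print the resident set size (RSS) of a bare Node process, in MB.
// Run with: node baseline.js  (filename is arbitrary)
const rssMb = process.memoryUsage().rss / (1024 * 1024);
console.log(`baseline RSS: ${rssMb.toFixed(1)} MB`);
```

The exact figure varies with Node version and platform, but it gives you a floor to compare any Electron process against.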
It's open source, I've been meaning to poke around and see what they are doing differently.
Signal desktop runs as multiple processes. Did you add up all their memory consumption? On my Debian laptop, Signal processes consume about 700 MB of RSS in total, immediately after starting.
There's no secret really: just don't import junk dependencies like most people do, and spend some time inspecting your memory usage every now and then. With very little effort you'll probably cut your memory usage significantly by doing that. With a lot more effort, you can often even make something that's faster and uses less memory than a supposedly native app (if its developers spent less time optimizing than you did).
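The "inspect every now and then" habit can be as simple as snapshotting `process.memoryUsage()` before and after a suspect operation and logging the delta. A minimal sketch, where a large array allocation stands in for, say, a heavy `require()` you're auditing:

```javascript
// Snapshot memory before and after a suspect operation, log the delta.
function snapshotMb() {
  const m = process.memoryUsage();
  return { rss: m.rss / 1048576, heap: m.heapUsed / 1048576 };
}

const before = snapshotMb();
// Stand-in for the operation under audit: ~8 MB of heap.
const big = new Array(1_000_000).fill(Math.random());
const after = snapshotMb();

console.log(`rss  delta: ${(after.rss - before.rss).toFixed(1)} MB`);
console.log(`heap delta: ${(after.heap - before.heap).toFixed(1)} MB`);
console.log(big.length); // keep the array reachable so it isn't collected early
```

Wire a variant of this into a periodic timer or a debug endpoint and regressions tend to show up long before users complain.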
I’m less concerned about the footprint and more about security.
It can be assumed that anything running on the desktop has or will have vulnerabilities. The rise of web applications has been partly due to the browser's strong sandboxing.
I look forward to this project doing well, but it’s not the first time I’ve seen an electron competitor on HN promoting it being Node-based. Node isn’t sandboxed by default.
Another commenter brought up the idea of porting it to Deno. I'm not sure how compatible the two are, but it provides a hopeful future direction to facilitate sandboxing.