
That blog post does speak to me, but 20MB for a program with a truly functional, modern UI doesn't outrage me that much.

IMO there's a tradeoff: we could be writing all our programs in C, still. But it would be enormously difficult and there'd be way more bugs. On the other end of the spectrum we can be lazy, use web tech everywhere and never optimise our ballooning JS codebases. This feels like it's at least somewhere in the middle.

OT but a bone to pick with that article:

> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT.

That isn't what my text editor is doing, though. It's doing autocomplete suggestions, linting code as I type... all sorts of things we never had a couple of decades ago and are huge productivity boosters. Sometimes I feel like people forget that.



However, most of that should be done in the background; it shouldn't affect your actual text input speed. Auto-format is more along the lines of something that could legitimately block, but we're not dealing with layout problems on the scale of LaTeX, where the delay would actually be significant.


Even putting that aside, just showing text on a screen is much slower on modern PCs than on older ones, since you have a far more complicated graphics stack and added latency at several steps.

I remember an article a couple of years ago where someone rigged up a camera to measure key press to screen update on different machines and the results were eye opening.

edit: found it http://danluu.com/input-lag/. The Apple ][ ties for first place with an iPad Pro.


Related, my favourite thread to come out of the queer tech circles:

"Almost everything on computers is perceptually slower than it was in 1983" https://threadreaderapp.com/thread/927593460642615296.html


Google Maps is a truly horrendous interface. It's pretty, it's well built, but it's fundamentally a horrible UX.

I find myself frequently bamboozled until I stop and try and determine which mode I'm in. Navigation behaves differently to browsing, which behaves differently to searching, which behaves differently to viewing an individual result. I'll be thrown from one mode to another and never feel in control of the app.


I hate it with a passion. Clicking a photo of a café in the main view is horrendous, because it does not actually open the photo, nor the overview on the café screen. You have to be on the photos screen for the photo to open up full screen.

Tapping on some place while in any mode other than the top one.

The back button experience.

And the bottom drawer, no idea when I should pull it up or down, or if I am currently in it.


One thing that's illuminating is to go to chrome://settings/content/all and sort by "data stored" to see how much local storage websites use. Stuff like vice.com needing 100 MB of space on your hard drive for who knows what purpose.
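If you'd rather measure from the filesystem, a small script can sum the size of a storage directory. This is just a sketch in Python; the Chrome profile path in the commented example is an assumption and varies by OS and install.

```python
import os

def dir_size_mb(path):
    """Sum the size of every file under `path`, in megabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total / (1024 * 1024)

# Hypothetical example (adjust the path for your OS/profile):
# print(dir_size_mb(os.path.expanduser(
#     "~/.config/google-chrome/Default/Local Storage")))
```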


Wow, thanks for the tip, it never occurred to me to do that.

There were websites I had never heard of using hundreds of megs, and acehardware.com was doing the same for some reason.

Also the GitHub "community" forums and Travis "community" forums (I don't use Travis anymore) were using hundreds of megs. Are some websites just caching the entirety of every page you look at in local storage? How rude.


> The Apple ][ ties for first place with an iPad Pro.

* When used with an Apple Pencil (30ms). When used with touch it drops to 70ms.

Also, it's worth noting that there's a very limited list of devices tested there and it heavily skews Apple.


Are you referring to this submission? https://news.ycombinator.com/item?id=23369999 Outstanding article. Total shame it only got 100 upvotes.


Another fascinating look into text on computer screens is this article: https://gankra.github.io/blah/text-hates-you/. Just goes to show that text rendering is actually absurdly complex, unless you drastically restrict the problem space.


This sometimes makes me think we're still in the skeuomorphism phase of text. It gets more and more complex to render realistic-looking UIs, with proper illumination and textured faux leather, until you just admit you're rendering a UI on a screen and go back to colored rectangles. We're still trying to render ideas and words that resemble handwriting and printing-press characters, until we embrace screens and render Arial, or even monospaced fonts, which are perfectly readable (I do that all day long in my code editor).


And then comes Unicode, and all the simplicity is gone again, as the rest of the world uses more than 26 letters.


> until we embrace screens and render arial and images or even monospaced fonts that are perfectly readable

This is part of what I meant when I said "drastically restrict the problem space".



I’m pretty sure that none of these latency benchmarks are showing that text input speed is affected. You can definitely input more than 1 character every 40ms in modern “slow” text editors.


The question isn't about throughput but latency.


I wrote 3D visualization apps in 1998 using FLTK and OpenGL and the (statically linked) binary was less than 20 MB. I think my desktop had all of 64MB of RAM. It was snappier than a modern TODO app on Electron and far, far easier to write.

What on earth have we done to ourselves?


Among other things we've done to ourselves, we've got higher-resolution displays, and we have GUI apps that can scale to different resolutions seamlessly. We have support for high-DPI displays. We have fonts with sub-pixel rendering for sharper, easier-to-read text. Speaking of font rendering, we have Unicode and internationalization support, so that people who read and write Arabic, Chinese, Japanese and other languages that don't use the Latin alphabet can use their native language in file names, dialog boxes and anywhere else they might want to. We have better support for screen readers for the blind. For people who aren't fully blind but have vision problems, we have the ability to make text and UI features larger dynamically. We have better multitasking support, including process isolation to keep a badly-behaved application from crashing the entire computer. We have better security at the OS level to prevent malicious applications from taking over the whole machine.

That's a big part of what we've done to ourselves. And this makes computers better for a whole lot of people.


Yeah, no.

First, all those things sans high DPI functionality existed in 1998.

Second, most Electron apps don't benefit from these theoretical advancements.

Preemptive multitasking existed in 1998 and worked every bit as well as it does today. Even in MS Windows.

Security, nah. We have more attack vectors than at any point in the past. And just more crapola caked on to "protect" against those.


How much time and expertise did it take? Did it automatically work on all OSes? Could a person without a tech background slap something like this together in a weekend? Did it have accessibility built in? Could you reuse it on the web?


Yes, in 1998 there were full-fledged IDEs which allowed for GUI development: Delphi, Visual C++, Visual Basic, PowerBuilder, NeXT Objects, and so on. I would actually say that it was easier to develop apps in 1998 than it is now.


Easier to write? I’m curious are there any sample apps like this that I can read?


Sure. Check out the tutorial that has remained pretty much the same since I used it to learn FLTK + GL 23 years ago:

https://www.fltk.org/doc-1.3/fluid.html

You pretty much draw the UI in fluid and fill out the callbacks. Yeah it's C++ but it's not particularly esoteric C++.


> IMO there's a tradeoff: we could be writing all our programs in C, still.

No need for that: you could use Zig or Rust. Both still need better GUI frameworks, but it's certainly more efficient to write with them.


As mentioned in the other comment, emacs does those things, and so does Vim (with plugins of course).

I moved from sublime to atom to VS code, but eventually settled on Vim because I was able to get the same features (that I used) while getting almost instant response. A feeling that has completely changed how much I enjoy writing any sort of text.


I tried to move from VS Code to NeoVim with NERDTree and some other plugins to make it more IDE-like.

But eventually wasn't able to get the same code completion, and searching was also a bit more painful.

How do you go about that? Would you mind sharing your setup? :)


Hi, I use Neovim with basically the same features as VSCode. I made a video series on how to configure it: https://youtu.be/CcgO_CV3iDo?list=PLu-ydI-PCl0OEG0ZEqLRRuCrM...


Hey, thanks for this. I'm a long-time Vim user, but I've never gotten around to adding some slick IDE-like features (I've made half an attempt to get code completion working, but often lose interest if it doesn't work the first time).

The intro looks great; I will definitely check this out.


Wow, looks super promising. Definitely worth a try.

Thanks for sharing!


Write code in vim with no/few plugins. Don't worry about getting the variable names right. Then move to your IDE to get it to compile.

It's conceptually similar to sketching out the design of your code with a pen and paper or whiteboard. First write it quickly, then make it correct.


That's what you do? Why?


How is writing something in vim faster? You can have the same bindings in many IDEs.


This thread has been specifically about responsiveness. Latency, not throughput. Also, the bit about no plugins was a little white lie. I really meant "no plugins for IDE-like functionality (language server, etc)". While many IDEs offer basic vim keybindings, I don't know any that would let me import my .vimrc wholesale and work exactly the same. I'd love an IDE that embeds neovim as the text editor.

Anyway, if you don't care about all that, you can get a similar effect by turning off intellisense (or equivalent) in your IDE while you write, then turn it back on at the end to get what you just wrote to compile. I do this sometimes in Android Studio.


There is an IDE that embeds neovim as a text editor, kind of.

The VSCode Neovim extension makes Neovim run as its backend, while giving you all the IntelliSense etc. of VSCode. I can't tell you exactly how it affects responsiveness as I only toy around with it, but it does feel noticeably better in some aspects... yet maybe occasionally glitchy?

Anyway it’s pretty interesting, especially if you’re already using neovim anyway.


The VsVim plugin for Visual Studio makes an attempt at supporting everything in the .vimrc (or _vimrc) file. Compatibility is not 100%, so certain things just fail, but it's a lot more than just basic key bindings.


Stop trying to use vim like your IDE. Vim doesn't have shortcuts, it has a whole language of commands that can freely be combined.


The Vim plugin CoC uses the same engine for code completion as VS Code. It's pretty trivial to set up.

For searching, use SilverSearcher or fzf.


Yeah. LSP has changed a lot for me.

For C/C++, you could use Qt Creator or KDevelop too, which I think are pretty good in terms of latency.


> That isn't what my text editor is doing, though. It's doing autocomplete suggestions, linting code as I type... all sorts of things

BEFORE the typed character shows up? Are you sure?

16ms should be an absolute upper bound for a character to show up. Everything else comes after.


Before the second character is typed...


No. These things should happen concurrently, probably on different threads. If I quickly type "foo", I don't need to run autocomplete and code analysis between each character. That would be horrible.
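A minimal sketch of that split, in Python for illustration: keystrokes update the visible buffer immediately on the calling thread, while a worker thread computes suggestions and skips over stale prefixes. The word list and the `autocomplete` function are made up for the example.

```python
import queue
import threading

def autocomplete(prefix):
    # Stand-in for an expensive analysis pass (hypothetical word list)
    words = ["food", "foot", "forward"]
    return [w for w in words if w.startswith(prefix)]

class Editor:
    def __init__(self):
        self.buffer = []        # what the user sees, updated instantly
        self.suggestions = []
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            prefix = self._jobs.get()
            stop = prefix is None
            # Drain to the latest prefix: intermediate keystrokes are skipped
            while not self._jobs.empty():
                nxt = self._jobs.get_nowait()
                if nxt is None:
                    stop = True
                else:
                    prefix = nxt
            if prefix is not None:
                self.suggestions = autocomplete(prefix)
            if stop:
                break

    def keypress(self, ch):
        self.buffer.append(ch)                 # render immediately, never blocks
        self._jobs.put("".join(self.buffer))   # analysis happens elsewhere

    def close(self):
        self._jobs.put(None)
        self._worker.join()

ed = Editor()
for ch in "foo":
    ed.keypress(ch)
ed.close()
print("".join(ed.buffer))   # foo
print(ed.suggestions)       # suggestions for the final prefix "foo"
```

The point is that `keypress` does nothing but append and enqueue, so the visible buffer never waits on analysis; the worker only ever runs autocomplete on the most recent prefix.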


Emacs does those things too, these days. It's still snappier than anything else I've tried recently.


Interesting how what was seen as slow and bloated back then has become the opposite today. Eight Megabytes And Constantly Swapping; yep, back then, 8MB was unthinkably large for a text editor...

Another example is the Enlightenment window manager. It was considered a little heavy, but good looking. But because there was a large hiatus in development, it got "stuck in the past" and now, it is one of the lightest there is.


cough vim cough

I'm just kidding. Don't want to start a fight here.


Well, with vim, it probably depends more on your terminal and tmux, while Emacs renders its own graphical frames.

But those wars are long over anyway, these days. One might just as well fight over whether the monolith on Earth's moon is better than the one on Europa, or vice versa.


>all sorts of things we never had a couple of decades ago

We had them.


I might sound like those weird language evangelists, but...

> we could be writing all our programs in C, still.

You don't need to use C; there are other languages. For example, a hello world in Free Pascal[0] (a natively compiled language with no runtime or other dependencies, which supports object-oriented programming and has RTTI rich enough to implement automatic object serialization, semi-automatic memory management, strings that know their encoding, etc.) is just 32KB.

Some time ago I wrote Fowl[1], a mostly complete recreation of the OWL toolkit that came with Turbo Pascal for Windows; its demo program is around 80KB.

Of course for a more realistic (and MUCH easier to use and develop with) approach, you'd need something like Lazarus[2]. A minimal application in Lazarus is 2.18MB. This might sound too big... and TBH it is, but the size doesn't grow too quickly from there. For example, a profiler I wrote recently for Free Pascal applications is... 2.16MB (yes, smaller; why? Because I replaced the stupidly huge default icon with a smaller one :-P, and without the default icon a minimal application is 2.05MB, so the profiler added around 100KB of additional "stuff").

> It's doing autocomplete suggestions, linting code as I type... all sorts of things we never had a couple of decades ago and are huge productivity boosters

FWIW we had those: Visual Basic (or even QBasic) would format your code as you typed it, Visual Basic 6 and Visual C++ 6 would provide auto-completion (VB6 even for dynamic stuff), etc. The only issue with C++ was that sometimes it wouldn't work around complex macros.

But modern editors do a bit more; still, that's no excuse for being that sluggish. Lazarus does pretty much everything you'd expect from an IDE, with smart code completion (e.g. declaring variables automatically, filling in method bodies, etc.) and code suggestions, yet it runs on an original Raspberry Pi.

Now I'm not saying that you should stop using whatever you are using, or that you should code on a Raspberry Pi, or even switch to Free Pascal / Lazarus (which honestly is far from free of issues). But I think you're overestimating what tools do nowadays, and many people are so used to running slow and bloated software that they take it for granted that things should be like that and cannot even imagine things being better.

[0] https://www.freepascal.org/

[1] http://runtimeterror.com/tech/fowl/

[2] https://www.lazarus-ide.org/


Visual Assist (from Whole Tomato) wouldn't have existed if Visual C++ did what you said. I use Rider (mostly) and I don't even know how I'd program without all the features it adds: auto-import, code cleanup, code optimizations, memory allocation and boxing highlights, assembly decompilation, Unity engine integration. I remember the days of using Visual C++ and banging away trying to get Qt to not look ugly. I don't miss anything about the development process from 15 years ago.


Visual Assist improves on what was already there. I never claimed that the functionality was the best it could have been (if anything, I wrote the opposite), only that it existed.

But it is also an interesting thing to mention because in the last two C++ jobs i had where Visual Assist was preinstalled on my machine, i always disabled it because it was slowing down Visual Studio too much and the functionality VS provides is more than enough - VA does provide a bit more, but for me wasn't worth the slowdown.


The last time I used VAT was in 2013, and it wasn't an issue on my i5 with an SSD. I would rather put my money into faster hardware to keep up with the demands of modern tools than live without them.

If you’re using Visual Studio for C++ I’d highly recommend Resharper C++. If you develop for Unreal Engine Rider for Unreal C++ is literally unreal, it makes me not hate writing Unreal C++ code.


I have used C++ since 1993, moved to Visual C++ around version 6.0, and have never used Visual Assist or any of the JetBrains products that slow Visual Studio down to IntelliJ levels of performance.

One of my key learnings with alternative languages is to always use the SDK tools from the platform vendor; everything else comes and goes, while playing catch-up all the time.


I don't understand the reasoning against modern tools on the grounds that they're slow. If you can type out a class in a tenth of the time but your IDE is 40% slower (as a hypothetical impact), that is a net gain in output. In reality it's nowhere near a 40% slowdown to use these features on computers made in the last 5 years. Anecdotally, I am a slowish typist (40 wpm), and because of this writing code was a long process for me. With modern tools I can produce a monstrous amount of code in a short amount of time.


Visual Studio has been pretty modern, especially when compared against traditional UNIX offerings.

Anyone measuring typing speed as a productivity metric is doing it wrong.

Writing code is around 50% of daily activities.

Visual Assist doesn't do anything for me when I have to write documentation, draw architecture diagrams, sit in meetings to decide roadmap items, give demos at customer review meetings, ...

On top of that, none of the OS SDK replacements offer better UI or debugging capabilities across the platform tooling; they just play "catch me if you can" with what I can get on day 0 of each OS SDK release.

JetBrains wants to be Borland, yet they don't sell any of the platforms, or languages.

I guess the Kotlin and Android marriage will help them, as they are trying to make it their "Delphi"; let's see how it plays out if Fuchsia ever happens.

https://blog.jetbrains.com/kotlin/2011/08/why-jetbrains-need...

> The next thing is also fairly straightforward: we expect Kotlin to drive the sales of IntelliJ IDEA.


I don't think JetBrains is going anywhere soon; I've been using their products for almost a decade.

I don’t measure my productivity by how much code I can write, that was just an example.

The way I work designing systems and architecture, I have already built the solution in my head, and the "coding" part is basically just trying to get that info out as fast as possible. I have something similar to eidetic memory, but I am so ADHD that what gets remembered can be random or have pieces missing. I remember all the code I've ever written, seen, or thought about, and tools that let me basically brain-dump this info greatly improve my production, leadership, confidence, and architectural designs.


It is about the feedback-response loop. If I type a character and it doesn't appear (what feels) instantaneously, I start to feel physically sick. I have built up some tolerance, but when I tried Julia with Atom three years ago, I gave up after 15 minutes (Atom had too much latency, if I remember correctly, and Julia as well).


> It's doing autocomplete suggestions, linting code as I type

Even so, shouldn’t it be able to do that in 16ms?


Fermi guess: at 60 wpm, that's one 5-character word per second, and it probably doesn't make sense to give a new prediction more often than once per character. That's 200ms to chew on autocomplete and linting and fancy animations. Meanwhile, please render the specific character out of your lookup table in 16ms, thank you.

Do modern graphics drivers for X or Wayland hand off font rendering to the GPU? They probably should -- it's maybe 100 or so small textures per font-selection, 150KB or less prerendered with 3-bit alpha, and maybe 100 loaded up into graphics RAM at a time -- 10MB is nothing, really.
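The back-of-the-envelope numbers above can be spelled out. The glyph dimensions here are an assumption chosen to illustrate how a ~150KB atlas comes about:

```python
# Keystroke budget at a steady 60 wpm
wpm = 60
chars_per_word = 5                      # typing-test convention
ms_per_char = 60_000 / (wpm * chars_per_word)
print(ms_per_char)                      # 200.0 ms between characters

# Size of a prerendered glyph atlas: 100 glyphs at an assumed
# 64x64 px each, with 3 bits of alpha per pixel
glyphs = 100
bits_per_glyph = 64 * 64 * 3
atlas_kb = glyphs * bits_per_glyph / 8 / 1024
print(atlas_kb)                         # 150.0 KB per font selection
```

So even a hundred such atlases resident in graphics RAM would total around 15MB, in the same ballpark as the "10MB is nothing" figure.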


They certainly could. Games can do 16ms frames while calculating full physics simulations.

It's just that modern text editors are programmed by people who prefer not to write them that way, mainly because it's quite hard, and the modern OS doesn't usually fit well into this game-style rendering framework while multitasking. It also takes more effort.

It's much easier to rely on a UI framework that adds overhead. The expectation is that the user probably won't care, and would prefer that the software be more feature-rich.


I bet some of those UIs coded in Delphi versions targeting Windows 95 would be way less than 20MB.


Many years ago I wrote a simple Delphi application that used the standard rich text editor. The size was less than 150kB



