> Truth be told, I’m playing the long game here. All I’ve ever wanted from life is a genuinely great SVG vector illustration of a pelican riding a bicycle. My dastardly multi-year plan is to trick multiple AI labs into investing vast resources to cheat at my benchmark until I get one.
If you want an Electron app that doesn't lag terribly, you'll end up rewriting the UI layer from scratch anyway. VS Code already renders the terminal on the GPU, and a GPU-rendered editor area is available as an experimental feature. Soon there will be no web UI left at all.
> If you want an Electron app that doesn't lag terribly
My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.
I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:
- Visual Studio Code: 589.4 MB
- Visual Studio 2022: 732.6 MB
Update:
I see a lot of love for JetBrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!
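For what it's worth, here's roughly how a comparison like this could be scripted rather than read off Task Manager. This is a minimal sketch assuming Windows and the third-party psutil package; the process-name fragments are guesses and may need adjusting for your install.

```python
# Total the resident memory of every process whose name matches an editor.
# Electron apps and JVM IDEs spawn several processes, so a fair comparison
# has to sum across all of them.
import psutil

def total_rss_mb(name_fragment: str) -> float:
    """Sum resident set size (MB) across processes whose name contains name_fragment."""
    total = 0
    for proc in psutil.process_iter(attrs=["name", "memory_info"]):
        name = proc.info.get("name") or ""
        mem = proc.info.get("memory_info")
        if mem and name_fragment.lower() in name.lower():
            total += mem.rss
    return total / 1024 ** 2

# Process-name fragments are assumptions; adjust for your machine.
for fragment in ("Code", "devenv", "studio64"):
    print(f"{fragment}: {total_rss_mb(fragment):.1f} MB")
```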
I easily notice lag in VS Code even without plugins, especially when using it right after Zed. Not gonna lie, they made it astonishingly fast for an Electron app, but there are physical limits to what can be done in a web stack with garbage-collected JS.
That easily takes the prize for worst-designed benchmark, in my opinion.
Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than to text editors.
- notepad++: 56.4 MB (the window went gray and unresponsive for 10 seconds when opening the explorer)
- notepad.exe: 54.3 MB
- emacs: 15.2 MB
- vim: 5.5 MB
I would argue that Notepad++ is not really comparable to VS Code, and that VS Code is closer to an IDE, especially given the context of this thread. TUIs don't offer a similar GUI app experience, but vim serves as a nice baseline.
I think that when people dump on Electron, they are picturing an alternative implementation like Win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when those are suggested.
It is obviously possible to beat a browser wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management are often even worse.
I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings I change is to increase the max memory footprint to 8 GB of RAM.
> RAM is so incredibly cheap compared to 5/10/15/20 years ago
Compared to 20 years ago, that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes, RAM actually costs more today than it did 10 years ago. Even if we ignore spikes and buy when the memory price cycle is low, DDR3 in 2012 was not much more expensive than DDR5 has been for the last two years.
> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings I change is to increase the max memory footprint to 8 GB of RAM.
I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, sometimes the DB in DataGrip), it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.
I had that issue with 32 GB of RAM on a work laptop, partly because the services themselves each took between 512 MB and 2 GB of memory to run (thanks to Java and Spring Boot).
Anyone saying that the Java-based JetBrains IDEs are less lightweight than Electron-based VS Code is living in an alternate universe that can't be reached by rational means.
> GPU acceleration driven by the WebGL renderer is enabled in the terminal by default. This helps the terminal work faster and display at a high FPS by significantly reducing the time the CPU spends rendering each frame
Wow, it's true: the terminal is a <canvas>, while the editor is DOM elements (for now). I'm impressed that I use both every day and have never noticed any difference.
How much would it cost to just hire a team that manually reviews every commit to every popular npm package and its dependencies? Could that be a viable business? It seems like it would cost much less than the total damage done by supply chain attacks.
There are companies that continuously rebuild popular libraries from source in an isolated environment and serve them to you, which would eliminate certain types of distribution attacks (Chainguard, for example).
I don't know if anyone's doing it at the individual commit level as a business.
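To put a very rough number on the cost question above, here's a back-of-the-envelope sketch; every input is an illustrative assumption, not measured data.

```python
# Back-of-the-envelope cost of manually reviewing every commit to the
# popular npm packages and their dependencies. All inputs are assumptions.
packages         = 5_000   # "popular" packages plus key transitive dependencies
commits_per_week = 3       # average commits per package per week
review_hours     = 0.5     # hours of careful review per commit
hourly_rate_usd  = 100     # loaded cost of a qualified reviewer

weekly_hours = packages * commits_per_week * review_hours
annual_cost  = weekly_hours * hourly_rate_usd * 52

print(f"~{weekly_hours:,.0f} review hours/week "
      f"(~{weekly_hours / 40:,.0f} full-time reviewers), "
      f"~${annual_cost:,.0f}/year")
```

Whether that beats the total damage from supply chain attacks depends entirely on which numbers you plug in.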
I want to live in a condo rather than a detached home. A private home is too much of a hassle to maintain properly, and it's also less likely to have many different shops and restaurants within a 5-minute walk.
Because it lowers the threshold for a total informational-compromise attack from "exfiltrate 34 PB of data from secure government infrastructure" down to "exfiltrate 100 KB of key material". You can get that out over a few days just by pulsing any LED visible from outside an air-gapped facility.
There are all sorts of crazy ways of getting data out of even air-gapped machines, provided you are willing to accept extremely low data rates to overcome attenuation. Even with a million-to-one signal-to-noise ratio, you can get significant amounts of key data out in a few weeks.
Jiggling disk heads, modulating fan rates, increasing and decreasing power draw... all are potential information leaks.
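To put rough numbers on that, here's a quick sketch of how long 100 KB of key material takes to leak at covert-channel speeds; the bit rates are illustrative assumptions.

```python
# Time to leak 100 KB of key material over a very slow covert channel.
# The bit rates below are illustrative assumptions.
payload_bits = 100 * 1024 * 8  # about 819,200 bits

for rate_bps in (0.1, 1, 10, 100):
    seconds = payload_bits / rate_bps
    print(f"{rate_bps:>5} bit/s -> {seconds / 86400:6.2f} days")
```

Even at 1 bit per second that's under ten days, which is why "a few days" for a pulsing LED is plausible, and "a few weeks" for noisier channels also checks out.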
As of today, there's no way to prove the security of any available cryptosystem. Let me say that differently: for all we know, ALL currently available cryptosystems can be easily cracked by some unpublished technique. The only sort-of exception requires quantum communication, which is nowhere near practical at the scale required. The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.
On the other hand, some popular cryptosystems that were more common in the past have been significantly weakened over the years by mathematical advances. Those were also based on math problems that were believed to be "hard." (They're still very hard actually, but less so than we thought.)
What I'm getting at is that if you have some extremely sensitive data that could still be valuable to an adversary after decades (the type of stuff the government of a developed nation might be holding), you probably shouldn't let it get into the hands of an adversarial nation-state, even encrypted.
> The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.
Adding to this...
Most crypto I'm aware of implicitly or explicitly assumes P != NP. That's the right practical assumption, but it's still a major open math problem.
If P = NP, then essentially all crypto can be broken with classical (i.e. non-quantum) computers.
I'm not saying that's a practical threat. But it is a "known unknown" that you should assign a probability to in your risk calculus if you're a state thinking about handing over the entirety of your encrypted backups to a potential adversary.
Most of us just want to establish a TLS session or SSH into some machines.
While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.
The current state of encryption is based on math problems many levels harder than the ones in use a few decades ago. Most vulnerabilities have been due to implementation bugs, not actual math bugs. Probably the highest-profile "actual math" bug is the Dual_EC_DRBG weakness, which was (almost certainly) deliberately inserted by the NSA and triggered a wave of distrust in not just NIST but any committee-designed encryption standard. This is why people prefer to trust DJB over NIST.
There are enough qualified eyes on most modern open encryption standards that I'd trust them to be as strong as any other assumption we base huge infrastructure on: tensile strengths of materials, the force of gravity, the resistance and heat output of conductive materials, and so on.
The material risk to South Korea from not having encrypted backups was almost certainly orders of magnitude greater than the risk from having them, no matter where they were stored (as long as they weren't in the same physical location, obviously).
> While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.
No, you can't. Those aren't hard math problems; they're universe-breaking assertions.
This isn't like the problem of flight. These aren't engineering problems. They're not a case of "perhaps in the future we'll figure it out."
Unless our understanding of physics is completely wrong, none of those things is ever going to happen.
According to our understanding of physics, which is based on our understanding of maths, the time taken to brute-force a modern encryption standard, even with quantum computers, is longer than the expected life of the universe. The likelihood of "finding a shortcut" is in the same ballpark as "finding a shortcut" to tap into ZPE ("vacuum energy") or to create wormholes. The maths is understood, and no future theoretical advance can change that; it would take completely new maths to break these schemes. We passed the point of "if only computers were a few orders of magnitude faster, it would be feasible" a decade or more ago.
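For context on the brute-force figure, here's the rough arithmetic; the guess rate and key size are illustrative assumptions, and this only covers exhaustive search, not mathematical shortcuts.

```python
# Exhaustive key search arithmetic. The guess rate and key size are
# illustrative assumptions; this says nothing about shortcut attacks.
guesses_per_second = 1e18      # an absurdly optimistic classical cluster
keyspace           = 2 ** 128  # e.g. a 128-bit symmetric key
age_of_universe_s  = 4.35e17   # roughly 13.8 billion years in seconds

expected_seconds = (keyspace / 2) / guesses_per_second  # expect success halfway through
print(expected_seconds / age_of_universe_s)             # roughly 400 lifetimes of the universe
```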
Sorry, I don't think the claim that "the maths is understood" is true. There is basically no useful proven lower bound on the complexity of breaking popular cryptosystems. The math is absolutely not understood; in fact, it is one of the most poorly understood areas of mathematics. Consider that breaking any classical cryptosystem is in the complexity class NP, since if an oracle gives you the decryption key, you can break it quickly. Well, we can't even prove that NP != P, i.e., that there even exists a problem where having such an oracle gives you a real advantage. Actually, we can't even prove that PSPACE != P, which should be way easier than proving NP != P, if it's true.
A one-time pad (OTP) can be useful, especially for backups. Use a fast random number generator (real, not pseudo-random) and write its output to fill tape A. XOR the contents of tape A with your backup data stream and write the result to tape B. Store tapes A and B in different locations.
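A minimal sketch of that XOR split, assuming file-like "tapes" and with os.urandom standing in for a true hardware RNG:

```python
# Write a fresh keystream to tape A and (backup XOR keystream) to tape B.
# os.urandom stands in for a real hardware RNG; the paths are placeholders.
import os

CHUNK = 1024 * 1024  # process 1 MiB at a time

def otp_split(backup_path: str, tape_a_path: str, tape_b_path: str) -> None:
    with open(backup_path, "rb") as src, \
         open(tape_a_path, "wb") as tape_a, \
         open(tape_b_path, "wb") as tape_b:
        while chunk := src.read(CHUNK):
            key = os.urandom(len(chunk))  # fresh randomness, never reused
            tape_a.write(key)
            tape_b.write(bytes(c ^ k for c, k in zip(chunk, key)))
```

Restoring is the same operation in reverse: read tape A and tape B together and XOR them chunk by chunk to recover the backup stream.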
But then you have only one copy of the keystream, and that is not safe. You need at least two places storing at least two copies of the keystream, and you cannot store it in an unfriendly cloud (this thread started from backing up sensitive government data into a cloud owned by another country, possibly an adversarial one).
If you have two physically separate places you could trust with the keystream, you could just use them to back up the non-encrypted (or "traditionally" encrypted) data itself, without any OTP.
You may want some redundancy, because needing both tapes increases the risk to your backup. You could just back up more often, or use four locations so you have redundant keystreams and redundant backup streams. In general, storing the keystream has the same requirements as storing the backup itself or the traditional encryption keys for a backup. But your backup already is a form of redundancy, and you will usually make multiple backups at intervals, so it really isn't that bad.
By the way, you really, really need a fresh keystream for each and every backup, so you will have as many keystream tapes as you have backup tapes. Reusing the OTP keystream enables a lot of attacks: e.g., with a simple chosen plaintext, an attacker can recover the keystream from one backup stream and then decrypt other backup streams with it. XORing similar backup streams also gives the attacker an idea of which bits might have changed (see the sketch below).
And there is a difference from storing things unencrypted in two locations: if an attacker, like some evil maid, steals a tape at one location, you just immediately destroy its corresponding tape at the other location. That way, the stolen tape will forever be useless to the attacker. Only an attacker who can steal a pair of corresponding tapes from both locations before the theft is noticed could get at the plaintext.
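A tiny illustration of the keystream-reuse point above: XORing two ciphertexts made with the same keystream cancels the key entirely, leaving only the XOR of the plaintexts (the values here are obviously just toys).

```python
# Why keystream reuse breaks an OTP: the key drops out of c1 XOR c2,
# exposing exactly where the two backups differ.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"backup generation one ..."
p2 = b"backup generation two ..."
key = os.urandom(len(p1))            # the same keystream used twice (the mistake)

c1, c2 = xor(p1, key), xor(p2, key)
assert xor(c1, c2) == xor(p1, p2)    # the key has cancelled out completely
```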
I think you assume that encryption keys are held by people the way a house key sits in a pocket. That's not the case for organizations that are security-obsessed. They put their keys in HSMs. They practice defense in depth. They build least-privilege access controls.