> On macOS, for example, Zed makes direct use of Metal. We have our own shaders, our own renderer, and we put a lot of effort into understanding macOS APIs to get to 120FPS.
So they are taking the exact opposite approach of Electron (VS Code).
In my mind, if you're someone who types in HN comments in a rage because your text editor (VS Code) eats up 200+ MB ram, you don't get to cry about Zed not being supported on Linux from day one, because you can't have your cake and eat it too - if you want it on your platform, you gotta wait for the shaders to be written.
> In my mind, if you're someone who types in HN comments in a rage because your text editor (VS Code) eats up 200+ MB ram, you don't get to cry about Zed not being supported on Linux from day one, because you can't have your cake and eat it too - if you want it on your platform, you gotta wait for the shaders to be written.
This is certainly one takeaway. On the other hand, this very blog post points out that a community member, Dzmitry Malyshau, did the work to get the program working on the OS he uses. Malyshau does not work for Zed, which is a for-profit company; as far as I can see, he gets nothing out of working on Zed, except that he and other Linux users get to use it. Perhaps, rather than characterizing Linux users as whiny, we could take away the idea that many Linux users are willing to put in quite a lot of work to make things better for each other.
> Perhaps, rather than characterizing Linux users as whiny, we could take away the idea that many Linux users are willing to put in quite a lot of work to make things better for each other.
That's more of an "Open Source" thing than something Linux-specific. A personal case in point was when I got Mellanox adapter support officially added to FreeNAS (now TrueNAS), because that's the adapter brand I had and needed it to work.
Not a super huge effort as the FreeBSD driver worked (so just needed porting), but the integration process and follow up testing/advocacy/etc work wasn't exactly trivial either.
"Perhaps, rather than characterizing Linux users as whiny, we could take away the idea that many Linux users are willing to put in quite a lot of work to make things better for each other."
... and they'll do that even if it means signing away their rights in a CLA to a for-profit company. That's on another level from contributing to the Linux kernel, which is GPL-2.0 and, in practice, not relicensable because of the multitude of copyright holders.
Or… use one of the existing cross-platform toolkits that have been around for decades. Text editors, traditionally, did not require shaders to run or be performant, nor did they require an entire system's worth of RAM.
Cross-platform toolkits, more than most other software components, incur massive tradeoffs. They’ve written one themselves, tailored to their needs, and open sourced it along the way. I guess I don’t see the problem here. If one that’s existed for decades fits your needs better, then use that.
Wgpu seems very very well loved & supported, is one of the most successful comings together of the graphics world in ages. I'd love to hear some actual critique of it, hear what people think are shortcomings, because it feels to an outsider like this is the fantasy land, that we're living in the better place now. https://github.com/gfx-rs/wgpu
What I find interesting is that kvark, the open source contributor who made the Linux port possible, was the main developer on wgpu at Mozilla, yet he decided to build an alternative [1] to wgpu that he used for Zed. I wonder what the rationale for that is.
Dzmitry gave a talk at a rust gamedev meetup on Blade.
This ain't my turf so apologies for inaccuracies here. It appears to be a fairly novel attempt to write a graphics library with a semi-conventional-looking pipeline that, under the hood, ends up eschewing a lot of the Vulkan concepts. Instead of per-object contexts, it uses global contexts to do most of the work. Instead of taking resources and binding them into descriptor sets to use across pipelines (tracking state), Blade kind of recreates resources on the fly, lets them get used, and disposes of them.
Kind of an interesting philosophical contrast: the complex binding model in Vulkan/WebGPU vs a more direct DIY render model?
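To make that contrast concrete, here's a rough sketch. The first half uses real wgpu calls (roughly the 0.19-era API; details drift between versions); the second half is not Blade's actual API, just an illustration of the more direct style being described:

```rust
// Vulkan/WebGPU-style binding: bundle resources into a bind group that matches
// a layout declared up front, then bind the group by index inside a render pass.
fn make_globals_bind_group(
    device: &wgpu::Device,
    layout: &wgpu::BindGroupLayout,
    uniforms: &wgpu::Buffer,
) -> wgpu::BindGroup {
    device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: Some("globals"),
        layout,
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: uniforms.as_entire_binding(),
        }],
    })
}
// ...and later, inside a render pass:
//     pass.set_bind_group(0, &globals, &[]);
//     pass.draw(0..3, 0..1);

// Hypothetical "direct" style in the spirit described above (not Blade's real
// API): no user-visible layouts or descriptor sets; you hand the shader data
// straight to the draw call and the library sorts out binding and lifetimes.
//
//     pass.draw(&pipeline, shader_data! { globals: uniforms }, 0..3);
```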
Zed took this work as an outside contribution. Maybe the fact that someone did the work was good enough. I'm not sure what would make Blade a better match or not vs wgpu-hal.
The question doesn’t make sense. They don’t rerender the buffer every frame. I assume zed isn’t doing that either as it would be horribly inefficient.
I presume what is meant is that it can handle a redraw fast enough to be in the next frame. In which case the answer is: all of them. Drawing text is not the bottleneck for a GUI program, unless you have a god awful browser stack as your rendering engine.
> I assume zed isn’t doing that either as it would be horribly inefficient.
What are you supposed to do instead? Zed uses the GPU. It's not making calls to retained-mode widgets to individually reposition them, nor is it blitting into a buffer using the CPU. It's using the GPU which eats pixels for breakfast. You've been able to rerender the entire screen each frame for over a decade - just look at Windows 7 Aero, which ran on the laptops of 2009 for the exact same reason: it used the GPU!
Rerendering each frame is wasteful because it keeps hardware from reaching deeper power-saving states. This includes the CPU, GPU and even the display, due to technologies such as FreeSync. On modern hardware, even removing the blinking cursor has been found to save quite a bit of power, by eliminating needless screen redraws.
Zed only rerenders the window each time it changes (and also for about a second after the last interaction, for reasons[0]), but every time it rerenders the window it does rerender the entire window and not just the area that changed. That's what I thought GGP was calling horribly inefficient.
Neither Windows Aero nor Zed renders every single frame, 120 times per second. The parent comment is correct that the important thing is to be able to render any given frame in 1/120th of a second, but to leave things alone when nothing is changing.
Zed does do it for about a second following user interaction[0], but I assumed GGP was talking about only redrawing the changed part of the window, not about rendering frames continuously even when nothing is changing at all, which Zed doesn't do (and Windows Aero didn't either, at least to a certain extent).
Yes, it's just that rendering into the texture redraws the entire window, and you never really need to worry about redrawing only the parts of the screen that changed. But I think GGP was actually talking about redrawing the window every frame even if nothing's changed at all, which is indeed inefficient (though not necessarily "horribly").
GGP is me, and yes that’s what I meant. Under no circumstance would zed actually be drawing 120 frames each second, right? That would be 100x more energy usage than would actually be required, and so I think “horribly” is accurate.
> Under no circumstance would zed actually be drawing 120 frames each second, right?
It does this when scrolling for sure. That's a trivial case where 120 FPS is required on a 120Hz display.
It also does this even when nothing on the screen is changing, but only for about 1 second after the last user input. This is explained in a blog post[0].
Now this does cause more power usage, because when Zed does not do this, the display can actually downclock to save power. But downclocking like that increases latency, which is why they prevent it from happening in the middle of user input (but still allow it to happen between each burst of input).
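A toy sketch of that policy, as I understand it from the blog post (definitely not Zed's actual code): stay hot for about a second after input, otherwise stop producing frames entirely.

```rust
use std::time::{Duration, Instant};

// Toy model of "render on change, and keep rendering for ~1s after the last
// user input so the display doesn't downclock mid-interaction". Not Zed's code.
const KEEP_ALIVE: Duration = Duration::from_secs(1);

struct Redrawer {
    dirty: bool,
    last_input: Instant,
}

impl Redrawer {
    fn on_input(&mut self) {
        self.dirty = true;
        self.last_input = Instant::now();
    }

    // Asked once per display refresh by the real event loop.
    fn should_redraw(&self) -> bool {
        self.dirty || self.last_input.elapsed() < KEEP_ALIVE
    }

    fn redraw(&mut self) {
        // Repaint the *entire* window here; no damage rectangles needed,
        // because the GPU repaints everything cheaply.
        self.dirty = false;
    }
}

fn main() {
    let mut r = Redrawer { dirty: true, last_input: Instant::now() };
    assert!(r.should_redraw());  // something changed: render a frame
    r.redraw();
    assert!(r.should_redraw());  // keep rendering for ~1s after input...
    std::thread::sleep(KEEP_ALIVE);
    assert!(!r.should_redraw()); // ...then go idle; the display can downclock
    r.on_input();                // a keystroke arrives
    assert!(r.should_redraw());  // back to full refresh rate
}
```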
FPS in itself is not important, but it is a stand-in for input latency, and if your keystrokes start lagging it feels sluggish. At least historically, Electron-based editors (like Atom) have felt significantly more sluggish than Sublime Text or vim in a decent terminal emulator.
For terminal emulator comparisons, at least, the metrics used are latency and throughput. It wouldn't surprise me if those, plus the time to do operations (load a file, search & replace, etc.), were the comparison metrics for text editors. FPS, though, feels weird.
Well, that is usually referring to some form of "simulated fluid motion", not new characters appearing and disappearing. The only case where that kinda fluid motion would matter is when you have text scrolling by at semi-fast speeds.
I count myself among the people that would consider >60hz necessary.
For me, it's animations, especially if I'm dragging a window or just the mouse. At 60Hz it's nauseating if I'm paying too much attention to the window I'm moving. It goes completely away around 90-100 Hz (at least for me).
I've had the privilege of using a mere 90Hz display and the difference is still incredible. It gives me input feedback much faster, so my brain does not have to do as much buffering/prediction; everything feels a lot more direct. One would think a measly 5 milliseconds wouldn't amount to much, but for input feedback it absolutely does.
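For context, the frame-time arithmetic behind that figure (just math, nothing from the comment):

```rust
// Frame times at common refresh rates: going 60 -> 90 Hz shaves roughly
// 5.6 ms off the wait for the next frame; 60 -> 120 Hz, roughly 8.3 ms.
fn main() {
    for hz in [60.0_f64, 90.0, 120.0] {
        println!("{hz:>5} Hz -> {:5.1} ms per frame", 1000.0 / hz);
    }
}
```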
I do not suffer from nausea or motion sickness of any kind arising from computers or visuals in general, but I can still easily tell the difference between a 60Hz and 90Hz display. A few months ago I had the privilege of checking out the 120Hz displays on the new MacBooks and they're amazing.
I'd still say that dragging a window counts as "simulated fluid motion". Maybe not something you'd immediately think of, but it's still trying to convey a sense of motion. Text just appearing and disappearing isn't something I personally would categorize as the same type of animation.
And I also count myself as someone who considers at least 120Hz necessary (on the primary monitor).
> Honest question. What could you possibly do with a text editor at 120 fps that you can't do at 15 or 30?
More-honest-than-it-should-be answer: sell a product to Apple-ecosystem developers trying desperately to find something to justify the $3k they want to spend on a new MBP.
(Typing this very comment in emacs running out of the Linux VM on a mid-range chromebook attached to a 30 Hz 4k television, btw. Come at me, as it were.)
Not really: 120Hz produces a noticeable improvement over 60Hz, unlike "golden boutique speaker wire", just as 4k produces a noticeable improvement over 1080p.
It's not like everyone is going to be able to tell whether a given display is 60Hz or 120Hz, but all other things being identical, they will probably be able to tell which display is faster after using both.
Higher refresh rates tighten the feedback-response loop, creating a smoother and more direct interface to the computer, which is generally perceived as desirable.
Consider VR, where HMDs often have to refresh at 90Hz or 120Hz in order to reduce motion sickness. This actually isn't that different from operating a computer. The brain tends to quickly get very upset when it can't reconcile your visual field with your felt position in space, but even though most people don't get motion sick from looking at a computer display (some do), the refresh rate certainly affects how it feels to use the display.
Oh come on. Fancy wires do nothing. 120Hz makes motion much smoother. It also reduces latency. Those make a big difference in many video games, or even just moving my mouse around and having it not skip two inches at a time.
Your cynicism over 120Hz should match your cynicism over 4k.
That doesn't make any sense. 4k allows monitors to push past 24 inches, though; on 24 inches or less, 2k is plenty for me, but the generation of 22-inch 1080p monitors was rough on the eyes.
Your mouse cursor aliasing test is sort of the tell here. Normal human beings are very hard put to even detect the difference between a 60 Hz and 120 Hz display, and have to resort, as you do, to trickery and artifacts to measure it. And the use case at hand is text editing!
As far as 4k, not sure I understand? It's not a nonsense retina tablet or whatever, it's a 42" television with 100 DPI pixels I can see with my own eyes (well, when I put my reading glasses on -- presbyopia comes for us all). I bought it because it's cheap and it subtends 60 degrees of pixels small enough to be unresolvable, and sits farther than an arms length from my eyes (presbyopia again).
It's hard to tell the difference with smooth motion.
With rendered frames, the stuttering makes it meaningfully harder to click on fast moving things.
> And the use case at hand is text editting!
People were being dismissive about frame rates in general, so I gave an example that wasn't text editing.
The benefit for text editing is much smaller, but then again, if you're text editing you don't need significant amounts of compute power to do text at 120, so one big criticism disappears. You need that power for games, which actually benefit.
I've used 4k at 30Hz before, but I switched it to 60Hz with chroma subsampling for faster things.
> As far as 4k, not sure I understand?
They're both good but not necessary, and partly situational. But you're choosing to ignore the benefits of one.
A curmudgeon should dismiss both, and most people should want both.
MoltenVK is also a thing. Whatever small translation overhead it incurs is probably not that important for a text editor. And then you get a cross-platform API: not just Linux, but Windows as well. Maybe other, more niche OSes too.
MoltenVK is amazing. When I started working with it, I was expecting a lot of caveats and compromises, but it's so shockingly similar to just using Vulkan that you can easily forget there's a compatibility tool in play.
You can probably squeeze a bit of optimization out of using Metal directly, but I think starting with Vulkan/MoltenVK as a target and adding a Metal branch to the renderer when capacity allows is more than viable (although you might never feel the need).
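For anyone curious what that route looks like from Rust, here's a minimal sketch using wgpu's Vulkan backend (one way to exercise Vulkan-via-MoltenVK, not the only one; roughly the 0.19-era API, and `pollster` is just a helper crate to block on the async adapter request). On macOS this only finds an adapter if a Vulkan loader plus MoltenVK, e.g. from the Vulkan SDK, is installed:

```rust
// Force the Vulkan backend: on macOS it goes through MoltenVK if present,
// on Linux/Windows it's native Vulkan, so the same code path covers all three.
fn main() {
    let instance = wgpu::Instance::new(wgpu::InstanceDescriptor {
        backends: wgpu::Backends::VULKAN,
        ..Default::default()
    });
    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no Vulkan adapter found (is MoltenVK / the Vulkan loader installed?)");
    println!("{:?}", adapter.get_info());
}
```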
Someone has to pay for it: time, money, waiting for it, whatever.
I love working with technical people who ask why and think about more optimal paths.
On the other hand, I'll listen to the same folks complain about some random app being stinky (I don't disagree) and wonder, "Yeah, but are you going to pay more for that burrito so the company can hire folks to write it natively on every platform?" No, you're not ...
I know our customers at times aren't willing to pay / wait for the optimal path, and their customers aren't, so I get it.
Because Macs are expensive (in absolute terms), Mac users are a self-selected group who are willing to spend money. Linux users have high standards and are not willing to spend money. For a text editor in particular, Linux is going to be the toughest market because it's already pretty saturated.
> In my mind, if you're someone who types in HN comments in a rage because your text editor
WebGPU exists. It works with Metal. Vulkan could have also worked and MoltenVK would have bridged it to Apple. No, this is just like every other project that only works on MacOS: a mentality I really can't comprehend or explain.
Over the last few years I've had two applications that tended to rot after a week or two of running: one is Firefox, which I still use for political reasons, and the other is VS Code, the only Electron thing I've used for more than a few hours. Then there's MICROS~1 Teams, which doesn't play nice with my window manager, tries to force me into identifying myself to join some video chat for a bit, and prefers to hang rather than shut down when asked nicely. Instead I join chats without video support.
This is my problem with Electron applications. I'd be fine with them gobbling up a few GB if they were stable.