I just refuse to use any software that balloons its filesize. Not because I can't afford the storage, but because there are always alternatives that have similar features and are packed into a fraction (usually less than 1%) of the size. If one of them can do it and the other can't, the other is a bad product that I have no intention of supporting.
We should strive to write better software that is faster, smaller and more resilient.
"Storage is cheap" is a bad mentality. This way of thinking is why software only gets worse with time: let's have a 400mb binary, let's use javascript for everything, who needs optimization - just buy top of the shelf super computer. And it's why terabytes of storage might not be enough soon.
I can empathize with how lazy some developers have gotten about program sizes. I stopped playing CoD because I refused to download their crap 150+ GB games with less content than a lot of other titles that are much smaller.
That said, storage is cheap; that's not a mentality but a simple statement of fact. You think Zed balloons its file size because the developers are lazy. It's not true - it's because users have become lazy. No one wants to spend time downloading the correct libraries to use software anymore. We've seen a rise in binary sizes across most software because of a rise in static linking, which does increase binary size but makes using and testing the actual software much less of a pain. Not to mention the benefits in reduced memory overhead.
VSCode and other editors aren't smaller because their developers are somehow better or more clever. They're using dynamic linking to call into libraries on the OS. That linking is a small overhead, but overhead nonetheless, and all so they can use Electron + JavaScript, the real culprits that made people switch to Neovim and Zed in the first place. 400 MB is a cheap price to pay for a piece of software I use on a daily basis.
I'm not here to convince you to use Zed or any editor for that matter. Use what you want. But you're not going to change this trend by dying on this hill, because unless you're working under actual hardware constraints, dynamic linking makes no sense nowadays. There's no such thing as a silver bullet in software. Everything is a tradeoff, and the resounding answer has been that people are more than happy to trade disk space for lower memory and CPU usage.
For private repos there are Forgejo, Gitea, and GitLab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on GitHub" anymore. But that is a small price to pay for dignity.
Great puzzle. I solved it in Nim[1]. The bit-twiddling layers were a good fit for the language.
The trickiest part was the AES layer. I had to reach for the C FFI and use OpenSSL for the decoding, and it took longer than expected to track down how to do the key unwrapping.
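For anyone curious, the relevant OpenSSL routine is AES_unwrap_key() (RFC 3394 key unwrap). Below is a minimal, untested C sketch with placeholder key material - not the puzzle's actual data or the solution code, just the call the FFI binds to:

    /* Build with: cc unwrap.c -lcrypto
       (the AES_* functions are deprecated in OpenSSL 3.x but still available) */
    #include <openssl/aes.h>
    #include <stdio.h>

    int main(void) {
        unsigned char kek[32] = {0};      /* placeholder 256-bit key-encryption key */
        unsigned char wrapped[40] = {0};  /* placeholder wrapped key: payload + 8-byte integrity block */
        unsigned char unwrapped[32];

        AES_KEY aes;
        if (AES_set_decrypt_key(kek, 256, &aes) != 0)
            return 1;

        /* A NULL IV selects the default RFC 3394 integrity-check value.
           The call returns the unwrapped length (inlen - 8), or 0 on failure. */
        int n = AES_unwrap_key(&aes, NULL, unwrapped, wrapped, sizeof wrapped);
        if (n <= 0) {
            fprintf(stderr, "unwrap failed (integrity check)\n");
            return 1;
        }
        printf("unwrapped %d key bytes\n", n);
        return 0;
    }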
The full solution[2] runs in under 20 ms on a two-decade-old Intel Xeon, which says more about Nim's performance than about my coding skills.
If you have root, it is fairly trivial to run a full arm64 Linux distro in a chroot. That should fix most problems with running Linux software on Android.
There is also Linux Deploy[1], which automates the process of setting up the chroot, SSH, and even a GUI desktop (through the framebuffer).
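Mechanically it boils down to extracting a rootfs somewhere, giving it the host's /dev plus fresh proc/sysfs mounts, and chroot-ing in. A rough, untested C sketch of the equivalent syscalls (the rootfs path is made up; in practice you'd run the same steps as shell commands, which is what Linux Deploy scripts for you):

    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    #define ROOTFS "/data/local/debian"   /* hypothetical path to an extracted arm64 rootfs */

    int main(void) {
        /* Give the guest userland the pseudo-filesystems it expects (needs root). */
        if (mount("/dev",  ROOTFS "/dev",  NULL,    MS_BIND, NULL) != 0) perror("bind /dev");
        if (mount("proc",  ROOTFS "/proc", "proc",  0,       NULL) != 0) perror("mount proc");
        if (mount("sysfs", ROOTFS "/sys",  "sysfs", 0,       NULL) != 0) perror("mount sysfs");

        /* Enter the guest root and hand control to its shell. */
        if (chroot(ROOTFS) != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        execl("/bin/sh", "sh", "-l", (char *)NULL);
        perror("execl");
        return 1;
    }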
I think this name would be confusing.
For one, it is for Linux, not Windows. And it is a subsystem running on Windows. So it should be called Windows Subsystem for Linux, or WSL.
I think this may be a whoosh moment: they're saying the Microsoft version should be called LSW because it's for Windows. It probably sounds more obvious with a more sarcastic tone.
The concept of a "subsystem" in Windows has evolved since the operating system's inception, when Windows NT was designed to support multiple operating-system environments through distinct subsystems. The main ones at first were the Windows (Win32) subsystem, which features case-insensitive filenames and device files visible in every directory; the POSIX subsystem (later the Subsystem for Unix-based Applications, SUA), which supports case-sensitive filenames and centralized device files; and the Native subsystem, for code that talks to the NT API directly.
The /SUBSYSTEM linker switch was used to specify the target subsystem at link time, enabling applications to be built for different environments such as console applications, EFI boot environments, or native system processes.
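To make that concrete: the chosen subsystem is stamped into the PE header at link time, either on the linker command line (e.g. cl main.c /link /SUBSYSTEM:CONSOLE) or embedded through a pragma. A small MSVC-only sketch follows; CONSOLE is just an example, and WINDOWS, NATIVE, POSIX, and the EFI_* values are also accepted:

    /* MSVC-only illustration: the linker records the chosen subsystem
       in the PE header of the resulting executable. */
    #pragma comment(linker, "/SUBSYSTEM:CONSOLE")

    #include <stdio.h>

    int main(void) {
        puts("built for the console subsystem");
        return 0;
    }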
In this nomenclature, WSL follows the original naming conventions (although SUA should have been called WSUA).
Except WSL doesn't actually use any of the NT subsystem machinery in either of its incarnations.
And it doesn't really follow that nomenclature either. Those all follow the pattern "<target of the user code> Subsystem": Windows Subsystem, OS/2 Subsystem, POSIX Subsystem, etc.
That is also a disadvantage: GitHub has a lot more grifters, people submitting fraudulent and malicious PRs, and issue spam. It's in a similar vein to "everybody is on Windows" and Linux not being targeted by malware as often.
If a person really cares about your project and wants to improve it, rather than just boost their own GH stats, creating an account takes no time - or they can always send you patches via email.
Sure, for my private projects I already run my own Gitea and Woodpecker CI (and my own Docker registry, my own Taiga server for project management, my own Baserow server to replace Airtable, etc.), but the moment you say "just get a VPS to run this service that is available for free at $BIGCORP", you lose 90% of the potential users.
Is it really free, though? You get a free service - MS gets everyone's code for free. Only a fool would believe that they don't use private repos for training.
And even if it were free, do you really believe it is sustainable to offer unlimited service for free to everyone? They've created an environment where you're punished for using anything but GitHub. This is not good.
You don't need to convince me, you need to convince the millions of people who prefer the convenience of "Free as in Beer SaaS" over the resilience and self-sufficiency that we get by hosting our own systems.