Having used windows for several years, linux for several years, and currently macos for several years (because that's what work gave me) - linux is by far the most stable and easiest to manage OS, after you get past the initial learning curve. The main reason is you can actually fix problems. You pretty much know what every single bit of software is there for and how to configure it. You google the problem and someone has released a patch, or instructions on what configuration changes need to be made. Or you can roll back to old versions of software easily.
Also you are a hero because you catch and fix production problems nobody else sees or understands because production is running linux.
If you have a problem on OSX you just get 500 people telling you to reset your pram, and really you just need to wait for the next osx update. You certainly can't rollback an osx update. Nobody knows what's wrong because nobody has access to the code. Even if you did - each update is millions of lines of code. It is crazy town.
Same for windows except there is no PRAM to reset. You just reboot until the next update, I guess.
Yeah, the "community troubleshooting" on MacOS for anything beyond moderately complex is kind of funny, until you're the person trying to get help. Most of the time it involves five people stabbing in the dark with solutions that seem to solve the problem for 20% of the people involved, so its marked as the solution, even though it probably didn't actually solve anything, and those 20% just got lucky because the "solution" required a restart and that's what "fixed it".
I swear, for every problem I've Googled and then subsequently ended up on SO or a GitHub issues thread for, someone has suggested "raise the open file count limit"; it's like the default "try this and then come back" answer.
>linux is by far the most stable and easiest to manage OS, after you get past the initial learning curve. The main reason is you can actually fix problems. You pretty much know what every single bit of software is there for and how to configure it.
So much this. I used windows for a lot longer than I've used linux. Maybe 10 years using linux exclusively now, and 15 years or so with windows before that.
Even now when I fix problems on windows I feel like a wizard tinkering with vaguely mysterious arcane things I'm only mostly sure of the purpose behind.
When I fix things on linux I feel like a mechanic working with a detailed manual and diagram of my system, isolating known parts and fixing the one thing, out of the few I know for sure, that could be the problem.
For me though, it was when windows corrupted the master boot record on my hard drive. I ended up running linux partitioned across 4 16GB USB sticks I had lying around my house until I bought a new hard drive (I tried for a while to fix my hard drive, nothing I did worked. I'm not actually too sure what was wrong with it in the end.). Not only could I do that, but it was easy, and I could still see and access everything on my hard drive. It also ran faster than my windows install had. It was a revelation.
Ever since then I've kind of pictured windows as a massive parasite feeding off your computer, controlling it from within, while you're kinda there on the outside.
> If you have a problem on OSX you just get 500 people telling you to reset your pram, and really you just need to wait for the next osx update. You certainly can't rollback an osx update. Nobody knows what's wrong because nobody has access to the code. Even if you did - each update is millions of lines of code. It is crazy town.
This is definitely true, and even Apple support can't really help you. It seems like they're only trained to troubleshoot the most basic stuff, and won't even do that if you have any "third party software" installed that they can blame.
However, if you try really hard and are lucky, you can sometimes downgrade. I recently had an awful freezing issue with a 2018 MBP that shipped with Sierra and persisted past a logic board replacement. I was eventually able to solve the issue by downgrading to High Sierra, but wasted a huge amount of time blindly troubleshooting and trying to figure out a downgrade procedure that worked.
Apple really needs to invest more in rigorous QA if it wants to continue down the path it seems to want to follow.
Yep, Apple support forums are absolutely dreadful. Way worse than Windows and way, way worse than Linux. You will find virtually nothing more than well-intentioned fumblers reciting some irrelevant plist tweak that maybe appeared to fix their problem.
Say what you will about Microsoft--the MSDN forums and technical articles have some meat on the bone. People who actually understand, and are paid to understand at a deep level, DCOM, the registry, or whatever, they are out there. You've got the MVP program, super built-out partner channels, etc.
With Apple, my distinct impression is that there do exist people who understand MacOS deeply, but they have zero interest in participating in the support communities, due to the way those communities appear to be built and (not) prioritized by Apple Inc. If there's anything even a little bit like the MS partner network, well, I'll be... really surprised. So all you get is duffers and superstitious folk, and the feedback loop intensifies the problem.
>> but they have zero interest in participating in the support
It doesn't help that if you post any kind of technical instruction or information to the Apple site that doesn't involve "Go to System Preferences and uncheck this box", Apple will delete it with the reason "this could damage another user's Mac".
Part of this is no national chains are on MacOS (Walmart, Kroger & ilk are on Suse w/SuperPOS Ace, Costco just switched to Windows using an oddball NCR product), and few megacorps have gotten big on MacOS in a way where they develop IT talent that deeply understands MacOS. Thus, the pool of talent is extremely small, and the ability to self teach (closed source software and all) is really limited.
In my opinion this is great for professionals, but it's also the reason why the year of the Linux desktop (™) will never come. I recently had to configure a Raspberry Pi to drive a display 24/7, and the simple task of keeping the display from blanking was already a bigger deal than it should have been. This is true across a lot of options that are simple checkboxes in Windows/Mac; on Linux you need to install the right tool or desktop environment, which is not really something end users are expected to do.
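(For the record, the kind of incantation this ended up needing, assuming an X11 session with the xset utility available; a rough sketch, not the only way to do it:)

    # disable X11 screen blanking and DPMS power-down (assumes an X11 session with xset)
    xset s off        # turn off the screensaver/blanking timer
    xset s noblank    # tell X not to blank the video device
    xset -dpms        # disable DPMS (Energy Star) power management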
> This is true across a lot of options that are simple checkboxes in Windows/Mac; on Linux you need to install the right tool or desktop environment.
It's a simple checkbox if that checkbox exists. But that's true on Linux as well.
If it doesn't exist, with macOS at best there is an obscure plist file you can edit and at worst it's hard-coded in some binary you don't have the source for. With Windows it will almost always have some registry setting controlling it, but you can't plausibly claim that to be the user-friendly option.
Moreover, "install a different desktop environment" sounds like a big deal until you realize that is just a checkbox in the package manager.
I haven't been a part of the community for 3-4 years, but whenever someone had a question or issue with Ruby on the Mac, several very knowledgeable people were always quick to point out the issue's cause and a fix, workaround, or explanation of why it was that way.
Now, random, "Why did my mac start/stop doing x", I'll give you that. Especially on a site like Macrumors forums or Apple's useless support site.
I know I might come off as a contrarian, but I cannot echo your experience. My experience with Linux and BSD translated quite well to macOS. I've been using all these OSes to varying degrees for over 20 years now (eek, just realized it). I don't find macOS's problems any harder to troubleshoot and resolve than Linux's or BSD's. Just my $0.02.
Recently built a machine and had some setup issues and was quite dumbfounded by the lack of community support on windows. 90% of search results aren't relevant, and the rest are dead ends. Realized it was the result of MS acting as the sole source of truth.
Anecdotally, this is true for me. My Arch machine runs for months with no issues. My XPS running Windows 10 blue screens once every several weeks, and generally has issues after running too long.
To be fair it could be due to the general buggy-ness of the TB16 Thunderbolt dock.
Yeah of course, different people will have different experiences. Mine was never being able to make wifi work, random destruction of the GUI when I plugged in an external monitor, difficulty installing and configuring eastern-language IMEs, sound issues, etc. That was a very instructive period as an undergraduate in informatics, but after a while I got tired of constantly fixing my system instead of getting things done. Still using it as a server OS, where it shines.
I've jumped from Debian 7 to Debian 9 without issue, and the Gnome Software Center makes installing apps a point and click process (reminds me of Click N Run from Linspire...).
There are notable fit and finish issues with other distros, but from what I've seen the Debian package maintainers consistently make decisions to protect the package archive from breakage, whether that be holding back a FreeCiv point release over a buggy UI element, or stripping out non-free parts of Chromium that upstream bundles.
It depends what distribution you're running (and what time we're talking about, ten years ago was a different situation), but if you're on a mainstream long-term release like Ubuntu, Fedora, Suse and so on, it's pretty smooth nowadays, on average at least.
Yes, there's still the 1-2% of roughness around the edges, but as OP pointed out, on linux you can actually fix stuff. It is definitely not an "april fools" worthy statement. Linux is really stable these days.
I've used Linux as my main desktop OS for about 3 years, from around 2010 to around 2013 (so maybe some things changed in the meantime).
The article says that you pay a "VM tax" by running in a Linux VM in Windows or by using WSL, but by using Linux directly you end up "paying" in other areas, for example in terms of less-than-perfect drivers (which leads to problems such as worse battery life, worse graphical performance etc). In an ideal world, laptop and GPU makers would put as much effort developing Linux drivers as they do when they create Windows drivers, but unfortunately this is not the case.
Another point is that, even if you are running Linux on your dev machine, it is often challenging to have a local environment that is as close as possible to your production one; for example, where I work we have Ubuntu 16.04, Ubuntu 18.04 and CentOS (with various combinations of installed packages), so even if I were using a Linux laptop, I would still need containers to get all the different environments right. The only case where you could use no containers at all is if all of your servers ran exactly the same environment.
Finally there is the issue of some pieces of software not being available for Linux; 95% of the time you can find a Linux equivalent that works for you but, in my experience, there were still rare cases where I had to resort to Wine or some Win VM just to run some specific tool that was needed to do my job.
PS My job, at the time I was using Linux as my main desktop OS, and during the couple of years after I switched back to Windows, was about PHP, Python and Java web apps deployed on Linux servers, so my comments above should be taken in this context. Maybe in other fields of software development there are different factors to take into account.
PPS I have also tried MacOS, on the one hand it's great that Mac is UNIX, but on the other hand it's not Linux so, if you want to closely replicate your production Linux environment, you will still need VMs or containers.
As with everything, take "almost no cost" with a grain of salt. There sure are situations where docker can be slow compared to bare metal, even on linux.
If you are running on a Linux host on bare metal, Docker containers ARE bare metal - they use kernel primitives such as cgroups and namespaces, not hardware virtualisation.
You don't have to use virtualization for something to be slow.
Most things run basically the same when using containers, but not everything. Take networking as an example: that usually doesn't work as it would on bare metal and can result in a significant performance loss.
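(One common mitigation, assuming a Linux host and a workload where sharing the host's network stack is acceptable; just a sketch:)

    # skip the default NAT'd bridge and use the host's network namespace directly
    # (Linux hosts only; removes the extra translation layer at the cost of isolation)
    docker run --rm --network host nginx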
Absolutely. If you have a filesystem-heavy operation and are accessing files inside the container in aufs instead of a local mount, you are going to pay a massive docker tax.
Just use overlayfs and your performance issues are history. Works perfectly on Ubuntu 16.04 and I hope nobody needs to use significantly older systems.
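(A quick way to check what your daemon is actually using, as a sketch:)

    # show which storage driver the Docker daemon is using
    docker info --format '{{.Driver}}'
    # if it isn't overlay2, it can be set with "storage-driver": "overlay2"
    # in /etc/docker/daemon.json, followed by a daemon restart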
The significant cost on Windows is due to having to spin up a Linux VM in the background, right? So the cost is probably more due to the VM than Docker itself?
1. is the VM
2. is handling filesystem mounts across operating systems
(2) can be debilitating in dev environments on Mac for example where you want to edit files outside the container but have the container have access to them. Docker have done a lot to improve this over the years but it is still very painful.
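(One of the knobs Docker for Mac exposes for this is relaxing the consistency guarantees on bind mounts; a rough sketch, where "my-app" is just a placeholder image:)

    # ":cached" relaxes host-to-container consistency on the bind mount,
    # which can noticeably speed up read-heavy workloads on macOS
    docker run --rm -v "$(pwd)":/app:cached my-app bin/rails test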
If I may abuse this metaphor a little further, I'd like to point out that running Linux also pays dividends by teaching you, well, Linux. It is the platform that most code ends up running on, after all.
You'll know it is worthwhile when you're able to comfortably debug production problems by hopping on the server and looking at all layers of the stack, or when you make quality of life improvements for your team with a quick shell script.
Yes, this is absolutely true, I should have included this part in my post above. I am considered the "Linux specialist" at work but I never really had to "study Linux", I just learned almost "effortlessly" by just using it day by day.
I picked a Dell XPS for development and it runs linux amazingly well. Only bit that doesn't work out of the box is the fingerprint scanner but there are unofficial drivers for it.
With web development it's so trivial to do everything you need on linux, and almost all of the time it's easier to set up on linux or mac than on windows.
Agreed, using an XPS here too. I remember when I started looking for webdev tools back when I moved to Linux as a daily driver. I was bummed that there weren't any really nice FTP clients. Turns out my file manager was a step ahead of me with ssh:// and various other niceties. It was all built in.
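(The terminal-side equivalent, if you want a remote tree to behave like a local folder; assumes the sshfs package is installed, and the host/path are just examples:)

    # mount a remote directory locally over SSH
    mkdir -p ~/remote-www
    sshfs user@example.com:/var/www ~/remote-www
    # ... and unmount it when done
    fusermount -u ~/remote-www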
Lol, thanks for pointing that out as I was not aware of it. Looks like now FileZilla has support for a master password that's used to encrypt stored passwords.
I give anyone who asks a "hard no" on FileZilla, certainly not least because it came (maybe still does come) packaged with crapware/malware depending on your chosen author-sanctioned download location.
I would never develop on anything but linux, but I definitely know the struggle of drivers and configuration well. It is relatively pain free these days. But I still have to manually find my wifi drivers from a strangers github repo and build from source to install them.
WSL isn't too bad nowadays. I would go as far as saying it's very good. I've been using it for full time development for over a year now.
I've run 100,000+ line Rails apps in WSL (well technically through Docker, which I connect to through WSL) and I never noticed a slowdown that was bad enough to make me think "this sucks". It's always been pretty good. I run all sorts of Rails, Flask, Phoenix and Webpack driven apps and all of them run fast enough where I don't think twice about it.
Personally, I find the WSL set up somewhat close to native Linux in terms of the user experience. I'm not talking about I/O performance, but I mean how it feels to use the OS in general.
For example:
I spend 99% of my time in a WSL terminal using tmux + terminal Vim + ranger. So that takes care of coding and managing files.
Then I use a browser and other graphical apps (image / video editors) that run directly in Windows.
Dexpot sets up virtual screens with key binds that are comparable to how i3wm lets you switch and move windows to another screen.
Keypirinha lets you launch apps with a fuzzy finder (like dmenu but better IMO)
AutoHotkey lets you easily manage global hotkeys (just like i3 does) and more
When you put all of that together, you get a really really good development experience that also doubles for gaming and running programs that don't have a good alternative on Linux (such as Camtasia on Windows).
Then for the icing on the cake, since you're running Ubuntu 18.04 in WSL, you can provision WSL itself with the same exact configuration management scripts you would use to provision a production box. For me specifically I run all of the same Ansible roles against WSL that I do in production. I can set the whole thing up with 1 Ansible command. Plus my dotfiles also happen to work exactly how they do on my native Linux laptop so it's easy to keep things in sync and feeling the same.
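(The provisioning step itself boils down to roughly this; the playbook name is a placeholder for whatever your own setup uses:)

    # run the same playbook/roles against WSL itself over a local connection
    ansible-playbook -i localhost, -c local site.yml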
This all runs on an i5 3.2GHz / 16GB of RAM / SSD / etc. $750 desktop built 5 years ago.
Even if Apple tax didn't exist I would still use this Windows / WSL set up if I weren't in a position where I could run Linux natively.
? What I see when I look at that benchmark is a comparison between linux running in a VM, via WSL and natively.
AKA, VMs are slower than bare metal. News at 10.
Sure there are things windows tends to be slower at, but similarly there are things linux tends to be slower at. For general desktop usage I think you will find two things: windows does quite well, and desktop-level virtualization tends to be slower than server virtualization, because most desktop users will be using a virtio/emulated IO access method, whereas servers tend to punch PCIe adapters straight into the VMs. Either way, the overhead of additional translation layers and VM exits for various things is always going to be worse than just running on bare metal. Whether that works out to barely noticeable for compute-heavy benchmarks that are TLB friendly, 2x for small-packet edge-case IOs, or somewhere in between is completely application dependent.
The news is not that it is slower, it is how much slower it is. I found the number surprising given the prevailing wisdom out there was: sure it's slower, but only about 5-10% slower.
The other news is that mac is slower and it does not do any virtualising.
The other interesting part of the benchmark is the i7-8559U (Coffee Lake, 4C8T, 2.7GHz) macOS vs i7-8750H (Coffee Lake, 6C12T, 2.2GHz) Ubuntu, on a single-threaded benchmark ("We are going to adopt parallel testing for our dev environments this year").
MacOS took 1.6x as long as Ubuntu, despite a 1.2x GHz advantage, for an overall 1.9x slowdown core-per-core on the same microarchitecture.
Would you like 2x performance? Just replace macOS with Ubuntu.
"There is no other operating system out there that competes against us at this time" - Greg Kroah-Hartman, 2018
If you run Linux on vanilla Intel ultrabook or Intel desktop/server hardware, it is not only blazing fast, but all the hardware works out of the box, too.
I wrote about this in the context of Lenovo X1C laptop here:
I also recently built a Linux desktop from ~$950 of commodity parts (including a pair of free GPUs a crypto friend donated to me after his startup died). I use it as a devserver and a GPU rig for playing with CUDA, PyTorch, and TensorFlow.
Aside from the requirement of proprietary Nvidia drivers, this whole box works perfectly in Linux too, is blazing fast, and operates silently and with low power consumption. (The case itself is bulky, but it’s stationary.) I think an equivalent Mac would cost a 2-3x multiple and run less well for developer workloads.
Say what you will about Linux, but if you choose your hardware carefully, it truly does “just work” these days. And you can’t beat having access to scriptable everything and source code everywhere. That said, I keep a Mac Mini around because there is some proprietary stuff you can’t avoid in the Apple ecosystem (e.g. XCode for iOS, Safari Debugger, Keynote/Pages, ...)
I completely understand the concern here. If every change required an 8 minute wait it would be real sad for us.
We use a tool called `rake autospec` at Discourse which is a smart test runner. I personally like the vim integration we built.
The way it works...
1. Run `bin/rake autospec` in a terminal
2. I head to the code I want to change, I change the code
3. The spec runner figures out what the right test to run is and runs it.
4. While it is busy running specs, it can be interrupted at any time by saving a "*.spec" file, in which case the spec runner will run the spec at the cursor.
That's exactly what I thought - a 50% faster test suite run is nice I guess, but they're all still way too slow to be running all the time. I'll run the tests in 1 file or subset of files or rspec describe block or something that's relevant to what I'm doing and finishes in <10s. Save the multi-minute full runs for CI or big changes where you have no idea what will happen.
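With RSpec that usually just means something like this (paths and line numbers here are only illustrative):

    # run a single spec file, or a single example by line number
    bundle exec rspec spec/models/topic_spec.rb
    bundle exec rspec spec/models/topic_spec.rb:42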
In the late 90s I installed Slackware on a 386DX 40MHz with kernel 1.2.8, XFree86, elvis vi clone, etc. I've been using Linux for a long time, on and off as my main desktop. I have various Linux machines around the house, but it's not my main desktop today.
There's too much fiddling around to get it working right as a desktop. Audio has always been a ghetto. Wifi drivers are still binary blobs. Fonts, hidpi, multi monitor support, wayland, systemd are all still issues today. I really don't want my development machine to match production. Production is a stripped down image for a reason. I spend far more time writing code than running it, so the VM performance hit is a moot point.
To be fair, Windows 10 has plenty of issues of its own that need tweaking, like the only way to disable Cortana being through group policy edits. And then they re-enable it on the next update, and give me some Candy Crush ads in my start menu. It's pretty infuriating, but still less so than manually fixing wifi drivers through USB boot drives.
I've been using Linux as my main OS for over 10 years now, mainly Ubuntu.
The best part is the performance and a realistic bare metal environment. You can profile code and actually trust the results. You can't say the same for virtualization, containers or other platforms.
Well, I use an Ubuntu host on a Dell XPS (since it just works without extra configuration).
But I also use an Ubuntu guest VM for development so that I can shove the guest onto a different machine easily (yeah, I know I should use continuous integration, but I haven't yet).
Using a VM definitely has a big performance cost... Even though I value the benefits I accept the costs.
I'm thinking of renting a Vultr bare metal server for an hour or two to do the test you're suggesting. What's the easiest way to spin up a VM with KVM from the command line these days? Is it still libvirt's virsh?
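(For what it's worth, virt-install on top of libvirt is still the usual CLI route; a rough sketch where the ISO, sizes and names are placeholders:)

    # create and boot a KVM guest via libvirt
    virt-install --name bench-vm --memory 8192 --vcpus 4 \
      --disk size=40 --cdrom ubuntu-18.04-live-server-amd64.iso \
      --os-variant ubuntu18.04 --graphics none
    # then manage it with virsh
    virsh list --all
    virsh start bench-vm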
> Here I am freaking out about a measly 38% perf hit when I could be running stuff concurrently and probably be able to run our entire test suite in 2 minutes on my current machine on Windows in a VM.
I think this is the point, given you can get an inexpensive desktop CPU with 6 cores and 12 threads, it wouldn't matter so much if you had a 20% perf tax with WSL when you have approx 12x the throughput.
You could always offload full test suites to an external CI box upon check-in and work with a subset of unit tests locally, once the suite becomes prohibitive for a single machine.
I love OS X, but it's just 'slow'. I have no problem admitting that.
It's much better than it was in the PowerPC days, when (IMO) OS X was almost too slow to be usable. We have it good these days with faster graphics and 12-core CPU's and super fast SSD's...
But you give these same advantages to Windows/Linux and OS X is just handily beaten in just about every benchmark you can think of.
I have been using Linux as my daily driver for a little over 2 years and day to day, I haven't had practically any problems. Every once in a while, I will mess with something and then things break, but I know I can fix it. Knowing I can find a fix to any day to day problem has become absolutely crucial to me; I can't go back to chasing windows error codes.
I’ve been doing an increasing amount of Node development in WSL, and the thing that helped me the most as far as disk performance goes was adding an exclusion path to Windows Defender so that it wouldn’t constantly scan my WSL working tree.
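(Roughly, from an elevated PowerShell prompt; the path below is only a placeholder for wherever your working tree actually lives:)

    # exclude the WSL working tree from Defender real-time scanning
    Add-MpPreference -ExclusionPath 'C:\Users\<you>\dev'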
Also, I get it that some kinds of testing “feel better” when done locally, but... why not have those run on a cloud VM?
Ruby has always been a nightmare on Windows. Every time a new junior freelance developer is onboarded in a Rails project everybody hopes he's got a Mac or Linux laptop. A significant number of them use Windows and we always suggest they run a Linux VM. Some of them resist the suggestion. We make clear we're not wasting time supporting their environment (we're deploying on Linux anyway) and that they are on their own. After not too much time they comply or leave. After some months some of them even end up buying a Mac or dual booting Linux (Windows is good at videogames.)
Still, AFAIK VirtualBox has a problem supporting multiple cores. A single-core VM is heavily handicapped, even for Ruby. Parallel testing wasn't in Rails yet, but it's definitely a thing.
Maybe your company should provide the optimal hardware which everyone must use? Each employee using a similar environment is a productivity boost. And I can tell you that if everyone used the same system, even windows, you would have fewer problems.
Sounds a lot simpler than letting everyone use their own environment..
Not my company (I'm a freelancer) and every freelancer brings his own laptop. Furthermore, all of us work remotely, and often even employees do. I've seen this happen everywhere with the small/medium customers I'm working for.
It never occurred to him to run Linux on his machine, for real work, and keep Windows in a VM for when he really needs it?
Surely someone, somewhere has explained that Windows runs faster in a VM than on bare metal, because Linux is better at file systems and buffer management? Or that relying on drivers from all over makes your system less reliable than the fixed set used in the VM?
Me, I cannot imagine running MSWindows on bare metal. It just feels wrong.
I gave Hyper-V way too much of my time the last time I tried setting up a development environment on windows (around September 2018). The Hyper-V tooling just isn't good enough for the developer use case compared to Virtualbox and VMWare, and once you enable Hyper-V, it owns the virtualization environment within Windows, meaning you can't run other hypervisors on the OS. That was a deal-breaker for me, as Docker for Windows wouldn't even run out of the box.
I'm not sure about the beginning, but it has used Hyper-V for a while now. I don't know about that bug; I used it with Windows 10 Pro. I guess bugs happen, and they'll probably fix it soon if it's widespread. I know that you need a special Docker package (Docker Toolbox) for VirtualBox, and the default Docker package from their website requires Hyper-V.
To me, subjectively, VMWare is much, much faster when it comes to GUI stuff. I suspect GPU virtualization plays a role there. I haven’t compared raw compute performance (build/test workflows, for instance) in a terminal, though.
I wonder how big a role Windows Defender, or other antivirus software, plays in the notorious inefficiency of reading and (especially) writing many small files on Windows. Sam, did you disable Windows Defender real-time protection, or add exceptions for it?
When I started running the spec suite in WSL, Defender's CPU usage went way up (so I gave it a break); pretty sure stuff would have been way worse with Defender on.
The prevailing recommendation though from the WSL team is to leave defender on for now. So I kind of cheated by disabling it.
I added parallel tests to our app last year. My 4 core/8 thread system runs the tests pretty well with a process count of 6. The suite runs about 4 times faster, which makes a real difference
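(Assuming the parallel_tests gem, which is what that worker count suggests, the invocation is roughly:)

    # run the RSpec suite across 6 worker processes
    PARALLEL_TEST_PROCESSORS=6 bundle exec rake parallel:spec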
Why do people run their server software on their desktop systems instead of, say, a proper server? Back in the day our desktop systems were VT102s, NCD X-Terminals or underpowered laptops, and developing on the server through rsh/ssh or even VNC was natural. Does Ruby somehow require locally running GUI processes?
There's certainly nothing stopping you from doing Ruby or JS dev work on a remote server. I run Windows on my laptop, so when I wanted to start contributing to Discourse on the side a few months ago, I ordered a VPS at DigitalOcean and did my work there over SSH. I used SSH tunneling to access the dev HTTP server.
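(Concretely, the tunnel is just something like this; host name and port are examples:)

    # forward local port 3000 to the dev server running on the VPS
    ssh -L 3000:localhost:3000 user@my-droplet
    # then browse to http://localhost:3000 on the laptop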
I’ve done similar things with PyCharm. I can run PyCharm locally but run a remote python process. .py files are automatically uploaded and I can debug over the network just fine. I know Eclipse, and presumably others, can do the same.
At my current company we do exactly this. It’s nice because it’s someone else’s problem to make sure my dev environment works, I can recreate my dev environment with a single command, and I have a shareable URL to my Rails instance that anyone on the eng VPN can use (makes it easy to show works in progress to teammates or to the product team).
It does kind of suck in some ways though. I can’t develop without internet, and debugging is worse than debugging locally. The debugging issue could probably be alleviated with some more investment in developer tooling/editor integrations.
Since linux became the OS of the internet, people have been able to run the same software on their own computer. It's more convenient to just run the "server software" on your own machine and you have full control over it.
> Because you can, Convenience, and compatibility.
You shouldn't shoot yourself in the foot because you can - it doesn't seem particularly convenient to wait > 10 minutes for tests because the laptop CPU is overheating when a proper development server would be twice as fast even with (clumsy) single-core tests.
Unless you are looking to contribute to a project or kick one off yourself, I would recommend sticking with the bigger players out there like NodeBB/Vanilla and so on.
Getting this stuff right is not easy; very few forum software platforms out there, for example, have a bug bounty like Discourse's (https://hackerone.com/discourse), and the last thing you want to do is deploy an XSS hive out there to the public.
As far as I know there is no small-medium well supported Discourse alternative written in Go. There is a Slack alternative though written in Go called mattermost which I can recommend.
I don't know what it depends on, but I run a small Discourse forum... and I can tell you it is a major PAIN!
I'm not a web developer (I develop mostly firmware) so I'm not into all this kind of stuff, but I know it is written in Ruby and runs in a Docker container.
It requires at least 10GB just to be able to install and 20GB just to update once in a while. It takes ages to compile. It's a PITA to update. Plus docker gets stuck with old images (not sure that's the right term) and you'll have to manually clear them.
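(The image cleanup at least is scriptable; a sketch, assuming the standard discourse_docker layout in /var/discourse:)

    # reclaim space from old images after an upgrade
    cd /var/discourse && ./launcher cleanup
    # or, more bluntly, let Docker prune anything unreferenced
    docker image prune -a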
20GB!! It's a freakin' forum software!!! Some triple A games are around 8GB.
I'm glad someone is asking for alternatives. I think all this overhead is because maybe redistributing a Ruby website is a PITA and requires a full environment installation. And hell yeah, I believe a simple test suite can take 10 minutes to run in that poorly conceived environment.
I don't want to seem rude, but to my ignorant eyes and those of its users, it's a forum software that requires a 10GB hard disk to install, and 20GB to update. I know that there will be a lot of technical details that will justify (or not) why, but, hey, that's a lot for a forum and there is no sane alternative way to install it.
I know. It surely sounds silly, but my guess is that it's distributed this way to avoid having to troubleshoot thousands of different possible configurations with different dependencies and OS libraries. The way it is currently distributed, it is not just "forum" software; it is the whole OS + database and all other required software, along with backups, etc.
I wonder what annoys the Discourse team more: actual problems with Ruby, or discussions of how much better language X would have been compared to Ruby. I think Sam made a good point on this sub-thread; the maturity and overall health of a project are more important than what language it uses.
(Disclosure: I did a few small contributions to Discourse a few months ago, but never got more deeply involved than that.)
VMs will always be shit at compilation tasks. All of the existing solutions out there (including Docker) will lead to slower builds, especially if you need to build an embedded Linux distribution. Native is better, for those tasks.
The haters he’s replying to are literally splitting hairs.
Wow, they shave off 3 mins to run an interpreted language test suite. This is the ABSOLUTE state of the industry right now.
Except the filesystem. Which is what everyone seems to forget. And that's the big thing we're talking about here in reference to Ruby and tests being slow.
The filesystem is native too, depending on what filesystem your host is running. If the host is running a copy-on-write filesystem like ZFS or Btrfs, Docker can use it natively.