> When you're deploying VMs, which is the use case here, the substrate OS becomes significantly less important. Those VMs will mostly just be Linux.
Now you need to know both the OS they chose and the OS you chose...
(No, I don't believe it'll be 100% hands-off for the host. This is an early-stage product with a lot of custom parts: their own distributed block storage, hypervisor, and so on.)
This is true for other hypervisors too. Enterprises are still paying hundreds of millions to VMware; who knows what's going on in there?
I wouldn't have picked OpenSolaris, but it's a lot better than other vendors that are either fully closed source or thin proprietary wrappers over Linux with spotty coverage, where you're not allowed to touch the underlying OS for fear of disrupting the managed product.
What's more important is that the team actually knows illumos/Solaris inside out. You can work wonders with a less-than-ideal system. That said, illumos is high quality, in my opinion.
Seems risky considering how small a developer pool actively works on illumos/Solaris. The code is most likely well engineered and correct, but huge teams all around the world are deploying on vast pools of Linux compute and contributing fixes back to Linux.
They hit a bug in the database they're using that came down to a Go system library misbehaving specifically on illumos. They've got enough engineering power to deal with such a thing, but damn...
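For context on how a bug can be illumos-specific in the first place: Go compiles in whole platform-specific source files via build tags, so the standard library and runtime take genuinely different code paths per OS (the network poller uses event ports on illumos/Solaris but epoll on Linux, for instance). A minimal sketch of the mechanism, not the actual bug they hit, which isn't detailed in the thread:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS reports the OS the binary was compiled for.
	// The standard library selects entire source files per target OS
	// using //go:build tags, so an identical-looking API call can run
	// completely different code on illumos than on Linux.
	switch runtime.GOOS {
	case "illumos", "solaris":
		fmt.Println("illumos/Solaris code path (e.g. event ports)")
	case "linux":
		fmt.Println("Linux code path (e.g. epoll)")
	default:
		fmt.Printf("other platform: %s\n", runtime.GOOS)
	}
}
```

The illumos paths get a tiny fraction of the production exercise the Linux paths do, which is exactly the risk being described.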
Linux grew up in the bedrooms of teenagers. It was risky in the era of 486s and Pentiums. The environment and business criticality of a $1-2M rack-size computer are quite different.
I had similar thoughts about VMware (large installations) back in the day. Weird proprietary OS to run other operating systems? Yet they turned out fine.
This appears to be a much better system than VMware, it is free software, and it builds on a free software operating system whose lineage predates Linux.
I say this in the most critical way possible, as someone who has built multiple Linux-based "cloud systems", and as a GNU/Linux distribution developer: I love it!
It was a genuinely risky choice for companies in the 1990s and early 2000s to put all their web infrastructure on Linux on commodity hardware instead of proprietary Unix or Windows servers. Many did it when their website being up was absolutely mission critical. Lots did it on huge server farms. It paid off very quickly, but it erases history to suggest that it didn't take huge amounts of guts, savvy, and agility to even attempt it.
Indeed, for me GNU/Linux was always a cheap way to have UNIX at home, given that Windows NT's POSIX support was never that great.
The first time I actually saw GNU/Linux powering something in production was in 2003, when I joined CERN and they were replacing their use of Solaris; eventually, alongside Fermilab, they came up with Scientific Linux in 2004.
Later, at Nokia, it took until 2006 for Red Hat Linux to be considered a serious alternative to their HP-UX infrastructure.
Completely tangential, but this reminds me of an interview I had for my first job out of college in 1995. I mentioned to the interviewer that I had some Linux experience. "Ah, Linux" he said. "A cool little toy that's gonna take over the world".
In hindsight, of course, it was remarkably prescient. This from a guy at a company that was built entirely around SGI at the time.
This is a skewed view: the critical piece that made Linux "enterprise-ish" was the memory management system contributed by IBM, which later figured in the SCO lawsuit.