It's similar to hyperscale infrastructure: it doesn't matter, as long as it looks like a PC architecture to the OS running inside a VM. The layers and layers of legacy abstraction (firmware, BMC, drivers, BIOS, hypervisors) you get with a typical on-premises Dell/HP/SuperMicro/... server motherboard are responsible for cold starts lasting 20 minutes, random failures, weird packet loss, SMART telemetry malfunctions, etc.
This is the type of "PC architecture" cruft many customers have been yearning to ditch for years.
I'm not in the bare-metal/data-center business at the moment, but I was for more or less the last 25 years. I never had such issues. Maybe I was just lucky?
Maybe you were. :-) And maybe this is not for you or me (I haven't contacted their sales yet); it's not for everyone.
Personally, I have always been annoyed that the BIOS is clunky and every change requires a reboot, taking several minutes. As computers got faster over the years, this has gotten worse, not better. At the core of cloud economics is elasticity: don't pay for a service that you don't use. Wouldn't it be great to power down an idle server, knowing that it can be switched on seconds before you actually need it?
> Wouldn't it be great to power down an idle server, knowing that it can be switched on seconds before you actually need it?
Considering you would still need to boot the VMs once the Oxide system is up, I'm not sure this is such a big win.
And at a certain scale you’d probably have something like multiple systems and VMware vMotion or alternatives anyway. So if the ESXi host (for example) takes a while to boot, I wouldn’t care too much.
And, regarding the economics of elasticity: you'd still have to buy the Oxide server, even if it's idle.
> Considering you would still need to boot the VMs once the Oxide system is up, I'm not sure this is such a big win.
To be honest, I'm using containers most of the time these days, but even the full-blown Windows VMs I'm orchestrating boot in less than 20s, assuming the hypervisor is operational. I think that's about on par with public cloud, no?
> [...] vMotion [...] ESXi.
Is VMware still a thing? Started with virsh, kvm/qemu a decade ago and never looked back.
> And, regarding the economics of elasticity: you'd still have to buy the Oxide server, even if it's idle.
That's a big part of the equation indeed. This is where hyperscalers have an advantage that Oxide might enjoy as well at some point in the future. It will be interesting to see how much of that they will be willing to share with their customers...
Re VMware, it's certainly still a thing in enterprise environments. Can KVM do things like live migration by now? For me it's the other way round; I haven't looked into that for a while ;)
What do you mean by Oxide possibly having that advantage as well in the future? As I understand it, you have to buy the hardware from them?
Ah yes, live migration, of course. We design "ephemeral" applications that scale horizontally and use a load balancer to migrate. With 99% of traffic serviced from the CDN cache, updates and migrations have a very different set of challenges.
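For reference, KVM has done live migration via libvirt for a long time. Here's a minimal sketch using the libvirt Python bindings; it assumes shared storage between the two hosts, and the hostnames and domain name are just placeholders:

    import libvirt

    # Connect to the source and destination hypervisors
    # (qemu+ssh is one common transport for the destination).
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://dest-host.example.com/system")

    # Look up the running guest on the source host.
    dom = src.lookupByName("my-guest")

    # VIR_MIGRATE_LIVE keeps the guest running while its memory
    # is copied over to the destination host.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

The rough command-line equivalent is "virsh migrate --live my-guest qemu+ssh://dest-host.example.com/system".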
As to your question, I meant to say that as volumes increase and economies of scale kick in, they can source materials far cheaper than regular shops. Possibly similar to AWS, GCS, Azure, Akamai, etc. It would be nice if they were able and willing to translate some of those economies of scale into prices commensurate with comparable public cloud instances.