Similar in some ways, different in others. In the sense of not being a PC architecture, yes, it is like a mainframe. But in many other ways it's not at all like a mainframe.
It's similar to hyperscale infrastructure: it doesn't matter, as long as it looks like a PC architecture to the OS running inside a VM. The layers upon layers of legacy abstraction (firmware, BMC, drivers, BIOS, hypervisors) you get with a typical on-premises Dell/HP/SuperMicro/... server motherboard are responsible for cold starts lasting 20 minutes, random failures, weird packet loss, SMART telemetry malfunctions, etc.
This is the type of "PC architecture" cruft many customers have been yearning to ditch for years.
I’m not in the bare metal/data center business at the moment, but I was for more or less the last 25 years. I never had such issues. Maybe I was just lucky?
Maybe you were. :-) And maybe this is not for you or me (I haven't contacted their sales yet); it's not for everyone.
Personally, I have always been annoyed that the BIOS is clunky and every change requires a reboot, taking several minutes. As computers got faster over the years, this has gotten worse, not better. At the core of cloud economics is elasticity: don't pay for a service that you don't use. Wouldn't it be great to power down an idle server, knowing that it can be switched on seconds before you actually need it?
> Wouldn't it be great to power down an idle server, knowing that it can be switched on seconds before you actually need it?
Considering you would still need to boot the VMs once the Oxide system is up, I'm not sure this is such a big win.
And at a certain scale you’d probably have something like multiple systems and VMware vMotion or alternatives anyway. So if the ESXi host (for example) takes a while to boot, I wouldn’t care too much.
And, economics of elasticity - you’d still have to buy the Oxide server, even if it’s idle.
> Considering you would still need to boot the VMs once the Oxide system is up, I'm not sure this is such a big win.
To be honest, I'm using containers most of the time these days, but even the full-blown Windows VMs I'm orchestrating boot in less than 20s, assuming the hypervisor is operational. I think that's about on par with public cloud, no?
> [...] vMotion [...] ESXi.
Is VMware still a thing? Started with virsh, kvm/qemu a decade ago and never looked back.
> And, economics of elasticity - you’d still have to buy the Oxide server, even if it’s idle.
That's a big part of the equation indeed. This is where hyperscalers have an advantage that Oxide at some point in the future might enjoy as well. Interesting to see how much of that they will be willing to share with their customers...
Re VMware, it’s certainly still a thing in enterprise environments. Can kvm do things like live migration in the meantime? For me it’s the other way round, haven’t looked into that for a while ;)
How do you mean, Oxide might have that advantage as well in the future? As I understand it, you have to buy the hardware from them?
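To the live-migration question above: yes, KVM has supported live migration for many years via libvirt, which is the same stack virsh drives. A minimal sketch using the libvirt Python bindings; the connection URIs and domain name are placeholders, and shared storage visible to both hosts is assumed:

    import libvirt

    # Connect to the source hypervisor (local) and the destination host (over SSH).
    # "qemu+ssh://dest-host/system" and "web-vm-01" are placeholder names.
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://dest-host/system")

    dom = src.lookupByName("web-vm-01")

    # Live migration: the guest keeps running while memory pages are copied over.
    # Assumes the guest's disk lives on storage both hosts can reach.
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PERSIST_DEST
             | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
    dom.migrate(dst, flags, None, None, 0)

    dst.close()
    src.close()

The rough virsh equivalent, under the same assumptions, is "virsh migrate --live web-vm-01 qemu+ssh://dest-host/system".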
Ah yes, live migration, of course. We design "ephemeral" applications that scale horizontally and use load balancers to migrate. With 99% of traffic serviced from CDN cache, updates and migrations have a very different set of challenges.
As to your question, I meant to say that as volumes and economies of scale increase, they can source materials far cheaper than regular shops. Possibly similar to AWS, GCS, Azure, Akamai, etc. It would be nice if they were able and willing to translate some of those economies of scale into prices commensurate with comparable public cloud instances.
If you want more insight into all of the things that normally run on "PC architecture" - the 2.5 other kernels/operating systems running underneath the one you think you're running - https://www.youtube.com/watch?v=mUTx61t443A
Every PC has millions of lines of firmware code that often fails and causes problems. Case in point: pretty much all hyperscalers rip out all the traditional vendor firmware and replace it with their own, often partially open source.
The BMC is often a huge problem: it's a shifty architecture and extremely unsafe. Meta is paying for u-bmc development to have something a bit better.
Doing things like rack attestation, attesting a whole rack's worth of firmware when the rack is just a stack of PCs, is incredibly hard, so many companies simply don't do it. And doing it for the switch as well is even harder.
Sometimes the firmware runs during normal operation and takes over your computer, causing strange bugs (see SMM). If there is a bug anywhere in that stack, there are 10 layers of closed-source vendors that don't care about you.
Customers don't care whether it's a PC or not, but they do care that the machine is stable, the firmware is bug-free, and the computer is safe and not infected by viruses. Not being a PC enables that.
I imagine they are aware that this isn't a solution for many customers. A John Deere tractor makes a poor minivan. This isn't for you. That's fine. It's not for me either. That's ok. I don't need to poo-poo their efforts and sit and moan about how it's not for me.