
Existing vendors will provide rack integration services and deliver a turn-key solution like this. Also, vendors of virtualization management software have partnerships with hardware suppliers and would be happy to deliver fully integrated solutions if you're buying by the rack. The difference is that in those cases you have flexibility in the design, which seems to be missing here.

Proxmox and a full rack of Supermicro gear would not be as sophisticated, but the end result is pretty much the same, with, I imagine, far better bang for the buck.

I like it, but it doesn't seem like a big deal or revolutionary in any way.
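To put the DIY comparison in concrete terms: a Proxmox rack like that is typically driven through Proxmox VE's REST API (port 8006, /api2/json). A minimal sketch of inventorying such a cluster, assuming an API token you've created; the host name and token below are placeholders:

    # Minimal sketch: inventory a Proxmox VE cluster over its REST API.
    # Host and token are placeholders -- create a real token under
    # Datacenter -> Permissions -> API Tokens.
    import requests

    PVE_HOST = "https://pve.example.com:8006"         # placeholder
    TOKEN = "automation@pam!inventory=<secret-uuid>"  # placeholder

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False  # self-signed certs are the default on fresh installs

    # Enumerate cluster nodes, then the QEMU guests on each one.
    for node in s.get(f"{PVE_HOST}/api2/json/nodes").json()["data"]:
        name = node["node"]
        vms = s.get(f"{PVE_HOST}/api2/json/nodes/{name}/qemu").json()["data"]
        print(f"{name}: {len(vms)} VMs, CPU {node.get('cpu', 0):.0%}")

The point being: all the pieces exist, but you're the integrator.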



Those of us who've bought large "turn-key" solutions from Dell etc. have often discovered that it's actually just a cobbled-together bunch of things which may or may not work well together on a good day, depending on what you're trying to do. Just because it's all got the word "Dell" written on it doesn't mean that the components were all engineered by people who were working together to build a single working system.

When it breaks, good luck!


Total agreement. Another point: having the "Dell" name on the front doesn't give you a "throat to choke," as so many people seem to think is important. Unless you're operating at very large scale, the best you can do is threaten them that they don't get your next business. You're certainly not going to get help.

You're no worse off with Oxide from that perspective. Their open-source firmware means that the opportunity to pay somebody else to support you at least exists.


> that they don't get your next business.

Even small shops can use bad experience as leverage for credits and discounts, especially if the vendor has account managers. This is one of the (few) benefits of having a human involved in invoicing vs. self-serve.


The same is true of Oxide; it'll be up to actual experience to see how well it works. Oxide seems to have written their own distributed block storage system (https://github.com/oxidecomputer/crucible), and they have their own firmware, kernel, and hypervisor forks, etc. -- when any of that breaks, good luck!


> Oxide seems to have written their own ...

> when any of that breaks, good luck!

The premise is that you don't need luck; you can call Oxide. As you said, they wrote all of it, so they own every interaction and can diagnose all of it.

When I call Dell with a problem between my OS filesystem, the bus, and the hardware RAID, there are at least three vendors involved, so Dell doesn't actually employ anyone who knows all of it, and they can't fix it.

Sure, Oxide now needs to deliver on that support promise, but at least they are uniquely positioned to be able to do it.


That's the same premise as with all "turn-key" solutions. If it didn't come with software support, it wasn't really turn-key.

The rest comes down to execution. Sure, we all have high hopes for Oxide. Sure, we all hate established players like Dell.


> That's the same premise as with all "turn-key" solutions. If it didn't come with software support, it wasn't really turn-key.

Just about any vendor will sell your company a support contract.

The more interesting question is: can they back it up with action when push comes to shove? I suspect most people have plenty of stories of opening support tickets with big-name vendors that never get resolved. Then, through the grapevine, you find out that they won't fix it because they can't fix it. They might not even have access to the source code, or anyone on staff who has a clue about it, because it came from who knows where. Sales is happy to sell you the support contract, but that doesn't mean your problems can be fixed. BTDT.

From listening to the Oxide podcasts, my impression is that Oxide can actually fix anything in the stack they sell, which would make them vastly different from Dell et al.


Skill-wise, yes, for sure (except perhaps for storage -- I haven't heard them talk about that much). Bandwidth-wise, though?

I used to work for a company targeting Fortune 500s. At that level of spend, when a client had a problem, somebody got on a plane. Only a fraction of those problems escalated all the way to R&D, which is where Oxide's skills are. That's where VMware et al. are hard to beat.


The premise is that the bandwidth needed will be orders of magnitude less, because the engineering will be orders of magnitude better. The opportunity makes sense: we've long been climbing the local-maximum peak of enterprise-sales-driven tech behemoths built on a cobbled-together mix of open-source and proprietary pieces held together with bubblegum.

Can an engineering-first approach break into the cloud market? Hard to say, as enterprise sales is very powerful, and the numerous "worse is better" forces always loom large in these endeavours. That said, enterprise-sales-driven companies are fat, slow, and complacent. Oxide is lean and driven, and a handful of killer use cases and success stories is probably enough to sustain them, and could be the thin end of the wedge on long-term success. We can hope, anyway.


> Proxmox and a full rack of Supermicro gear would not be as sophisticated, but the end result is pretty much the same, with, I imagine, far better bang for the buck.

I think the question is how well they can do the management plane. Dealing with the "quirks" of a bunch of grey-box Supermicro stuff is always painful in one way or another. The drop-shipped, pre-cabled cabinet setups are definitely nice, but that's only a part of what Oxide is doing here. No cables and their own integrated switching sounds nice too (offerings from the big vendors, like Cisco UCS, are closer to this ballpark, but probably closer in cost too).

I suspect cooling and rack density could be better in the Oxide solution too; not having to conform to the standard form factors might afford them some possibilities (although that's just a guess, and even if they do improve there, these may not be the bottlenecks for many buyers).


> I think the question is how well they can do the management plane.

Docs:

* https://docs.oxide.computer/api/guides/responses

* https://github.com/oxidecomputer/omicron -- see "This repo houses the work-in-progress Oxide Rack control plane."
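For a flavor of what that control plane exposes, it's a plain JSON-over-HTTP API. A rough sketch of listing projects and handling an error per the "responses" guide above; the endpoint, token, and exact paths here are my reading of those docs, so verify against them before relying on any of it:

    # Rough sketch against the Oxide control plane API -- the URL,
    # token, and paths are assumptions based on the linked docs.
    import requests

    BASE = "https://oxide.example.com"   # placeholder silo endpoint
    TOKEN = "oxide-token-placeholder"    # placeholder bearer token

    s = requests.Session()
    s.headers["Authorization"] = f"Bearer {TOKEN}"

    resp = s.get(f"{BASE}/v1/projects")
    if resp.ok:
        for project in resp.json()["items"]:
            print(project["id"], project["name"])
    else:
        # Per the responses guide, errors come back as JSON and carry
        # a request_id you can hand straight to Oxide support.
        err = resp.json()
        print(err.get("error_code"), err.get("message"), err.get("request_id"))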


> Existing vendors will provide rack integration services and deliver a turn-key solution like this.

My experience with the likes of Dell is that they'll deliver it but they won't support it.

Sure, there's a support contract. And they try. But while they sell a box that says Dell, the innards are a hodgepodge of stuff from other places. So when certain firmware doesn't work with something else, they actually can't help, because they don't own it; they're just a reseller.



It's a fair point! I would certainly trust the opinion of Bryan Cantrill over my own as well.


AWS Outposts has been on the market for a long time. I'm sure there are differences, but to say existing cloud vendors were blind to on-prem requirements is a stretch.


Also, future datacenter builds are going to focus on specific applications, which means specific builds. I think Nvidia has a much better chance here with its SuperPOD than Oxide does. The target use case is pretty unclear.

On-prem buyers are pursuing cost reduction, and cost reduction targets things like, as one example, the crazy cost of GPU servers on the CSPs. Your run-of-the-mill stuff is very hard to cost-reduce.
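To put rough numbers on the GPU point (every figure below is an illustrative placeholder, not a quote):

    # Back-of-envelope: why GPU servers are the cost-reduction target.
    # Every number here is an illustrative assumption, not a quote.
    cloud_rate = 98.0          # $/hr, on-demand 8-GPU cloud instance
    hours = 24 * 365 * 3       # three years, always on
    cloud_3yr = cloud_rate * hours

    server_capex = 300_000.0   # comparable 8-GPU server purchase
    opex_per_year = 40_000.0   # power, cooling, space, ops
    onprem_3yr = server_capex + opex_per_year * 3

    print(f"cloud:   ${cloud_3yr:,.0f}")    # ~$2,575,440
    print(f"on-prem: ${onprem_3yr:,.0f}")   # ~$420,000
    print(f"ratio:   {cloud_3yr / onprem_3yr:.1f}x")

At multiples like that, the savings pitch writes itself for GPU-heavy pods; for general-purpose compute, the gap is far smaller.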

You can see their lack of getting it in their choice of Tofino2 as their switch. That's just a very bad choice that was almost certainly made for bad reasons.


Can you elaborate a bit? What you're saying sounds pretty interesting, but I'm too ignorant to read between the lines.


You don't build a new greenfield compute pod because you want to; you do it because it makes sense. Making sense is about cost, plus non-cost needs like data gravity and regulatory issues.

The cost case only works for GPU-heavy workloads, which this isn't -- wrong chassis, wrong network, etc.

Tofino2 is the wrong choice because, even when they made that choice, it would have been clear that it's DOA. Intel networking has not been a success center in, well, ever. That's a selection that could only have been made for nerd reasons, not for alignment with sensible business goals or for risk mitigation.

When you make an integrated solution, you'd better be the best, or close to the best, at everything. This does not seem to be the best at anything. I will grant that it is elegant and largely nicer than the hyperconverged story from other vendors, but in practical terms this is the 2000s-era rack-scale VxBlock from Cisco, or whatever Dell or HPE package today. A marginally better blade server is not a business.

They also make a big deal of, and have focused on, things that no one who actually builds datacenter pods cares about.

I actually hope they get bought by Dell, HPE, or Supermicro. Those companies could fix what's wrong here and would benefit a lot from the attention to detail and elegance on display.



