
I know price will vary wildly based on how many you’re buying, but does anyone have the roughest ballpark for how much it would cost to buy one (1), or like two?


They mentioned it really quickly in their Oxide and Friends podcast but, IIRC, prices start at $500k. Some of the audience asked if they were going to do a smaller configuration like a half or quarter rack, and they said they were looking into it but weren't sure of the business case.


> They mentioned it really quickly in their Oxide and Friends podcast but, IIRC, prices start at $500k. Some of the audience asked if they were going to do a smaller configuration like a half or quarter rack, and they said they were looking into it but weren't sure of the business case.

That strikes me as being in the right ballpark, but it's going to be tough to swallow since that's the lowest level of granularity.

For most orgs you'd be left paying for a lot of excess capacity you couldn't immediately put to use as you migrate workloads in. I guess in ~4 years, once you reach steady state and you're retiring/replacing these things, it all works out, but if you're migrating from VMware or something else in a traditional blade/chassis world, it's not like you can just wave a magic wand and move $500k worth of compute over to this thing at once.

If you're greenfielding something, that's a lot of cash to sink into compute you may not need for some time. Never mind your DR site(s) also needing that much...


So the real question is whether 1 Oxide rack can outcompete 2 or 3 racks of normal commodity hardware.

Or provide enough white glove after-sales support and written guarantees to peel away low end mainframe customers at a fraction of the price.


> So the real question is whether 1 Oxide rack can outcompete 2 or 3 racks of normal commodity hardware.

Given their management plane/API:

* https://docs.oxide.computer/api/guides/responses

the performance may be about the same, and CapEx as well (or maybe a little higher), but OpEx could be where you make it up in large(r)-scale operations.

And space efficiency is also not to be sneezed at: for some operations, DCs/compute can be placed anywhere because latency isn't that big of a deal, but in other cases you need to be close to certain things (trading), and real estate can get expensive.


In the spec sheet it looks like they have options with 16, 24 or 32 "compute sleds" (servers?).


Aka, a blade. Except the dimension is an odd one, 2u/2.

See pictures: https://pbs.twimg.com/media/FfT7MHoUoAE90QZ?format=jpg&name=...


I don’t know anything about buying servers, is that expensive?


It depends on how many servers you put in a rack. It's been years since I did this kind of work, but I'd say that an average rack with 20-25U of compute, 5-10U of storage, and 5U of networking will easily cost you $300k. I'm pretty sure that Oxide will be more of an Apple-esque experience, also on the price side, so a "normal" rack giving the same performance will be cheaper, but if you want Oxide you're looking for other features besides the pure HW.
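
Back-of-the-envelope, with made-up but plausible per-item prices (these are assumptions, not quotes):

  # Illustrative only; every price below is an assumption, not a quote.
  compute = 12 * 15_000   # ~24U of 2U compute nodes at roughly $15k each
  storage = 2 * 30_000    # a couple of storage shelves with drives
  network = 2 * 20_000    # ToR switches, optics, cabling
  print(compute + storage + network)  # 280000 -> in the ~$300k neighborhood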


At the leading edge, the configuration you just described is probably more like $800k


It's very expensive hardware. I think they are trying to bring TCO down on the operations side with a better control plane. I work in hyperconverged systems and it's a bunch of tradeoffs. Nothing I've configured has approached $500k, so their control plane and OS have to make a really good show of why this 2x-more-expensive cabinet is better than rack-and-stack Dell.


Let's just say I hope I'm interpreting something incorrectly, because if $500k is the minimum price and you match it to the minimum configuration found here:

https://oxide.computer/product/specifications

Then yeah, it's ridiculously expensive.

That said, compared to competitors it's in the right ballpark, but I have no idea how companies manage to spend so much money on this stuff. I'm the founder of my own tech startup, and I remember when I was looking at storage solutions and building out computing clusters there were companies charging absolutely insane prices.

I literally just spent about a week of my own time studying and learning as much as I could about it on my own and ended up building out my own custom solution for about 20-25% of the price these other companies were charging. I remember people trying to scare me out of it, saying that if I did my own solution I'd need to hire full-time operations people, and I'd always have to worry about things breaking, maintenance, headaches, nightmares, etc.

It's been over 10 years now: absolutely no headaches, no nightmares, and only very minimal maintenance needed.


Tell me more!


Yeah sure. My process to learn was literally: I went down to a local computer store, bought a cheap desktop computer, 6 cheap hard drives, an HBA, a RAID controller, and a bunch of cables, went back to the office, installed Ubuntu Server onto it, and practiced several "skills", like how to set up automated backups using rsync, how to physically install these components, and how to install mdadm for software RAID, comparing software RAID to using a RAID controller. I set up several drills for each task involving various types of failures and how to manage them, to the point that it became muscle memory.
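
A minimal sketch of the kind of rsync setup I mean, meant to be run from cron (the paths and backup host here are made up):

  #!/usr/bin/env python3
  # Sketch of a cron-driven rsync backup wrapper.
  # SRC, DEST and the log path are placeholders, not real paths.
  import datetime
  import subprocess
  import sys

  SRC = "/srv/data/"                          # what to back up
  DEST = "backup@backup-host:/backups/data/"  # where it goes, over ssh

  def run_backup() -> int:
      stamp = datetime.datetime.now().isoformat(timespec="seconds")
      # -a preserves permissions/ownership/times, --delete mirrors removals
      result = subprocess.run(["rsync", "-a", "--delete", SRC, DEST])
      with open("/var/log/backup.log", "a") as log:
          log.write(f"{stamp} rsync exited {result.returncode}\n")
      return result.returncode

  if __name__ == "__main__":
      sys.exit(run_backup())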

Some drills I practiced were setting up RAID 10 and having an email sent to me on hardware failure. So I went through the process of getting RAID 10 working using 4 of the drives, then physically pulled a hard drive out of the system while it was running to simulate a failure, swapped in a new hard drive, and made sure the RAID rebuild process took place.
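
The notification part doesn't need much: mdadm --monitor with MAILADDR set in mdadm.conf will send mail on its own, but a dumb cron check along these lines (the addresses and SMTP host are placeholders) covers the same ground:

  #!/usr/bin/env python3
  # Sketch of a degraded-array check meant to run from cron.
  # The From/To addresses and the SMTP host are placeholders.
  import smtplib
  import socket
  from email.message import EmailMessage

  def degraded_arrays():
      bad, current = [], None
      with open("/proc/mdstat") as f:
          for line in f:
              if line.startswith("md"):
                  current = line.split()[0]  # e.g. "md0"
              # Status lines look like "... [4/4] [UUUU]"; "_" marks a missing member.
              elif current and "[" in line and "_" in line.split("[")[-1]:
                  bad.append(current)
      return bad

  def alert(arrays):
      msg = EmailMessage()
      msg["Subject"] = f"RAID degraded on {socket.gethostname()}: {', '.join(arrays)}"
      msg["From"] = "raid-monitor@example.com"
      msg["To"] = "ops@example.com"
      with open("/proc/mdstat") as f:
          msg.set_content(f.read())
      with smtplib.SMTP("localhost") as s:
          s.send_message(msg)

  if __name__ == "__main__":
      bad = degraded_arrays()
      if bad:
          alert(bad)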

Once I was confident enough, I went to thinkmate.com and bought two JBOD expansion chassis, each of which supported 78 drives, and filled each of them up with 5 TB Seagate drives to get a total of 390 TB. The JBODs are managed by a 1U server running Ubuntu Server and software RAID using mdadm. I also bought 8 compute servers that were considered high-density and could fit in 4U. I got a Cisco router for Internet and networked everything within the data center together using a used InfiniBand switch that I bought off of eBay. I also got a KVM so I could remotely access all of the systems. If there's one thing I would change today, it would be to use ZFS.

I remember comparing the cost of this DIY setup to some premade solutions like EMC, and their cost was astronomical, like 3-4x what I ended up paying. I also remember watching Linus Tech Tips and Level1Techs on YouTube; they both had good content about how they managed their storage that was fairly reasonable, if slightly on the pricey side, and I learned quite a bit from it.

At any rate, the bottom line is that it's not trivial to learn all this stuff by any means, and I remember having some serious frustrations over just how bad and demoralizing some error messages can be, but it's not thaaaat difficult either. In my situation, my company is self-funded, with no venture capital or outside investors of any kind, so every dollar my company spends is a dollar out of my own pocket. You'd better believe that when I'm starting my business I'm not going to blow hundreds of thousands of extra dollars unless I absolutely have to.

About 5 years ago I managed to use my own storage system to ditch Dropbox in favor of Nextcloud, which is just leaps and bounds superior. I remember getting so frustrated with Dropbox because I wanted to do something as simple as create a sub-directory and grant read-only permissions on it to some accounts. I also wanted to do simple things like create a limited-permission account to use in our conference room for presentations or demos, but the only way to do that with Dropbox was to create a full account and pay full price for it.

Nextcloud works amazingly well, has all kinds of cool plugins, and gives our organization a lot of flexibility that Dropbox doesn't, and I can make as many accounts as I want for whatever reason I want.


400 TB but no ZFS, and lots of hardware and software but no support.

And not a single mention of backups.

Good luck with that.


It's since grown over the past 10 years to about 2 PB of storage.

I mentioned I use rsync for backups.


rsync to where?


That's a lot more involved. We're a quant firm with servers colocated globally at various exchanges, so our data is distributed across a lot of systems. The main and original purpose of this storage system is to provide one centralized repository for all of this global data, so that we can run backtests and perform analytics locally as opposed to having to hit our Australian system when we need ASX data, hit our German system when we need Xetra data, etc.

So different backups happen in varying directions, with the colocated servers using the main storage system to back themselves up, and the main storage system in turn backing up portions of itself onto the colocated servers.

Not everything gets backed up, however, and most of what does get backed up can be compressed quite significantly. We mostly just back things up that can't be recovered from third parties or that are proprietary to our organization.
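
Mechanically it's nothing fancy, just a small table of what gets pushed in which direction, each entry driven by rsync from cron. A rough sketch (every host and path here is invented for illustration):

  #!/usr/bin/env python3
  # Sketch of the "backups in varying directions" idea: a table of
  # source -> destination pairs, each mirrored with rsync from cron.
  # Every host and path below is invented for illustration.
  import subprocess

  JOBS = [
      # colo servers push captured market data to the central store
      ("trader@asx-colo:/data/asx/",     "/storage/marketdata/asx/"),
      ("trader@xetra-colo:/data/xetra/", "/storage/marketdata/xetra/"),
      # the central store pushes proprietary work back out as an off-site copy
      ("/storage/research/",             "trader@asx-colo:/backup/research/"),
  ]

  for src, dst in JOBS:
      # -z compresses over the wire; most of this data compresses well.
      subprocess.run(["rsync", "-az", "--delete", src, dst], check=False)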


A follow-up question would be: how much are equivalent solutions from established rack providers?


I would assume a million dollars. Ballpark.


I mean, the raw hardware surely costs around $100k, and there's no way it costs more than ten million, so you're always going to be right with that "ballpark" qualification.


I vaguely remember listening to their podcast, and my impression was that it starts around $500k.

Add a few add-ons, a service contract, support, etc., and you'd be there.

I want such a rack, but the average power draw is listed at around 12 kW. Unbelievable.
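
For scale, at a placeholder electricity rate of $0.10/kWh that works out to roughly:

  # Rough arithmetic; the electricity rate is a placeholder, not a real tariff.
  avg_draw_kw = 12
  kwh_per_month = avg_draw_kw * 24 * 30   # 8,640 kWh in a 30-day month
  print(kwh_per_month * 0.10)             # ~$864/month at $0.10/kWh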



