
With the rack being so quiet, I wonder if they would ever sell these in smaller versions for more local installations. Would be pretty niche, but it would be interesting. Same kinds of markets that ui.com tries to serve.



Yeah, the rack is custom-designed to be very quiet, and it was mentioned early in the article. I'm sure they thought it through... but why? Do they want people to keep this in their office workspace? It seems like a niche to me. For smaller embedded devices, like home appliances and the like, fanless designs or the option of low or no fan speed (like modern GPUs) make sense, because those live in the household or office. But servers don't, and when they do they're still stowed away or their fans get swapped for quieter ones. Normally you want the fans running as fast (and therefore as noisy) as possible to keep the hardware as cool as possible, which improves heat dissipation and buys more performance before any throttling.


>That the rack is quiet wasn’t really deliberate (and we are frankly much more interested in the often hidden power draw that blaring fan noise represents).

They optimized for power. That roar of fan noise doesn't necessarily mean they're doing a better job cooling.


So they optimized for power, and the quiet followed. Why, though? I can see how it is interesting given the energy crisis, and given that they're not trying to squeeze out every last watt of performance regardless of energy cost. If that is the reason, it should be more cost-efficient than competitors over the longer term, at the cost of perhaps requiring more hardware to scale up.


The short answer is density. Smaller fans are exponentially less efficient (that is, dissipate exponentially more power for the same airflow), and datacenters (and racks within them) have a power budget. The traditional rack-and-stack approach is held captive by a geometry that really doesn't make sense (19" wide x 1RU/2RU); if power is spent on fans (and in a traditional cram-down rack, somewhere between 20-30% of power is spent on fans), it can't be spent on compute. Density is critical, as DC footprint is often at a premium.
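
To put rough numbers on that, the classic fan affinity laws say airflow scales roughly with D^3 * N (diameter and speed) while shaft power scales with D^5 * N^3, so a fan of half the diameter needs on the order of 16x the power to move the same air -- and that's before motor and duct losses. A back-of-envelope sketch (the 40mm vs 80mm comparison is my own illustrative choice, not a figure from the thread):

    # Idealized fan affinity laws: same fan design scaled geometrically,
    # ignoring motor efficiency, bearing losses, and duct impedance.
    # Airflow ~ D^3 * N, shaft power ~ D^5 * N^3.

    def power_ratio_same_airflow(d_small_mm: float, d_big_mm: float) -> float:
        """Power drawn by the small fan relative to the big fan at equal airflow."""
        # Equal airflow requires N_small = N_big * (d_big / d_small)**3,
        # so the power ratio collapses to (d_big / d_small)**4.
        return (d_big_mm / d_small_mm) ** 4

    # e.g. the ~40mm fans a 1RU chassis forces vs 80mm fans in a deeper sled:
    print(power_ratio_same_airflow(40, 80))   # ~16x the power for the same airflow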


> 20-30% of power is spent on fans

Wow. It makes sense if I think about it; heat is the primary byproduct and therefore a typical server rack is chock-full of chunks of spinning copper coils and plastic fins. But still... wow.


Even more galling about that 20-30%: it is really, really hard to measure. Believe it or not, you can't measure the draw of even the fans in a traditional rack-and-stack server enclosure (that is, they are either not on their own voltage regulators, or the regulators don't have PMBus support, or the PMBus support is not plumbed through the BMC -- or all of the above). And these aren't the only fans! In a traditional rack-and-stack, each server has AC power supplies -- and these supplies also have fans! These supplies are often entirely dark: they don't have PMBus support at all -- and definitely don't provide insight into how much power they are dissipating in their own fans. (Just empirically: if you don ear protection and walk up to your local 1RU/2RU server under load, those power supply fans will be cranking, and you will feel a hot, stiff breeze from the power supplies alone!) This is where all of these effects start to reinforce one another: the per-server conversion is stupidly inefficient and the geometry is stupidly inefficient (power supplies are forced to have even smaller fans than the servers!) -- all of which results in more draw, none of which is observable.

The Oxide rack is really the opposite in all of these dimensions: we do our AC/DC conversion in a power shelf and run DC on a busbar; the geometry of the sleds allows for very efficient 80mm fans; and (importantly!) every regulator is observable through PMBus and plumbed through our service processor and management network -- so you can see where it all goes.
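
As an aside, for anyone who wants to see what "observable through PMBus" looks like from the OS side on an ordinary Linux box (this is not the Oxide management interface, just the stock kernel view): where a regulator does expose PMBus telemetry, the kernel's pmbus drivers publish it through the generic hwmon sysfs ABI, and you can total up whatever power sensors are actually visible. On most rack-and-stack servers the fan and PSU rails simply never show up here, which is exactly the problem described above.

    # Minimal sketch: sum every power sensor the Linux hwmon ABI exposes.
    # hwmon reports power*_input values in microwatts.
    from pathlib import Path

    def visible_power_watts() -> float:
        total_uw = 0
        for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
            for sensor in hwmon.glob("power*_input"):
                try:
                    total_uw += int(sensor.read_text())
                except (OSError, ValueError):
                    pass   # sensor present but not readable right now
        return total_uw / 1_000_000

    if __name__ == "__main__":
        print(f"power visible to the OS: {visible_power_watts():.1f} W")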

And if you think I'm worked up about this problem, you should talk to sufferers of at-scale rack-and-stack who have been burned by this. ;)


I worked on designing computerized machinery in the past, and having sensors for as much stuff as possible just seemed like a good idea. We had quite a few current sensors for various components. Then we could do stuff like log in to the unit remotely when a customer had an issue, and diagnose based on if something was using too much current or zero current, and that beats having to send out a technician to go poke and prod. There are a lot of uses for that kind of data. And you've got a CPU right there to send it to.
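
Just to illustrate the kind of check that made remote diagnosis possible (component names and thresholds here are made up for the example, not from the actual machines):

    # Flag components whose current draw is zero or outside a nominal band.
    EXPECTED_AMPS = {          # hypothetical nominal operating ranges, in amps
        "spindle_motor": (2.0, 8.0),
        "coolant_pump":  (0.5, 3.0),
        "axis_drive_x":  (1.0, 6.0),
    }

    def diagnose(readings: dict[str, float]) -> list[str]:
        findings = []
        for name, (lo, hi) in EXPECTED_AMPS.items():
            amps = readings.get(name)
            if not amps:
                findings.append(f"{name}: no current -- likely dead, unplugged, or a blown fuse")
            elif amps > hi:
                findings.append(f"{name}: {amps:.1f} A (above {hi} A) -- possible jam or short")
            elif amps < lo:
                findings.append(f"{name}: {amps:.1f} A (below {lo} A) -- possible slipping belt or open winding")
        return findings

    print(diagnose({"spindle_motor": 0.0, "coolant_pump": 1.2, "axis_drive_x": 9.4}))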

> These supplies are often entirely dark

Good grief. What happens if a fan dies? The whole rack mysteriously shuts down? Or does the power supply just turn into a Minecraft lava block?


In the traditional rack-and-stack, if a power supply fan dies, the power supply will shut down -- which is why a server has two of them. (And indeed, in commodity servers, the fan in the power supply is what is most likely to fail.) All of this becomes a bit of a self-fulfilling prophecy: because there are so damned many of them, the power supplies in rack-and-stack servers are really not great -- which means they are more likely to fail. This is all sidestepped by having conversion consolidated in the rack, running high-voltage (54V) DC up and down a busbar, which each sled then converts. (That converter throws off heat -- but it can then leverage the high-quality fans in the sled.)
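
Rough arithmetic on why the consolidation helps (all efficiency figures below are assumptions for illustration, not measured numbers):

    # Compare wall draw for the same IT load under two conversion schemes.
    IT_LOAD_W = 15_000                 # useful compute load for the rack (assumed)

    per_server_psu_eff  = 0.90         # small 1RU/2RU AC supply at a typical load point (assumed)
    shelf_rectifier_eff = 0.96         # consolidated AC -> 54V DC power shelf (assumed)
    sled_dcdc_eff       = 0.98         # per-sled 54V -> point-of-load conversion (assumed)

    rack_and_stack_draw = IT_LOAD_W / per_server_psu_eff
    busbar_draw = IT_LOAD_W / (shelf_rectifier_eff * sled_dcdc_eff)

    print(f"per-server AC supplies:  {rack_and_stack_draw:,.0f} W from the wall")
    print(f"power shelf + DC busbar: {busbar_draw:,.0f} W from the wall")
    print(f"difference: {rack_and_stack_draw - busbar_draw:,.0f} W, before even counting the PSU fans")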


Thanks for the explanation.

Did you consider other or additional options, such as using the datacenter's waste heat to provide heating for a neighboring city, or Frore's AirJet?



