Hacker News

800Gbps via OSFP and QSFP-DD are already a thing. Multiple vendors have NICs and switches for that.



16x PCIe 4.0 is ~32 GB/s and 16x PCIe 5.0 should be ~64 GB/s, so how is any computer using 100 GB/s?


I was talking about Gigabit/s, not Gigabyte/s.

The article, however, actually talks about Terabyte/s scale, albeit not over a single node.


800 gigabits per second is 100 gigabytes per second, which is still more than the 64 gigabytes per second of bandwidth that PCIe 5.0 x16 provides.

You said there were 800-gigabit network cards; I'm wondering how that much bandwidth makes it to the card in the first place.

> The article however actually talks about Terabyte/s scale, albeit not over a single node.

This doesn't have anything to do with what you originally said; you were talking about single 800 Gb ports.
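The arithmetic in this exchange can be sketched quickly. This is a back-of-the-envelope check using the commonly quoted theoretical link rates (128b/130b encoding for PCIe 3.0-5.0, no protocol overhead beyond that); the `pcie_x16_gbytes` helper is made up for illustration, not a real API.

```python
# Back-of-the-envelope comparison of PCIe x16 bandwidth vs. an 800G NIC.
# Numbers are theoretical line rates, not measured throughput.

def pcie_x16_gbytes(gt_per_s, encoding=128 / 130):
    """Usable GB/s of a x16 link, given per-lane GT/s and line encoding."""
    return gt_per_s * encoding / 8 * 16  # bits -> bytes, times 16 lanes

pcie4 = pcie_x16_gbytes(16)   # PCIe 4.0: ~31.5 GB/s
pcie5 = pcie_x16_gbytes(32)   # PCIe 5.0: ~63.0 GB/s

nic_800g = 800 / 8            # 800 Gbit/s = 100 GB/s

print(f"PCIe 4.0 x16: {pcie4:.1f} GB/s")
print(f"PCIe 5.0 x16: {pcie5:.1f} GB/s")
print(f"800G NIC:     {nic_800g:.1f} GB/s")
```

So a single x16 slot tops out below 100 GB/s until PCIe 6.0 (which roughly doubles the per-lane rate), which is why the thread turns to 32-lane devices and multi-slot cards.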


I'm not aware of any 800G cards, but FYI a single Mellanox card can use two PCIe x16 slots to avoid NUMA issues on dual-socket servers: https://www.nvidia.com/en-us/networking/ethernet/socket-dire...

So the software infrastructure for using multiple slots already exists and doesn't require any special configuration. Some cards can even use PCIe slots across multiple hosts. No idea why you'd want to do that, but you can.


Yes, apparently I was mistaken about the NICs. They don't seem to be available yet.

But it's not a PCIe limitation. There are PCIe devices out there that use 32 lanes, so you could achieve that bandwidth even on PCIe 5.0.

https://www.servethehome.com/ocp-nic-3-0-form-factors-quick-...


Can you show me an 800G NIC?

The switch side is fine; I'm buying 64x800G switches. But NIC-wise I'm limited to 400 Gbit.


Fair enough, it seems I was mistaken about the NIC. I guess that has to wait for PCIe 6.0, which should arrive soon-ish.



