The network bandwidth between nodes is a bigger limitation than compute. The newest Nvidia cards now come with 400 Gbit/s buses to communicate between them, even on a single motherboard.
Compared to SETI@home or Folding@home, this would be glacially slow for AI models.
No, the problem is that with training, you do care about latency, and you need a crap-ton of bandwidth too! Think of the all_gather; think of the gradients! Inference is actually easier to distribute.
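For a sense of scale, here's a rough back-of-envelope sketch (the 70B-parameter model, fp16 gradients, and ring all-reduce are illustrative assumptions, not numbers from anyone upthread):

    # Rough per-step gradient traffic for data-parallel training.
    # Assumptions (illustrative): 70B params, fp16 gradients, ring all-reduce,
    # which moves roughly 2 * (N-1)/N * the payload per node, i.e. ~2x gradient size.
    PARAMS = 70e9                        # assumed model size
    GRAD_BYTES = PARAMS * 2              # fp16 gradients -> ~140 GB
    PER_NODE_TRAFFIC = 2 * GRAD_BYTES    # ~280 GB moved per node per step

    for name, gbit_per_s in [("home broadband, 1 Gbit/s", 1),
                             ("datacenter NIC, 400 Gbit/s", 400)]:
        seconds = PER_NODE_TRAFFIC * 8 / (gbit_per_s * 1e9)
        print(f"{name}: ~{seconds:,.0f} s per step just moving gradients")

That's the bandwidth half of the problem, before latency even enters the picture.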
Yeah, but if you can build topologies based on latency you may get some decent tradeoffs. For example, with N=1M nodes each doing batch updates in a tree manner, i.e. the all-reduce is layered by latency between nodes.
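A minimal sketch of that layered idea (the node names, grouping, and equal-sized clusters are assumptions made up for illustration; a real version would weight by cluster size and run the stages over the actual network):

    from collections import defaultdict

    def layered_all_reduce(grads_by_node, cluster_of):
        """Average gradients inside low-latency clusters first, then across
        clusters over the slow links, then broadcast the result back down."""
        # Stage 1: average within each low-latency cluster (fast links).
        clusters = defaultdict(list)
        for node, grad in grads_by_node.items():
            clusters[cluster_of[node]].append(grad)
        cluster_avgs = [
            [sum(vals) / len(cluster_grads) for vals in zip(*cluster_grads)]
            for cluster_grads in clusters.values()
        ]
        # Stage 2: average across clusters (slow, high-latency links).
        # Plain averaging here assumes equal-sized clusters.
        global_avg = [sum(vals) / len(cluster_avgs) for vals in zip(*cluster_avgs)]
        # Stage 3: broadcast the global average back to every node.
        return {node: global_avg for node in grads_by_node}

    # Four nodes in two latency clusters.
    grads = {"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [5.0, 6.0], "d": [7.0, 8.0]}
    cluster_of = {"a": 0, "b": 0, "c": 1, "d": 1}
    print(layered_all_reduce(grads, cluster_of))   # every node ends with [4.0, 5.0]

The intra-cluster reductions happen over the fast, low-latency links, so only one already-reduced gradient per cluster has to cross the slow inter-cluster hops each step.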
Of course I will want to be distracted by my AI, but first it has to track everything I say and do. And I wouldn't mind if it talked some smack about my colleagues.