That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.
> That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.
BitTorrent seems to work quite well for Linux ISOs, which are about the same size as container images, for obvious reasons.
IMO, the big difference is that, with BitTorrent, it's possible to very inexpensively add lots of semi-reliable bandwidth.
Nobody is going to accept worrying, in the middle of a CI run, about whether the torrent has enough people seeding. And your usual torrent download is an explicit action with an explicit client; how are people going to seed these images, and why would they? And what about the long tail?
We are talking about replacing Docker Hub and the like; what people "should" be doing and what happens in the real world are substantially different. If this hypothetical replacement can't serve basic existing use cases, it is dead at the starting line.
Archive.org does this with theirs. If there are no seeds (super common with their torrents; maybe a few popular files of theirs do have lots of seeds and that saves them a lot of bandwidth, but sometimes I wonder why they bother), then it'll basically do the same thing as downloading from their website. I've seen it called a "web seed". It's the only place I've seen use it, but evidently the functionality is there.
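For anyone curious, a web seed (BEP 19) is just an extra "url-list" entry in the torrent metainfo pointing at a plain HTTP(S) copy of the file, so clients fall back to the web server when nobody is seeding. A minimal sketch of building such a .torrent in Python; the file name, tracker, and URL are hypothetical examples, not anything Archive.org actually uses:

```python
# Sketch: a .torrent whose metainfo carries a BEP 19 "url-list" web seed,
# so clients can fall back to plain HTTP(S) when no peers are available.
# File name, tracker, and URL below are made-up examples.
import hashlib

PIECE_LEN = 256 * 1024  # 256 KiB pieces

def bencode(obj) -> bytes:
    """Encode ints, bytes/str, lists and dicts in bencoding."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, str):
        obj = obj.encode()
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in obj.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(obj))

def make_webseed_torrent(path: str, http_url: str) -> bytes:
    data = open(path, "rb").read()
    # Concatenated SHA-1 hashes of each fixed-size piece, per the spec.
    pieces = b"".join(hashlib.sha1(data[i:i + PIECE_LEN]).digest()
                      for i in range(0, len(data), PIECE_LEN))
    meta = {
        "announce": "udp://tracker.example.org:6969/announce",  # hypothetical
        "url-list": [http_url],  # BEP 19 web seed: plain HTTP fallback
        "info": {
            "name": path.rsplit("/", 1)[-1],
            "length": len(data),
            "piece length": PIECE_LEN,
            "pieces": pieces,
        },
    }
    return bencode(meta)

if __name__ == "__main__":
    torrent = make_webseed_torrent("ubuntu-24.04.iso",
                                   "https://example.org/ubuntu-24.04.iso")
    open("ubuntu-24.04.iso.torrent", "wb").write(torrent)
```

In the worst case (zero seeds) the swarm degrades to an ordinary HTTP download from that URL, which is why the functionality costs the publisher almost nothing to add.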
I'm pretty much convinced the people at Docker have explicitly made their "registry" not be just downloadable static files purely to enable the rent-seeking behavior we are seeing here...
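To be fair, the layers themselves already are more or less static files: the Registry HTTP API v2 serves them as immutable, digest-addressed blobs behind plain GETs, with a short-lived bearer token as the only extra moving part. A rough sketch, assuming the documented public endpoints (repository name is just an example, and multi-arch tags return an index that needs one more hop):

```python
# Sketch: fetching layer digests for an image from Docker Hub via the
# Registry HTTP API v2. Blobs are immutable and addressed by digest --
# exactly the kind of object a static mirror or web seed could serve.
import requests

REGISTRY = "https://registry-1.docker.io"
AUTH = "https://auth.docker.io/token"
REPO = "library/ubuntu"   # example repository
REF = "latest"

def pull_token(repo: str) -> str:
    # Anonymous pull token for the given repository scope.
    r = requests.get(AUTH, params={
        "service": "registry.docker.io",
        "scope": f"repository:{repo}:pull",
    })
    r.raise_for_status()
    return r.json()["token"]

def get_manifest(repo: str, ref: str, token: str) -> dict:
    # Note: multi-arch tags may return a manifest list/index instead,
    # whose entries must be fetched individually by digest.
    r = requests.get(f"{REGISTRY}/v2/{repo}/manifests/{ref}", headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.docker.distribution.manifest.v2+json",
    })
    r.raise_for_status()
    return r.json()

def get_blob(repo: str, digest: str, token: str) -> bytes:
    r = requests.get(f"{REGISTRY}/v2/{repo}/blobs/{digest}", headers={
        "Authorization": f"Bearer {token}",
    })
    r.raise_for_status()
    return r.content

if __name__ == "__main__":
    tok = pull_token(REPO)
    manifest = get_manifest(REPO, REF, tok)
    for layer in manifest.get("layers", []):
        print(layer["digest"], layer["size"])
```

So the technical gap to "just serve static files" is small; the friction is in the auth/rate-limit layer wrapped around it.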
BitTorrent would beg to differ.