I've found many people forget (or never knew) that pipes have a buffer, and that buffers are very useful when you are dealing with potentially bursty throughput.
On a previous team we shipped around a lot of ZFS snapshots, where both the read performance from disk (which varies with the ARC and filesystem metadata) and the network (we occasionally used VPN tunnels over public internet links) could fluctuate. Using pv to get even larger pipe buffers than the default was often a huge performance equalizer: readers read as fast as they can, writers write as fast as they can, and the buffer smooths out the variability in the middle.
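As a sketch of the kind of pipeline I mean (the dataset names, host, and 256m buffer size are all made up for illustration; pv's -B flag sets the size of its internal buffer):

    zfs send tank/data@snap1 \
        | pv -B 256m \
        | ssh backup-host zfs receive backup/data

The zfs send side can burst ahead whenever the VPN link stalls, and the link can drain the buffer whenever the disk side is the slow one.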
It's not just the buffer, it's the PIPE. Specifically, pipelining that streams one file after another, with no idle period spent waiting for the round-trip acknowledgement of one file's receipt before sending the next. Using rsync over ssh has the same massive benefit for transferring many files, even when there is no existing data on the receiving side to make use of its differential transfer mechanism.
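To make the round-trip point concrete (the paths and hostname are hypothetical):

    # Round-trip bound: a new session, and a wait, per file
    for f in *.dat; do scp "$f" user@remote:/srv/data/; done

    # Pipelined: one continuous stream carrying every file
    rsync -a -e ssh ./ user@remote:/srv/data/

On a high-latency link the loop spends most of its time waiting for acknowledgements; the single rsync stream barely notices the latency.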
(Those extra mechanisms are usually beneficial in WAN scenarios as well, except when you have high-speed links, in which case blasting the full files across may be faster than doing the I/O to compute comparisons and skip transfers.)
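rsync has a switch for exactly that case: --whole-file (-W) skips the delta-transfer algorithm and just sends the bytes (it's already the default when both paths are local). Paths here are illustrative:

    rsync -a --whole-file /srv/data/ user@remote:/srv/data/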
https://linux.die.net/man/1/pv