Thanks for your interest in chatting about our work.
The replay protection is intended to prevent clients from tricking the server into sending more than permitted by TCP congestion control. I.e., it protects the network and other clients from DoS.
Protecting TCP congestion control prevents a couple of attacks:
* a selfish client may consume more than its fair share of the bandwidth
* a malicious client may use the server to amplify a DoS attack. E.g., in the absence of replay protection, an attacker limited to only 1 Mb/s of bandwidth could cause the server to consume 10-20 Mb/s (other examples of amplification include smurf attacks; some TCP implementations from the 1990s could also be tricked into over-sending). With replay protection, the attacker would actually need the full 10-20 Mb/s of bandwidth to cause that level of damage.
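To put a number on that second point, the amplification factor is just the ratio of induced server traffic to the attacker's own traffic. A tiny sketch in Python (the 15 Mb/s figure is just the midpoint of the 10-20 Mb/s range above, not a measurement):

    # Back-of-the-envelope amplification arithmetic (illustrative numbers).
    attacker_bw_mbps = 1.0   # bandwidth the attacker actually controls
    induced_tx_mbps = 15.0   # traffic the attacker can make the server emit

    amplification = induced_tx_mbps / attacker_bw_mbps
    print(f"amplification without replay protection: ~{amplification:.0f}x")

With replay protection, each server transmission has to be triggered by a fresh, unreplayed client packet, so the ratio collapses to roughly one.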
Note that the HMAC protects against related attacks that are based on spoofing the client IP.
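As a rough sketch of that idea (this is not the exact Trickles packet layout; the helper names and the use of SHA-256 are my own illustrative choices), the server MACs the state it hands back to the client together with the client's IP, so a continuation presented from a spoofed or different address fails verification:

    import hmac, hashlib, os

    SERVER_KEY = os.urandom(32)  # per-server secret; never leaves the server

    def mac_continuation(continuation: bytes, client_ip: str) -> bytes:
        """Bind the server-issued state to the client's IP with an HMAC."""
        msg = continuation + client_ip.encode()
        return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

    def verify_continuation(continuation: bytes, client_ip: str, tag: bytes) -> bool:
        """Reject continuations that were forged or replayed from another address."""
        expected = mac_continuation(continuation, client_ip)
        return hmac.compare_digest(expected, tag)

An attacker who does not know SERVER_KEY cannot mint a valid tag, and a tag issued for one IP does not verify when presented from another.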
The amount of state stored at the server is proportional to the bandwidth. E.g., a server with a fatter pipe will need larger Bloom filters to handle the higher packet rate. In Trickles, the amount of bandwidth-proportional state is mathematically clean to compute using the standard Bloom filter collision equations. By comparison, TCP holds some fixed overhead per connection, consisting of the TCB (TCP control block, for congestion control state) and socket/fd structs. Every TCP connection also buffers a variable amount of sent but unacknowledged data (proportional to window size).
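To make "mathematically clean" concrete, here is a small sizing sketch using the standard Bloom filter equations, m = -n ln(p) / (ln 2)^2 and k = (m/n) ln 2. The link rate, packet size, window length, and false-positive target below are placeholders, not numbers from the paper:

    import math

    def bloom_bits(n_items: int, fp_rate: float) -> int:
        """Standard Bloom filter sizing: m = -n ln(p) / (ln 2)^2 bits."""
        return math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))

    def optimal_hashes(m_bits: int, n_items: int) -> int:
        """Optimal number of hash functions: k = (m/n) ln 2."""
        return max(1, round((m_bits / n_items) * math.log(2)))

    # Size the replay filter from the outbound link rate (placeholder numbers).
    link_mbps = 100.0     # server's outbound bandwidth
    pkt_bytes = 1500      # rough packet size
    window_sec = 60.0     # how long the filter must remember packets
    fp_rate = 1e-4        # acceptable false-positive (collision) rate

    pkts_in_window = int(link_mbps * 1e6 / 8 / pkt_bytes * window_sec)
    m = bloom_bits(pkts_in_window, fp_rate)
    k = optimal_hashes(m, pkts_in_window)
    print(f"{pkts_in_window} packets -> {m / 8 / 1024:.0f} KiB filter, k={k} hashes")

Doubling the link rate doubles the number of packets that fall within the window, and hence the filter size; that is exactly the bandwidth-proportional state described above.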
The fixed per-connection overhead of TCP alone requires asymptotically more server-side state than Trickles. Since the size of TCP send buffers varies according to window size, which is determined by protocol dynamics, it’s tricky to construct a model of how much server state will be consumed by socket buffers. In our experiments, the socket buffers dominated server-side memory consumption.
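As a crude way to see why the socket buffers dominate, here is a toy model of per-connection TCP memory. The struct sizes and the 32 KiB average window are illustrative placeholders, not measurements from our experiments:

    # Toy model of server-side TCP memory (all numbers are illustrative).
    TCB_BYTES = 1500               # TCP control block (congestion state, timers)
    SOCK_FD_BYTES = 700            # socket + file descriptor structures
    AVG_WINDOW_BYTES = 32 * 1024   # sent-but-unacked data buffered per connection

    def tcp_server_state(n_connections: int) -> int:
        fixed = n_connections * (TCB_BYTES + SOCK_FD_BYTES)
        buffers = n_connections * AVG_WINDOW_BYTES  # grows with window size
        return fixed + buffers

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} connections -> {tcp_server_state(n) / 2**20:.0f} MiB")

In this toy model the window-sized send buffer dwarfs the fixed TCB and socket/fd overhead, and total state grows linearly with the number of connections, whereas the Trickles replay filter above grows only with bandwidth.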
- Alan Shieh