Hacker News

If A does not receive 4, it cannot know B received 3. If B did not receive 3, it can assume A did not receive 2. If A did not receive 2, then B should assume the attack is off. And if A thinks B is not attacking, then A should not attack either.


But in this scenario A did receive 4, so it can assume B received 3. And in this scenario B received 3.

Presumably A will keep sending 3 over and over using a timeout until it receives 4.


I think the major issue is a design in which the loss of a final message causes a decision change. It ends up unraveling everything. TCP works because the receiver accepts that the sender might not get an ACK.


And what happens if B attacks before A receives the 4 (B did receive the 3 after all)?


A attacks too, since, being a thinking human, he knows that he previously received 2.

I think most humans would attack with a reasonable degree of confidence if they had gotten at least one ack in the past.


It's a bit hard to express this in plain English, so maybe it makes sense to just write out the pseudo-code.

A: attack if acks from B > 0

B: attack if acks from A > 0

Let's look at the current scenario:

A --> B : Original message (not an ack)

A <-- B : Ack from B (A is now committed to attacking)

A -/> B : Ack from A is lost, B does not attack

Okay, let's change it up:

A: attack if acks from B > 1

B: attack if acks from A > 0

A --> B : Original message (not an ack)

A <-- B : Ack from B (A is not yet committed to attacking)

A --> B : Ack from A (B is committed to attacking)

A </- B : Ack from B is lost, A does not attack

The essential problem (which I failed to explain correctly) is that, while you can always start from a fixed set of interactions and work out a protocol that would have worked for that particular interaction, you can't go the other direction (start with a protocol and throw arbitrary interactions at it).
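The two rule sets above can be sketched as a small simulation (my own illustrative sketch, not from the thread; the function name and message numbering are assumptions). In each case, dropping the last message of the run that would satisfy both rules leaves the generals in disagreement:

```python
def outcome(a_threshold, b_threshold, messages_delivered):
    """Count acks each side receives from an alternating exchange.

    The exchange alternates A->B, B->A, A->B, ...  Message 0 is the
    original order (not an ack); every later message is an ack.
    `messages_delivered` is how many messages arrive before one is lost.
    Returns (A attacks?, B attacks?) under "attack if acks > threshold".
    """
    acks_to_a = 0  # acks A received from B
    acks_to_b = 0  # acks B received from A
    for i in range(1, messages_delivered):  # skip message 0, the order
        if i % 2 == 1:
            acks_to_a += 1  # B -> A
        else:
            acks_to_b += 1  # A -> B
    return acks_to_a > a_threshold, acks_to_b > b_threshold

# Rule set 1: A attacks if acks from B > 0, B attacks if acks from A > 0.
# A full run is 3 messages; the 3rd (A's ack) is lost:
print(outcome(0, 0, 2))  # (True, False): A attacks alone

# Rule set 2: A attacks if acks from B > 1, B attacks if acks from A > 0.
# A full run is 4 messages; the 4th (B's second ack) is lost:
print(outcome(1, 0, 3))  # (False, True): B attacks alone
```

Raising either threshold just moves the problem: whichever message is last, losing it strands exactly one side short of its rule.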



