Thank you for helping me understand that I don't actually understand how TCP works :) I had no idea ACKs apply to the last n bytes as a group; although now that I think about it, it makes a lot of sense that no other approach would have been viable (even at 80s/90s-era line rates).
I want to design a protocol capable of withstanding both high and low error rate scenarios; everything from usage on 64kbps satellite to flaky home Wi-Fi to wobbly 4GX cellular performance around tunnels and buildings. The (slightly pathological) application I have in mind (mostly as a theoretical test) involves video and interactivity, so basically requires simultaneous high throughput and low latency (essentially worst case scenario). My goal is to build something that responds well to that, perhaps by being able to have its performance metrics retuned on the fly.
I read somewhere a while ago about how TCP is a poor general-purpose catch-all protocol, and that eschewing it where possible (cough where there aren't any proxies) will generally boost performance (if the replacement protocol is well-designed).
It would be awesome if I could assign significance to and act on each incoming network packet. That would mean I wouldn't need to wait for enough packets to come through and be reassembled first; packets could arrive out of order or get dropped, and I'd recover the data from whatever did make it through instead of throwing away what I'd received and starting over.
Yes, a lot of work; I'm trying to figure out if it's worth it depending on the situation.
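To make the per-packet idea concrete, here's a minimal sketch: each packet carries its own stream offset, so the receiver can apply it the moment it arrives, in any order, and keep whatever made it through. The packet format and names here are mine, purely illustrative:

    import struct

    # Hypothetical self-describing packet: an 8-byte stream offset, then payload.
    # Because each packet says where it belongs, the receiver can act on it
    # immediately, whatever order it arrives in.
    def make_packet(offset, payload):
        return struct.pack("!Q", offset) + payload

    def apply_packet(buffer, packet):
        offset = struct.unpack("!Q", packet[:8])[0]
        payload = packet[8:]
        buffer[offset:offset + len(payload)] = payload

    stream = bytearray(16)
    # Packets arrive out of order; the one for offset 3 was lost entirely.
    for pkt in [make_packet(8, b"ijklm"), make_packet(0, b"abc")]:
        apply_packet(stream, pkt)
    print(stream)  # gaps stay zeroed; the data that arrived is still usable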
My original musing was about TCP ACK. If I'm following correctly, I now understand that a single TCP ACK can cover something like 728 KB of data (1492 × 500 = 746,000 bytes), so if a failure occurs... does all 728 KB get resent?! I presume not where the link is slow or congested or has a high error rate, but if that still happens where a lot of data is being sent, it feels quite high to me.
Thanks for replying; you helped me properly think through my idea and realize it would never actually work: I hadn't realized that I was assuming packets would arrive in order. If I use a 64-byte string to represent a 512-bit bitmask of ok/fail checksum tests, that would only work if both sides of the link are considering the same set (and sequence order) of packets.
Another thought, with far fewer moving parts (and fewer chances for hidden assumptions...), was to simply report a running success/failure percentage, with the remote end using that information to get as close as possible to 0% errors. That doesn't narrow down what needs retransmitting, though. Back to the drawing board, then...
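For what it's worth, that running-percentage idea is roughly how rate-based schemes estimate loss; here's a toy exponentially weighted moving average version (the smoothing factor is an arbitrary choice of mine). As noted above, it tells the sender how lossy the link is, but not which packets to resend:

    class LossEstimator:
        # Running loss rate via an exponentially weighted moving average:
        # recent outcomes count more, older ones decay away.
        def __init__(self, alpha=0.05):  # alpha: arbitrary smoothing factor
            self.alpha = alpha
            self.loss_rate = 0.0

        def record(self, received):
            sample = 0.0 if received else 1.0
            self.loss_rate += self.alpha * (sample - self.loss_rate)

    est = LossEstimator()
    for i in range(1000):
        est.record(i % 100 != 0)  # simulate ~1% loss
    print(f"estimated loss: {est.loss_rate:.3%}")  # hovers near 1%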
I have no delusions of reinventing TCP/IP, and I'm not even interested in reinventing the wheel. I do find networking very fun, but I guess I'm still working on understanding how it works, building an accurate mental model, and (in particular) honing/tuning the (very important) sense of knowing what solution approaches will work and what will fail.
When you have no idea about something, time spent envisioning how to improve it is time entirely wasted. "Hmm, I wonder if elephants are used to pick chocolate bars from tall trees. Maybe we could improve production by creating a type of stilts for elephants that makes it easier to reach upper branches. Ah, but when the elephants fly home to Canada, would the stilts interfere with their wings?".
TCP has Window Scaling, allowing high-bandwidth links to have up to about 1 GB on the wire (the 16-bit window field scaled by up to 2^14), and Selective Acknowledgement, allowing recipients to acknowledge ranges of correctly received packets when some are lost. I picked 500 entirely out of the air; actual numbers will vary enormously.
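A quick back-of-the-envelope check of that window limit, plus a toy illustration of how SACK-style range reporting narrows retransmission down to the actual gaps (the range-merging helper is mine, not any real stack's API):

    # Max TCP receive window: the 16-bit window field (65535) shifted left
    # by the window-scale option, which RFC 7323 caps at 14.
    print(65535 << 14)  # 1073725440 bytes, just under 1 GiB "on the wire"

    # Toy SACK-style report: merge the out-of-order segments that arrived
    # into contiguous byte ranges the receiver can acknowledge.
    def sack_blocks(segments):
        blocks = []
        for start, end in sorted(segments):
            if blocks and start <= blocks[-1][1]:
                blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
            else:
                blocks.append((start, end))
        return blocks

    # Bytes 0-1000 and 2000-4000 arrived; 1000-2000 was lost. The cumulative
    # ACK stops at 1000, but SACK tells the sender to resend only the gap.
    print(sack_blocks([(2000, 3000), (0, 1000), (3000, 4000)]))
    # [(0, 1000), (2000, 4000)]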
A TCP/IP "stack" automatically tunes the settings for each connection, if your stack is modern it will do a pretty good job. Work on improving this automatic tuning in consideration of both theory and real world practical experience is an ongoing topic of network engineering.
I fairly often have links with ~0.1% packet loss. With even just that much loss on a long-distance link, you _very_ quickly drop below 10 Mbit/s if the other side can't do fancy selective ACK and similar additions. I might be able to do something about this, though: as long as I control one node on each side of the lossy channel, I can compensate with a little bit of FEC. I might even be able to operate losslessly, using a fountain code to add more parity for each data bundle for as long as my recipient node keeps asking my sending node to supply more.
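To put numbers on that: the classic Mathis et al. (1997) bound says steady-state TCP throughput without SACK-era refinements is roughly MSS / (RTT * sqrt(p)) times a small constant. A rough sketch with made-up but plausible long-distance parameters:

    import math

    def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
        # Mathis et al. approximation: rate <= (MSS/RTT) * C / sqrt(p),
        # with C ~ sqrt(3/2) ~ 1.22.
        return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate)

    # 1460-byte MSS, 150 ms RTT, 0.1% loss:
    bps = mathis_throughput_bps(1460, 0.150, 0.001)
    print(f"{bps / 1e6:.1f} Mbit/s")  # ~3.0 Mbit/s, well under 10 Mbit/s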
You are right, though: for unicast networks there is little use for L3/L4 FEC. Maybe with the age of IPFS and other content-addressed networks we will come back to broadcast-FEC networks, like those once used for 3GPP multicast services such as mobile TV.
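The degenerate case of those codes is a single XOR parity packet over a group, which already shows the appeal: any one lost packet in the group is recoverable with no retransmission round trip. A minimal sketch:

    # Simplest FEC: XOR one parity packet over a group of equal-size packets.
    def xor_parity(packets):
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    group = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(group)

    # Packet b"BBBB" is lost in transit; rebuild it from the survivors.
    recovered = xor_parity([group[0], group[2], parity])
    assert recovered == b"BBBB"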