This is indeed sad. We are/were a customer, and one of the earliest edge computing customers. For a long time they've been the only provider truly offering the ability to run native code at the edge. But they've not really taken advantage of their early lead, so others have caught up. Also, reliability and capacity problems have become very commonplace in the past year or so. Hope the team has a safe landing elsewhere.
Great write-up! One question I had was around the use of keepalives. There's no mention in the article of whether keepalives were used between the client and reverse proxy, nor whether they were used between the reverse proxy and the backend.
I know Nginx doesn't use keepalives to backends by default (and I see it wasn't set up in the optimised Nginx proxy config), but it looks like Caddy does enable keepalives by default.
Perhaps that could explain the delta in failure rates, at least for one case?
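For reference, turning on upstream keep-alive in Nginx takes an explicit upstream block plus two proxy directives; a minimal sketch (server names and the connection count here are placeholders, not from the article's config):

```
upstream backend {
    server 127.0.0.1:8080;
    # Keep up to 16 idle connections per worker open for reuse.
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;
        # Upstream keep-alive requires HTTP/1.1 and clearing the
        # Connection header, which Nginx otherwise sets to "close".
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```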
They are distinct in Go. The standard library uses "HTTP keep-alive" to mean pooling connections that are idle relative to the most recent HTTP request, whereas TCP keep-alive only probes liveness at the transport layer, by watching for ACKs, regardless of HTTP activity.
That 15Mb/s figure for 4K is out of date by a couple of years. They previously targeted a fixed average bitrate of 15.6Mb/s. They now target a quality level, as scored by VMAF. This makes their average bitrate for 4K variable, but they say it has an upper bound of about 8Mb/s.
See https://netflixtechblog.com/optimized-shot-based-encodes-for...
More likely that they just wanted to keep it at that as a kind of worst case scenario. If you meet their recommended spec, there should be no way you will have issues.
The same author has been reporting diligently on Pollen, and has said he will do a write-up on the collapse there. I will be very interested to read that.
I'm not so sure. If I load a saved map in Google Maps (e.g. someone has saved a route with markers and shared it with me), and then I go offline whilst viewing it, Google Maps on Android will show an error after a while and I'll lose the route and markers entirely. This occurs even if I've marked the area as an offline map. I guess saved routes are just handled differently from the base map. But it's really annoying for my use case (which is finding crewing points for long distance running events in the middle of nowhere with no mobile coverage).
There isn't an "official data source" for RDOF evaluation. ISPs are required to carry out measurements in the markets where they've accepted funds. Measurements are carried out on a sampled subset of their customers, and a large set of frequent measurements has to be produced from each customer. Measurements have to be conducted to servers in specific locations (you can't just test to a server two miles down the road inside the ISP's network).

The requirements are pretty rigorous and not straightforward to meet (e.g. if a customer switches their router off for a day, that can disqualify their measurements entirely - you need a sample every hour, every day, for at least a week in the quarter). ISPs need to submit these measurements to USAC at the end of the quarter, to demonstrate that at least X% of measurements met the target of Y (it varies by metric).
Generally speaking, crowdsourced measurements (whereby you have loads of users but each running very few tests) aren't well suited to these requirements.
My experience with PayPal is the opposite. On multiple occasions I've had someone send me money on PayPal, only for it to get held up in checks and verification for 2-4 weeks, during which time I cannot access it. Sometimes it is instant, though; there seems to be no pattern to it. That lack of certainty discourages me from using PayPal to receive payments in the future, as I now consider it a risky, slow, last-resort option.