I loved Netflix back when they had the DVD service and ran the recommendation competition, because the recommendations actually suggested shows I would enjoy.
Once they started producing their own stuff, recommendations no longer worked: they just promoted whatever crap they produced themselves. And with that, trying to find a show I wanted to watch became so much effort that I canceled altogether. Same goes for all the other streaming services.
Internet/TV bills can be negotiated, but it is usually something you have to do annually, and most people, rightly so, hate it. The companies make it hard to do, so most people would rather pay an extra $5-10 than spend an hour or two on the phone. After 5-10 years, those fee bumps really add up.
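Just to put rough numbers on the "adds up" part (a sketch with made-up figures, not anyone's actual bill):

    # Hypothetical: an extra $7.50/month tacked on at each annual renewal
    # and never negotiated away, accumulated over 10 years.
    bump_per_year = 7.50      # assumed monthly increase added each year
    years = 10

    total_extra = 0.0
    monthly_extra = 0.0
    for year in range(1, years + 1):
        monthly_extra += bump_per_year     # each year's bump stacks on the previous ones
        total_extra += monthly_extra * 12  # extra paid over that year

    print(f"extra paid over {years} years: ${total_extra:,.0f}")   # ~$4,950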
The only way to keep Internet/TV costs low is to threaten to cancel or switch every year, and actually be willing to do it. For some that isn't an option because there is only one provider, and others I've talked to hate the idea because they'd have to learn a new channel lineup. It's amazing how much people will pay to not be slightly inconvenienced.
Live sports and public television were kind of the last bastion in my mind, but the former is being acquired piecemeal by the streaming platforms and the latter is largely being put on the internet for free.
Stopping payment sounds good, but may not work, for a few reasons:
1) if you have payments auto-deducted from a bank account, getting that stopped is not always straightforward. My bank told me they couldn't actually block ACH transactions, and that to reverse one, I had to file a complaint with the company initiating the ACH, wait 30 days until the next bank statement to verify that the company hadn't reversed it, then ask the bank again to reverse the ACH.
2) in this case, the guy had other ISPs, but it looks like they were all satellite or DSL, which have really high latency. High latency and packet loss are way bigger issues than throughput, although with the severity of outage described in the article, high latency with no hard outage might be a better trade-off.
3) if you stop paying and get your service cut off, and it's critical for you (remote work, etc.), now you have to scramble.
Backblaze erasure-codes customer data across 17 (I think) servers, so the data on a single drive is probably not readable on its own. Yes, it would be better if they zeroed the drive, but Google says that takes 14-30 hours for a 10TB drive.
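A quick sanity check on that figure, assuming sustained sequential writes somewhere in the 100-200 MB/s range typical for a large spinning drive:

    # Rough time to overwrite a 10TB drive with zeros at assumed write speeds.
    capacity_bytes = 10 * 10**12              # 10 TB
    for write_mb_s in (100, 200):             # assumed sustained MB/s
        seconds = capacity_bytes / (write_mb_s * 10**6)
        print(f"{write_mb_s} MB/s -> {seconds / 3600:.1f} hours")
    # ~27.8 hours at 100 MB/s, ~13.9 hours at 200 MB/s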
For drives that implement an internal encryption key, it's faster (instantaneous) to reset the encryption key. It won't give you a zeroed drive, but one filled with garbage.
In many erasure-coding systems the code is systematic: the first X chunks of the encoded output are simply the cleartext data chunks.
This is also more efficient in the happy path, since no decoding computation is needed: the data can be DMA'd straight from the drive to the network adapter with very low CPU utilisation, even at Gbps of network traffic.
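A toy sketch of what "systematic" means here (a single XOR parity chunk, not Backblaze's actual Reed-Solomon layout):

    # Systematic encoding: the data chunks pass through untouched, with parity appended.
    def encode(chunks: list[bytes]) -> list[bytes]:
        parity = bytes(len(chunks[0]))
        for c in chunks:
            parity = bytes(a ^ b for a, b in zip(parity, c))
        return chunks + [parity]

    data = [b"hello wo", b"rld, thi", b"s is raw"]   # hypothetical 8-byte chunks
    encoded = encode(data)
    assert encoded[:3] == data    # the cleartext is readable without any decoding

In the no-failure case you only ever read the data chunks; the parity is touched when something is missing.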
The earlier description is ambiguous (i.e., is it data of customers or merely about customers, and is that data cleartext), but it seems they believe they have a drive from Backblaze with a lot of cleartext files on it, and that those files involve customers in some way.
> It contained terabytes of customer data, and a shit ton of cleartext files.
I'm the author of HashBackup. IMO, silent bitrot is not really a thing. I say this because every disk sector written has an extensive ECC recorded with it, so the idea that a bit can flip in a sector and you get bad data without an I/O error seems extremely unlikely. Yes, you could have buggy OS disk drivers, drive controllers, or user-level programs that ignore disk errors. And yes, you could have a bit flip on magnetic media causing an I/O error because the data doesn't match the ECC.
I believe that using non-ECC RAM is a potential cause of silent disk errors. If you read a sector without error and then a cosmic ray flips a bit in the RAM holding that sector, you now have a bad copy of the sector with no error indication. Even if the backup software hashes the bad data and records the hash with it, it's too late: the hash is of bad data. If you are lucky and the hash was computed before the RAM bit flip, at least the hash won't match the bad data, so if you try to restore the file, you'll get an error at restore time. It's impossible to recover the correct data, but at least you'll know that.
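Roughly what that hash check looks like (a generic sketch, not HashBackup's actual code):

    # Record a hash at backup time, re-hash at restore time, flag a mismatch.
    # If the bit flipped in RAM *before* the backup hash was computed, the hash
    # matches the bad data and this check can't help.
    import hashlib

    def backup(data: bytes) -> tuple[bytes, str]:
        return data, hashlib.sha256(data).hexdigest()   # store data and hash together

    def restore(stored: bytes, recorded_hash: str) -> bytes:
        if hashlib.sha256(stored).hexdigest() != recorded_hash:
            raise IOError("hash mismatch at restore time: backup copy is corrupt")
        return stored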
The good news is that if you back up the file again, it will be read correctly and will differ from the previous backup. The bad news is that most backup software skips files based on metadata such as ctime and mtime, so until the file changes, it won't be re-saved.
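That skip is usually little more than a stat() comparison, something like this simplified sketch:

    import os

    # If size and mtime match the previous run, the file's contents are never
    # re-read, so a corrupted backup copy won't be refreshed until the file changes.
    def needs_backup(path: str, prev_size: int, prev_mtime_ns: int) -> bool:
        st = os.stat(path)
        return st.st_size != prev_size or st.st_mtime_ns != prev_mtime_ns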
We are so dependent on computers these days that it's a real shame all computers don't come standard with ECC RAM. The real reason they don't is that server manufacturers want to charge data centers higher prices for "real" servers with ECC.
I boot Finnix and use dd if=/dev/zero of=/dev/sdx to wipe drives. Most drives can be wiped overnight. That fable about needing multiple overwrite passes is not true.
Just as one data point, I have a diabetic friend on insulin and under a doctor's care who was put on a CGM and told by the doctor, "if the meter reads 150 or higher, don't eat." Sometimes this meant not eating for a day. He lost 70 lbs in about a year and hugely reduced his insulin use.
The question is also for which list lengths the performance matters most. When sorting a few strings (<20), whether the algorithm uses 5 or 7 comparisons usually won't matter much. So to find the optimal algorithm for a given situation, we would have to take each algorithm's performance per list length and compute an average weighted by how much each list length matters.
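Something like this, with made-up weights and comparison counts just to show the shape of the calculation:

    # Weight each list length by how often it occurs in the workload, then average
    # each algorithm's cost over that distribution. All numbers are illustrative.
    length_weights = {4: 0.5, 16: 0.3, 64: 0.2}
    comparisons = {
        "algorithm_a": {4: 5, 16: 50, 64: 320},
        "algorithm_b": {4: 7, 16: 44, 64: 290},
    }

    for name, costs in comparisons.items():
        avg = sum(length_weights[n] * costs[n] for n in length_weights)
        print(name, round(avg, 1))   # a: 81.5, b: 74.7 with these numbers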
Don't high-performance systems have heuristics to decide which specific algorithm to use at runtime? It's not hard to imagine a dedicated code path for small vs. large collections.
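For example, a rough sketch of such a heuristic (the threshold is made up, though CPython's Timsort does something similar internally, using insertion sort for short runs):

    SMALL_CUTOFF = 32   # assumed threshold; a real one would be tuned by benchmarking

    def hybrid_sort(items: list) -> list:
        if len(items) <= SMALL_CUTOFF:
            out = list(items)
            for i in range(1, len(out)):        # insertion sort: cheap for tiny inputs
                x, j = out[i], i - 1
                while j >= 0 and out[j] > x:
                    out[j + 1] = out[j]
                    j -= 1
                out[j + 1] = x
            return out
        return sorted(items)                    # general-purpose sort for larger inputs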
Assuming the results hold, someone has to decide whether the additional complexity is worth the performance. For something like BLAS, go nuts. For the Python standard library, maybe not.
I switched from Android to iOS because Google forced updates to my phone somehow, even though I had internet access disabled. I only used it as a phone: no email, web browsing, etc. My phone (Blu R2) was a few years old, and after the update, all kinds of stuff was broken. For example, zooming a picture would cause the messaging app to crash. So once that update was installed, I had to enable updates continuously to try to get back to a working phone. But instead, things just kept getting worse. I gave up and bought an iPhone XR on eBay for half retail price.
Most HN folks think Android's hardware diversity is a good thing, and I'm not saying it isn't, but it does have its disadvantages. In my case, I could probably buy new Android phones at least 3x as often as iPhones for the same money, but a lot of people (me included) don't want to be fiddling with a new phone every year or two. It was apparent to me that Android updates are not tested thoroughly on older phones. I understand that would be hard because there is a huge variety of hardware, but it's a significant downside of Android IMO.