There was a point in history when the total amount of digital data stored worldwide first reached 1 TiB. It is extremely likely that day was within the last sixty years.
And here we are moving that amount of data every second on the servers of a fairly random entity. We're not talking about a nation state or a supranational research effort.
That reminds me of a calculation I did which showed that my desktop PC would be more powerful than all of the computers on the planet combined in like 1978 :D
Haha.. imagine taking it back to 1978 and showing how it has more computing power than the entire planet and then telling them that you mostly just use it to find that thing you lost under the couch :D
Must be much more than 20-ish years: some 2,400 ft reels in the '60s stored a few megabytes each, so you only need hundreds of thousands of those to reach a terabyte. https://en.wikipedia.org/wiki/IBM_7330
> a single 2400-foot tape could store the equivalent of some 50,000 punched cards (about 4,000,000 six-bit bytes).
At roughly 4 MB per reel you only need a few hundred thousand reels in existence to reach a terabyte. So I strongly suspect the "terabyte point" was some time in the 1960s.
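For what it's worth, the arithmetic works out like this (a minimal sketch; the per-reel figure is the one quoted above, and I'm assuming a decimal terabyte):

```python
# Back-of-the-envelope check using the ~4,000,000-byte-per-reel figure
# quoted above and a decimal terabyte (10**12 bytes).
bytes_per_reel = 4_000_000
terabyte = 10**12
reels_needed = terabyte / bytes_per_reel
print(f"{reels_needed:,.0f} reels")  # -> 250,000 reels
```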
Those numbers seem reasonable in that context. I first started using BitTorrent around that time as well, and it wasn't uncommon to see users long-term seeding several hundred gigabytes of Linux ISOs alone.
Here’s another usage scenario with data usage numbers I found a while back.
> A 2004 paper published in ACM Transactions on Programming Languages and Systems shows how Hancock code can sift calling card records, long distance calls, IP addresses and internet traffic dumps, and even track the physical movements of mobile phone customers as their signal moves from cell site to cell site.
> With Hancock, "analysts could store sufficiently precise information to enable new applications previously thought to be infeasible," the program authors wrote. AT&T uses Hancock code to sift 9 GB of telephone traffic data a night, according to the paper.
That’s pretty cool. I remember someone on that repo from a while back and was surprised to see their name pop up again. Thanks for archiving this!
Corinna Cortes et al. wrote the paper(s) on Hancock and also the Communities of Interest paper referenced in the Wired article I linked to. She’s apparently a pretty big deal and went on to work at Google after her prestigious work at AT&T.
Hancock: A Language for Extracting Signatures from Data
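I've never written Hancock, but to give a rough flavor of the kind of streaming "signature" computation the paper describes (one small, incrementally updated profile per phone number, built in a single pass over a call-record stream), here's a plain Python sketch. The record format and field names are made up for illustration; it is not Hancock syntax:

```python
import csv
from collections import defaultdict

# NOT Hancock code -- just a plain-Python illustration of the idea:
# one pass over a hypothetical call-detail-record stream (caller, callee,
# seconds), building a small, incrementally updated "signature" per caller.
def build_signatures(cdr_path):
    signatures = defaultdict(lambda: {"calls": 0, "seconds": 0, "peers": set()})
    with open(cdr_path, newline="") as f:
        for caller, callee, seconds in csv.reader(f):
            sig = signatures[caller]
            sig["calls"] += 1               # total call count
            sig["seconds"] += int(seconds)  # total talk time
            sig["peers"].add(callee)        # crude "community of interest"
    return signatures

if __name__ == "__main__":
    for number, sig in build_signatures("cdrs.csv").items():
        print(number, sig["calls"], sig["seconds"], len(sig["peers"]))
```

As I understand it, Hancock's contribution was making exactly this pattern (persistent per-entity profiles updated in one pass over huge transactional streams) declarative and efficient in a C-based DSL, which a toy script like this obviously doesn't attempt.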