At this point, we need a service that "offers" an 8-bay enclosure (with 12TB? 14TB? drives) filled with the whole ~80TB Anna's Archive. It's essentially all of human knowledge, and to be frank it belongs to no one - rather... everyone.
People can store this at their house, keep it offline. Just to have these seeds of knowledge everywhere.
...I suppose LLMs trained on this data (essentially their model weights and tokenization) are a much more efficient way of storing and condensing this 80TB archive?
When it comes to the evils (and goods) of copyright, it is hard to go wrong with Thomas Babington Macaulay's address to the House of Commons in 1841[1]:
"At present the holder of copyright has the public feeling on his side. Those who invade copyright are regarded as knaves who take the bread out of the mouths of deserving men. Everybody is well pleased to see them restrained by the law, and compelled to refund their ill-gotten gains. No tradesman of good repute will have anything to do with such disgraceful transactions. Pass this law: and that feeling is at an end. Men very different from the present race of piratical booksellers will soon infringe this intolerable monopoly. Great masses of capital will be constantly employed in the violation of the law. Every art will be employed to evade legal pursuit; and the whole nation will be in the plot. On which side indeed should the public sympathy be when the question is whether some book as popular as “Robinson Crusoe” or the “Pilgrim’s Progress” shall be in every cottage, or whether it shall be confined to the libraries of the rich for the advantage of the great-grandson of a bookseller who, a hundred years before, drove a hard bargain for the copyright with the author when in great distress? Remember too that, when once it ceases to be considered as wrong and discreditable to invade literary property, no person can say where the invasion will stop. The public seldom makes nice distinctions. The wholesome copyright which now exists will share in the disgrace and danger of the new copyright which you are about to create. And you will find that, in attempting to impose unreasonable restraints on the reprinting of the works of the dead, you have, to a great extent, annulled those restraints which now prevent men from pillaging and defrauding the living."
He was decrying the increase of the copyright term to the life of the author plus 50 years.
That is a powerful address, indeed. Good thing to know even in 1841 people saw what copyright would become today: an intolerable monopoly granted by the government, functionally infinite.
Seriously, this text is so great. I read the entire thing. It's nearly two hundred years old and contains everything one needs to know about copyright in 2024. Thank you for posting it.
That is true. However, it also has a staggering amount of duplicate data. I have _heard_ that if you search for most any particular book, you often get a dozen results of varying sizes and quality. Even for the same filetype. It's a hard problem to solve, but if we had something that could somehow pick the "best" copy of a particular title, for every title in the library, Anna could likely drop the zero herself.
As one of their blog posts explains, that's by design: they download all versions of any file. The reasoning was that some lower-quality video files will have subtitles or better audio than the high-quality ones.
Some filtering may be possible to automate, but lots of the tasks involved will have to be manual, like merging video and audio from different sources or syncing subtitles from another file.
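The grouping half of that is at least scriptable. Purely illustrative sketch (the field names are made up, not whatever metadata the archive's index actually exposes); it collapses obvious duplicates by normalized title/author and picks a candidate by crude heuristics, but choosing the genuinely best copy still needs a human:

    import re
    from collections import defaultdict

    FORMAT_RANK = {"epub": 0, "pdf": 1, "djvu": 2, "mobi": 3}  # arbitrary preference order

    def norm(s):
        return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()

    def pick_best(records):
        # records: dicts with 'title', 'author', 'ext', 'size_bytes' (hypothetical fields)
        groups = defaultdict(list)
        for r in records:
            groups[(norm(r["title"]), norm(r["author"]))].append(r)
        keep = []
        for dupes in groups.values():
            dupes.sort(key=lambda r: (FORMAT_RANK.get(r["ext"], 9), -r["size_bytes"]))
            keep.append(dupes[0])  # preferred format first, largest file as a crude tiebreak
        return keep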
Honestly, if I can't have the whole thing, I'm not going to bother mirroring a 1TB fragment that's worthless by itself to everybody except copyright attorneys.
As ndriscoll points out, the only feasible way to distribute an archive of this size is with physical hard drives. I sure wish they would find a reasonably-trustworthy way to offer that.
Most of the books are bloated PDFs. I'm slowly working on a project to reliably convert PDF to DjVu, which on average yields a highly readable document that's 33% of the original size on disk. The project is proving difficult: the tooling for DjVu is quite moldy now, and the output often needs to be manually reviewed to ensure the file remains readable. pdf2djvu exists, but it's highly unreliable, and thus can't be used in bulk. Other ebook formats are XML-based and tend to be similarly bloated due to the overhead of the markup. It's a hard problem, with so little in the way of good file format choices.
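The bulk pass I'm experimenting with looks roughly like this: wrap pdf2djvu, run a crude page-count sanity check (poppler's pdfinfo vs djvulibre's djvused), and kick anything suspicious to the manual-review pile. Very much a sketch, and the page-count check is only a heuristic, not proof the output is readable:

    import subprocess, sys
    from pathlib import Path

    def pdf_pages(path):
        # poppler's pdfinfo prints a "Pages: N" line
        out = subprocess.run(["pdfinfo", str(path)], capture_output=True, text=True).stdout
        return int(next(l.split()[-1] for l in out.splitlines() if l.startswith("Pages:")))

    def djvu_pages(path):
        # djvused's 'n' command prints the page count
        out = subprocess.run(["djvused", "-e", "n", str(path)], capture_output=True, text=True).stdout
        return int(out.strip())

    for pdf in Path(sys.argv[1]).glob("*.pdf"):
        djvu = pdf.with_suffix(".djvu")
        subprocess.run(["pdf2djvu", "-o", str(djvu), str(pdf)], check=True)
        ok = djvu.exists() and djvu_pages(djvu) == pdf_pages(pdf)
        print(f"{pdf.name}: {pdf.stat().st_size} -> {djvu.stat().st_size} bytes",
              "ok" if ok else "NEEDS MANUAL REVIEW")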
That sounds like a pretty terrible idea, TBH. All of the best tooling is for PDFs, as you note, and storage will only get cheaper.
Ultimately that content is going to need to be represented as raw UTF-8 text and encoded images, so I don't see much upside to migrating it from one intermediate lossy file format to another.
1 PB of disk space would cost about $10K at this point in time. Not exactly unattainable. Looks like it would fit in a volume of space about the size of a standard refrigerator.
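Back-of-envelope, assuming ~22 TB drives at roughly $230 each (street prices, not a quote):

    drive_tb, drive_usd = 22, 230
    drives = -(-1000 // drive_tb)                         # ceil(1000 TB / 22 TB) = 46 drives
    print(drives, "drives, about $", drives * drive_usd)  # ~46 drives, ~$10,600 before redundancy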
It doesn't seem reasonable to me to suggest that an average person would spend $10,000+ (and the time to maintain it) on a pirate archive, hence my comment.
On the other hand, contributing a TB or two to a torrent swarm is much more feasible for most people.
In any case, if you're okay with that, you should do it. Please report back in 6 months with how it's going.
> In any case, if you're okay with that, you should do it. Please report back in 6 months with how it's going.
Point being, if I tried to torrent the whole thing, it probably would take 6 months, and would likely get me booted from my ISP and/or sued. I would much rather buy a set of hard drives with the contents already loaded. Or tapes, as userbinator suggests.
(And as for the hypothetical "average person" you keep citing, I don't see anyone meeting that description around here.)
> I would much rather buy a set of hard drives with the contents already loaded. Or tapes, as userbinator suggests.
And my point is that this is an absurd suggestion. I shouldn't have to explain why a shadow library shouldn't be selling (tens of) thousands of dollars worth of hard drives containing pirated content. Beyond that, and what I was getting at earlier, is that maintaining a 1PB storage array at home isn't exactly easy, or cheap.
> I shouldn't have to explain why a shadow library shouldn't be selling (tens of) thousands of dollars worth of hard drives containing pirated content.
Depends on what their goal is. I shouldn't have to explain why a "library" that's operating illegally in virtually every jurisdiction, with few or no complete mirrors, is vulnerable to being shut down by a small number of governmental or judicial entities.
If I were running the archive, not being a single point of interdiction would be high on my list of priorities. Especially when any number of people are indeed willing and able to keep 1 PB+ of content in circulation, samizdat-style. I would work to find these people, put them in touch with each other, and help them.
> Beyond that, and what I was getting at earlier, is that maintaining a 1PB storage array at home isn't exactly easy, or cheap.
Not everything that's worth doing is easy or cheap, or otherwise suited to "average people." Again, I don't know where you're coming from here. What's your interest in the subject, exactly?
> It doesn't seem reasonable to me to suggest that an average person would spend $10,000+
You're right, and I was not trying to suggest that. I was merely disagreeing with "You are never going to" because I know there are people who are reading this who can and maybe will.
22 TB drives are around $230 on ebay, so if you used 15 of them in raidz2, that'd be around $3500 (so maybe a little over $4k with the rest of the server), which is around the cost of a new mirrorless camera and a decent lens, so certainly within the realm of a hobbyist. You probably couldn't get away with downloading 250 TB in any reasonable timeframe with most US ISPs (or at least Comcast) though. That'd be over 2.5 months of 300 Mb/s non-stop. Even copying it from a friend using 2.5 Gbit/s Ethernet would take over a week.
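To sanity-check those numbers (all assumed prices and speeds, nothing authoritative):

    drive_tb, drive_usd, n_drives, parity = 22, 230, 15, 2   # raidz2 = 2 parity drives
    print("usable:", drive_tb * (n_drives - parity), "TB,",
          "drives alone: $", n_drives * drive_usd)            # ~286 TB usable, ~$3,450

    archive_tb = 250
    for label, mbit in [("300 Mb/s cable", 300), ("2.5 GbE copy from a friend", 2500)]:
        days = archive_tb * 1e12 * 8 / (mbit * 1e6) / 86400
        print(f"{label}: {days:.0f} days")                    # ~77 days vs ~9 days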
It's possible to do lossless compression with LLMs, basically using the LLM as a predictor and then storing differences when the LLM would have predicted incorrectly. The incredible Fabrice Bellard actually implemented this idea: https://bellard.org/ts_zip/
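A toy way to see the principle (this is not how ts_zip works internally, just an illustration of predictive coding): replace each byte with its rank under an adaptive predictor, then let an ordinary entropy coder squeeze the ranks. The scheme stays lossless because the decoder rebuilds the identical model; swap the dumb order-1 model below for an LLM and the ranks collapse toward zero, which is where the big wins come from.

    import zlib
    from collections import defaultdict

    def _ranking(counts):
        # symbols ordered by descending count (ties broken by byte value)
        order = sorted(range(256), key=lambda s: (-counts[s], s))
        return order, {s: r for r, s in enumerate(order)}

    def compress(data: bytes) -> bytes:
        counts = defaultdict(lambda: [1] * 256)   # per-context, Laplace-smoothed
        ctx, ranks = 0, bytearray()
        for b in data:
            _, rank_of = _ranking(counts[ctx])
            ranks.append(rank_of[b])              # store the predictor's "surprise"
            counts[ctx][b] += 1                   # adaptive update, mirrored below
            ctx = b
        return zlib.compress(bytes(ranks), 9)

    def decompress(blob: bytes) -> bytes:
        ranks = zlib.decompress(blob)
        counts = defaultdict(lambda: [1] * 256)
        ctx, out = 0, bytearray()
        for r in ranks:
            order, _ = _ranking(counts[ctx])
            b = order[r]                          # invert the rank back to the byte
            out.append(b)
            counts[ctx][b] += 1
            ctx = b
        return bytes(out)

    if __name__ == "__main__":
        text = b"the quick brown fox jumps over the lazy dog " * 50
        packed = compress(text)
        assert decompress(packed) == text
        print(len(text), "->", len(packed), "bytes")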
Use a universal function approximator to approximate the universe, seek Erf(x)>threshold, interrogate universe for fresh data, retrain new universal approximator, ... loop previous ... , universe in a bottle.
You can do that sort of thing with a toy universe -- in fact Stephen Wolfram has a number of ongoing projects along broadly similar lines -- but you can't do it in our physical universe. Among other reasons, the universe is to all appearances infinite and simultaneously very complex, therefore it is incompressible and cannot be described by anything smaller than itself, nor can it be encapsulated in any encoding. You can make statistical statements about it -- with, e.g., Ramsey Theory -- but you can never capture its totality in a way that would enable its use in computation. For another thing, toy model universes tend to be straightforwardly deterministic, which is not clearly the case with our physical universe. (It is likely deterministic in ways that are not straightforward from our frame of reference.)
The problem with the latter is reliability; or rather, it's efficient but unreliable. I'd rather overdo my offline storage and figure out some way to script/code my way into searching it in a convenient way.
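The "script my way into searching it" part doesn't have to be fancy; a SQLite full-text index over paths and metadata goes a long way. Minimal sketch (the mount point and schema are made up; real title/author metadata would come from the archive's own index dumps):

    import os, sqlite3

    db = sqlite3.connect("archive_index.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS files USING fts5(path, name, size UNINDEXED)")

    def index_tree(root="/mnt/archive"):          # hypothetical mount point
        with db:
            for dirpath, _, names in os.walk(root):
                for n in names:
                    p = os.path.join(dirpath, n)
                    db.execute("INSERT INTO files VALUES (?, ?, ?)", (p, n, os.path.getsize(p)))

    def search(query):
        return db.execute("SELECT path, size FROM files WHERE files MATCH ? ORDER BY rank",
                          (query,)).fetchall()

    # index_tree(); print(search('"robinson crusoe"'))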