
So what we actually need is

- a decentralized way to store these libraries

- by a source with established trust (so it can't be misused for tracking)

JS/CSS library blockchain?



Unless you have very few libraries and always force everyone to the latest version, it's still quite practical to abuse this for tracking. For example, there are sites running Dojo on at least 86 versions [1], all of which are pretty uncommon. If one site causes you to load one of these versions, and another site checks which one you have in cache, that's >6 bits of information. Combine this with all the other libraries and versions, and you can easily get enough bits to uniquely identify someone. It's even worse if one site can load multiple versions of the same library: that turns 86 versions into 86 bits.

[1] 1.13.0, 1.12.3, 1.12.2, 1.12.1, 1.11.5, 1.11.4, 1.11.3, 1.11.2, 1.11.1, 1.10.9, 1.10.8, 1.10.7, 1.10.6, 1.10.5, 1.10.4, 1.10.3, 1.10.2, 1.10.1, 1.10.0, 1.9.11, 1.9.10, 1.9.9, 1.9.8, 1.9.7, 1.9.6, 1.9.5, 1.9.4, 1.9.3, 1.9.2, 1.9.1, 1.9.0, 1.8.14, 1.8.13, 1.8.12, 1.8.11, 1.8.10, 1.8.9, 1.8.8, 1.8.7, 1.8.6, 1.8.5, 1.8.4, 1.8.3, 1.8.2, 1.8.1, 1.8.0, 1.7.12, 1.7.11, 1.7.10, 1.7.9, 1.7.8, 1.7.7, 1.7.6, 1.7.5, 1.7.4, 1.7.3, 1.7.2, 1.7.1, 1.7.0, 1.6.5, 1.6.4, 1.6.3, 1.6.2, 1.6.1, 1.6.0, 1.5.6, 1.5.5, 1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5.0, 1.4.8, 1.4.7, 1.4.6, 1.4.5, 1.4.4, 1.4.3, 1.4.1, 1.4.0, 1.3.2, 1.3.1, 1.3.0, 1.2.3, 1.2.0, 1.1.1
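
A rough back-of-the-envelope for the bits estimate above (a minimal sketch; the 86-version count comes from the list in [1], the second library is purely illustrative):

    // Observing which single Dojo version (out of 86) is in the cache:
    const dojoBits = Math.log2(86);              // ≈ 6.4 bits
    // If a site can probe every version independently (cached / not cached),
    // each version contributes its own bit:
    const dojoBitsIndependent = 86;              // 86 bits
    // Combining libraries multiplies the possibilities, i.e. the bits add up
    // (illustrative second library with 40 observable versions):
    const totalBits = dojoBits + Math.log2(40);  // ≈ 11.8 bits

Around 33 bits is already enough to single out one person among the entire world population, so a handful of libraries gets you there quickly.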


Yes. Aside from some ideas like IPFS plus emulating network conditions on all accesses (instead of just cached ones), the real annoyance is that there is no sane, standardized JS standard library.

If we could, we should establish the following best practices:

- Only use React and similar frameworks if you are writing a web app; do not use such tools for plain websites. If your website is so complex that you need them, you are doing something wrong.

- Have a JS standard library which provides all the common tooling for the remaining non-web-app JS use cases.

- Give it one release each year (or half-year); browsers preload it when they ship updates and keep the last 10 or so versions around.

- Have a small standardized JS snippet which detects old browsers that are not evergreen (like IE) and loads a polyfill (a rough sketch follows below).

Sure, there are some requirements to get there, e.g. making it reasonably easy to build properly complex, responsive layouts without much JS or insanely complex CSS. (Which we can do by now thanks to CSS Grid, yay.)
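
A minimal sketch of what such a detection snippet could look like. Everything here is hypothetical: there is no real versioned JS standard library, window.__jsStdLib and the polyfill path are made-up names, and the feature test is only an example:

    // Hypothetical: browsers that ship the versioned standard library expose it
    // as a global; everything else gets a polyfill from the site's own origin.
    (function () {
      var isEvergreen = 'fetch' in window && 'Promise' in window; // crude feature test
      if (!isEvergreen || !window.__jsStdLib || window.__jsStdLib.version !== '2021.1') {
        var s = document.createElement('script');
        s.src = '/polyfills/js-stdlib-2021.1.js'; // served by the site itself, not a CDN
        document.head.appendChild(s);
      }
    })();

Written in ES5 on purpose, since the snippet has to run on exactly the old browsers it is trying to detect.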


If you're relying on browser updates, then why not just work on getting whatever JS improvements you want into browsers directly?


- Backward and forward compatibility, by versioning and shipping multiple versions with the same browser

- Easier prototyping and experimental usage of pre-releases

- Backward compatibility with older browsers, at least for the first few versions


From a quick search, there's apparently already a way to have the browser verify the cryptographic hash of a resource loaded from an external source: Subresource Integrity (SRI). [0]

Can anyone comment on whether it's practical, and whether it could help here?

[0] https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
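
For reference, using it looks roughly like this; the browser refuses to execute the file if its hash doesn't match the declared integrity value (the URL and hash below are placeholders). Declaratively it's the integrity attribute on a <script> or <link> tag; the same thing from JS:

    const s = document.createElement('script');
    s.src = 'https://cdn.example.com/lib/1.2.3/lib.min.js'; // placeholder URL
    s.integrity = 'sha384-...';   // base64 hash of the expected file contents
    s.crossOrigin = 'anonymous';  // CORS is required for SRI on cross-origin loads
    document.head.appendChild(s);

The hash itself can be generated with something like "openssl dgst -sha384 -binary lib.min.js | openssl base64 -A".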


SRI does solve the problem of ensuring a third-party resource doesn't unexpectedly change its contents, but it can't address the security issue that a site can time how long a request takes to tell whether you already have the resource in your cache.
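
The timing issue referred to here, as a minimal sketch (assuming the old, non-partitioned cache behaviour; the URL and the threshold are placeholders):

    // A hostile page fetches a specific library version and measures how long it takes.
    // A near-instant response suggests the file was already in the shared cache,
    // i.e. the visitor has been to some other site that uses that exact version.
    async function probablyCached(url) {
      const t0 = performance.now();
      await fetch(url, { mode: 'no-cors', cache: 'force-cache' });
      return performance.now() - t0 < 10; // crude millisecond threshold, illustrative only
    }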

AIUI it is less practical than you'd like, for several reasons that work together to greatly limit its impact:

1. The remote content most vulnerable to hostile change is also the remote content being loaded precisely because it changes: ad scripts, etc. Protecting your jQuery load is nice, and if someone did compromise a CDN's jQuery it would be a big issue, but that's not what has been happening in the field. If the content has to change, you can't SRI it.

2. If you have a piece of remote content that you rigidly want to never change, it's much safer to copy it to a location you control, and that's almost as easy as SRI.

3. If a subresource fails SRI, there's no fallback other than simply not loading it, so it has a very graceless failure mode. This combines with #2 to make it even more important to put the content in a locally-controlled location. Once local, SRI is more or less redundant with what SSL already gives you.

Basically, it's one of those things that looks kinda cool at first and makes you think maybe you should SRI everything, but the real use cases turn out to be much smaller than that.


SRI solves a real problem if you want to use a CDN without opening yourself up to compromise if the CDN is compromised. But yes, there are lots of other problems that it doesn't solve.


> ad scripts

Should not exist, IMHO ;=)

But yes, SRI is just for the case where the CDN gets compromised.


Define "help". With the new caching rules, there is just no way two sites with different TLDs can share asset caches, so it wouldn't help in the sense that double downloads could be avoided.


You still can de-duplicate storage in the cache, just not the download.


True, but the "avoid multiple downloads" part is most likely more significant than the "avoid multiple copies on disk" part. Storage is comparatively plentiful while network bandwidth is often quite scarce, and 3 seconds of extra loading time is going to be more noticeable than 10 MiB of extra disk usage.
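
A rough sanity check of that trade-off (the 25 Mbit/s connection and the 500 GB disk are assumptions, purely for illustration):

    const sizeMiB = 10;
    const bits = sizeMiB * 1024 * 1024 * 8;   // ≈ 83.9 million bits
    const linkBitsPerSec = 25 * 1000 * 1000;  // assumed 25 Mbit/s connection
    const seconds = bits / linkBitsPerSec;    // ≈ 3.4 s of extra transfer time
    // ...versus 10 MiB of storage, which is roughly 0.002% of a 500 GB disk.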


I think that the developers of it have argued there's a security concern with using it to avoid looking up libraries. I know it's come up here on HN before if you can find it.


A blockchain solution is probably over the top for this and likely has major problems with regard to all kinds of regulations.

Plain content-based addressing (e.g. IPFS) is good enough to be actually useful, and it allows local hosting and shared caches at the same time.
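
A minimal sketch of what content addressing means here, hand-rolled with SHA-256 rather than real IPFS CIDs, purely to illustrate the idea:

    // The cache key is derived from the bytes themselves, so two sites shipping
    // the identical file automatically share one stored copy, regardless of URL.
    async function contentAddress(bytes) {
      const digest = await crypto.subtle.digest('SHA-256', bytes);
      return [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, '0'))
        .join('');
    }

Anyone can re-hash the content and check it against its address, which is what lets you fetch it from untrusted peers or local caches in the first place.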

Still, neither of these would fix the privacy problem. Though with something like IPFS, emulating network delays on access could work, but it would be VERY hard to get right and make immune to statistical timing analysis.

It's also one of the few ways to get some (imperfect) degree of censorship resistance without running into direct conflict with laws that e.g. hinder access to child pornography. Note that this is only imperfect protection, working for countries which do not officially have censorship in their law and have no effective national firewall. I.e. it wouldn't work for China, and it also wouldn't work if the political situation in some Western countries gets worse. But it does work against "non-official" censorship enacted through not-so-legal pressure and harassment, or corporate censorship enacted by companies supposedly of their own free will.


Like this? https://www.localcdn.org/

(Without the fancy / bs bingo technology)


So we need something like the Fedora organization to make a trusted distribution of JavaScript libraries for the web. As a bonus, these libraries could be precompiled to native code ahead of time.


Or just ship the libraries with the browsers.

(Obviously also not ideal)


The browser is a good candidate.



