CAs should have been a thing of the past by now. We should learn from the truly decentralized architecture of Tor onion services: there is no central authority, and the site owner has full control over the site. All the traffic is always encrypted and you don't have to trust anyone for it.
You have to trust whoever you got an Onion link from, and trust yourself not to fall for a similar-looking one, since the address itself encodes the trusted public key.
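To make that concrete: a v3 onion address is just the service's 32-byte ed25519 public key plus a 2-byte checksum and a version byte, base32-encoded. A minimal sketch of the encoding described in the Tor rend-spec (the key below is a dummy value, for illustration only):

    import base64, hashlib

    def onion_v3_address(pubkey: bytes) -> str:
        # pubkey: the service's 32-byte ed25519 public key
        version = b"\x03"
        # checksum = first 2 bytes of SHA3-256(".onion checksum" || pubkey || version)
        checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        # 35 bytes -> 56 base32 characters, lowercased, plus the ".onion" suffix
        return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

    print(onion_v3_address(bytes(32)))  # dummy all-zero key

The address can't be forged without the matching private key, but nothing ties it to a real-world identity.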
It's a web-of-trust and/or TOFU model if you look at it closely. These have different tradeoffs from a PKI, but they don't somehow magically solve the hard problem of trusted key discovery.
Sure, but the web is already TOFU even with CAs, because the only thing being asserted is that you're connected to someone who (probably) controls the domain.
The client is perfectly able to verify that at connection time, without a central authority, by querying a well-known DNS entry. Literally do what the CA does to check, but just in time.
This does leave you vulnerable to a malicious DNS server, but that isn't an insurmountable hurdle and it doesn't require reinventing CAs. With major companies rolling out DoH, all you have to care about is that your DNS server isn't lying to you. With nothing more than dnsmasq you can be your own trusted authority, no third party required.
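Roughly, the client-side check could look like this. A sketch assuming dnspython 2.x with its DoH support is available; the resolver URL is just an example, and a real DANE client would also check the TLSA usage field and require DNSSEC-validated answers:

    import hashlib, ssl
    import dns.message, dns.query, dns.rdatatype

    def tlsa_matches(host: str, port: int = 443) -> bool:
        # Look up the site's TLSA record over DoH, so the DoH resolver is the
        # only party we have to trust not to lie.
        q = dns.message.make_query(f"_{port}._tcp.{host}", "TLSA")
        resp = dns.query.https(q, "https://cloudflare-dns.com/dns-query")
        # Hash the certificate the server actually presents.
        pem = ssl.get_server_certificate((host, port))
        cert_hash = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).digest()
        # Accept if some TLSA record pins the full certificate by SHA-256
        # (selector 0, matching type 1).
        for rrset in resp.answer:
            if rrset.rdtype != dns.rdatatype.TLSA:
                continue
            for rr in rrset:
                if rr.selector == 0 and rr.mtype == 1 and rr.cert == cert_hash:
                    return True
        return False

That's essentially the same check a CA performs at issuance time, just done by the client per connection.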
The web PKI is not TOFU in any way: Neither do most browsers offer a convenient way of persistently trusting a non-PKI-chaining server certificate across multiple site visits, nor do they offer a way to not trust a PKI-chaining certificate automatically.
The essence of TOFU is that each party initiating a connection individually makes a trust decision, and these decisions are not delegated/federated out. PKI does delegate that decision to CAs, and them using an automated process does not make the entire system TOFU.
Yes, clients could be doing all kinds of different things such as DANE and DNSSEC, SSH-like TOFU etc., but they aren't, and the purpose of a system is what it does (or, in this case, doesn't).
Web PKI is not TOFU in the specific instance where you have an a priori trusted URL you know about. But I, and others, argue that this isn't actually that strong a guarantee in practice. I'm just trusting that this URL is the real-life entity I think it is. The only thing you get is that you're connected to someone who has control of the domain. And it's pretty clear that with ACME and DNS challenges we don't need a huge centralized system to do this much weaker thing; you just need a DNS server you trust, and it can even be your own.
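For reference, here's roughly what an ACME DNS-01 challenge boils down to (a sketch of the RFC 8555 computation; the token comes from the ACME server and the thumbprint from your account key):

    import base64, hashlib

    def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
        # RFC 8555 section 8.4: the key authorization is the challenge token
        # joined with the base64url JWK thumbprint of your ACME account key.
        key_authorization = f"{token}.{account_key_thumbprint}"
        digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
        # Publish this (base64url, unpadded) as a TXT record at _acme-challenge.<domain>.
        return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

The CA's entire validation step is then a DNS lookup of _acme-challenge.<domain> and a string comparison; all the trust lives in DNS.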
> The only thing you get is that you're connected to someone who has control of the domain.
Yes, that's the entire scope of the web PKI, and with the exception of EV certificates it never was anything else.
> it's pretty clear that with ACME and DNS challenges we don't need a huge centralized system to do this much weaker thing
Agreed – what we are doing today is primarily a result of the historical evolution of the system.
"Why don't we just trust the DNS/domain registry system outright if we effectively defer most trust decisions to it anyway" is a valid question to ask, with some good counterpoints (one being that the PKI + CT make every single compromise globally visible, while DANE does not, at least not without further extensions).
My objection is purely on terminology: Neither the current web PKI nor any hypothetical DANE-based future would be TOFU. Both delegate trust to some more or less centralized entity.
Yes, but non-Onion links are slightly more memorable.
That's not to say that domain typo attacks aren't a real problem, but memorizing an Onion link is entirely impossible. Domains exploiting typos or using registered/trademarked business names can also often be seized through legal means.
Tor did not in fact magically solve all trust: you have to (potentially blindly) trust that the URL you've been given or found is who it says it is. It's not uncommon for scammers to edit directory listings, swapping the URL of a real business (or of a successful scam) for their own, and there's no way to detect this if it's your first visit.
Outside of some European TLDs, DNSSEC is pretty much unused. Amazon's cloud DNS service only recently started supporting it, and various companies that tried to turn it on ran into bugs in Amazon's implementation and suffered painful downtime. Hell, there are even incompetent TLDs that have DNSSEC broken entirely, with no plan for fixing it any time soon.
Another problem with DNSSEC is that the root is controlled by the United States. If we start relying on DNSSEC, America gains the power to knock out entire TLDs by breaking their signature configuration. Recent credible threats of invading friendly countries should make even America's allies wary of extending digital infrastructure in a way that gives it any more power.
The main limitation is the incredibly opaque and brittle nature of putting keys in DNS.
We've spent a decade and a half slowly making the Web PKI more agile and more transparent by reducing key lifetimes, expanding automation support, and integrating certificate transparency.
Note that DNSSEC is a PKI, but it's fantastically better than a WebPKI because a) you get a single root CA, b) you can run your own private root CA (by running your own `.`), c) if clients did QName minimization then the CAs wouldn't easily know when it's interesting to try to MITM you. Oh, and DNS has name constraints naturally built in, while PKIX only has them as an extension that no one implements.
The only real downsides are that DNSSEC doesn't have CT yet (that would be nice), that it adds latency, and that larger DNS messages can be annoying.
The single root CA makes it fantastically worse, not better. DNSSEC will never get CT, because no entity in the world has the leverage to make that happen. The whole point of CT is that no WebPKI entity can opt out of it.
They can't be MITMing people left and right without getting caught. Maybe getting caught is not a problem, but still. And if you use query name minimization[0] then it gets harder for the root CA and any intermediates but the last one to decide whether to MITM you. And you can run your own root for your network.
[0] QName minimization means that if you're asking for foo.bar.baz.example. you'll ask . for example. then you'll ask example. for baz.example. and so on, detecting all the zone cuts yourself. As opposed to sending the full foo.bar.baz.example. query to . then example. and so on. If you minimize the query then . doesn't get to know anything other than the TLD you're interested in, which is not much of a clue as to whether an evil . should MITM you. Now, because most domain names of interest have only one or two intermediate zones (a TLD or a ccTLD and one below that), and because those intermediates are also run by parties similar to the one that runs the root, you might still fear MITMing.
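A toy sketch of that query sequence (plain Python; it pessimistically assumes a zone cut at every label):

    def minimized_query_plan(fqdn: str):
        # Yield (zone asked, qname sent) pairs, in the spirit of RFC 7816/9156
        # QNAME minimization, assuming a zone cut at every label.
        labels = fqdn.rstrip(".").split(".")
        parent = "."
        for i in range(len(labels) - 1, -1, -1):
            child = ".".join(labels[i:]) + "."
            yield parent, child
            parent = child

    for zone, qname in minimized_query_plan("foo.bar.baz.example."):
        print(f"ask {zone} for {qname}")
    # ask . for example.
    # ask example. for baz.example.
    # ask baz.example. for bar.baz.example.
    # ask bar.baz.example. for foo.bar.baz.example.

Without minimization, every one of those servers would have seen the full foo.bar.baz.example. query.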
But you can still use a combination of WebPKI and DANE, in which case the evil DNSSEC CAs would have to collaborate with some evil WebPKI CA.
The Tor service model is equivalent to every site using a self-signed certificate, which doesn't scale.
The more feasible CA-free architecture is to have the browser operator perform domain validation and counter-sign every site's key, but that has other downsides and is arguably even less distributed.
The Tor system does scale, as Tor itself proves. Tor just lacks domain names altogether and uses public keys for site identification instead.
Is the onion service you're accessing the real Facebook or just a phishing page intercepting your credentials? Better check whether the 56-character name matches the one your friend told you about!
I don't think putting any more power in browser vendors is the right move. I'd rather see a DNSSEC overhaul to make DANE work.
This is nonsense. The introduction problem doesn't just go away, and trust meshes, PKIs, and other schemes are not panaceas.
Why on Earth would I trust some "site owner" (who are they? how do I authenticate them?) to operate an "onion service" securely and without abusing me? Do you not have a circular reasoning problem here?
> All the traffic is always encrypted and you don't have to trust anyone for it
Sure you do! You have to trust the other end, but since this is onion routing you have to trust many ends.