Do you need to verify your age to perform a Google search?
I think "age verification" is just another "think of the children" ploy to force all websites to check their users' government IDs (starting with sites run by people whose politics are different from those of the government that's enacting the ploy).
I personally don't know whether ChatGPT can expose you to information unsuitable for underage kids (which was the regulator's reasoning), and in general I'm not a huge fan of putting rules in place just because it's the internet. You can always work around blocks, but at least Google has SafeSearch, which hides some content until the kid becomes smart enough to find it anyway.
> children can not enter a library and get Adult books
Is that true? Can a 13 year old child in Italy not walk into their local town library and pick a novel off the shelf and start reading all sorts of violent and explicit narratives? Not to mention all the medical textbooks, and books containing images of artistic works depicting undressed humans.
The first step in solving the trust problem is solving the identity problem. At the very least, once you've got cryptographic identities for entities involved in your supply chain, you can use a TOFU policy and check whenever an identity changes.
Simple operations like rotating a key shouldn't trigger any security warnings, as long as the new key is signed by the old one, and even adding new people to a team should happen seamlessly if (a majority of) the existing team members approve that new identity being added.
Of course it doesn't solve key compromise, or someone selling their keys to someone else, but with long-lived (even pseudonymous) identities, it becomes possible to reason about the trust level of packages just based on how long an identity has been used without being compromised.
No system is perfect, and there's still a long way to go, but the existing systems make the remaining problems more tractable, and already increase the cost for attackers, which should reduce attacks.
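The TOFU-plus-rotation policy described above can be sketched in a few lines. This is a toy model, not a real implementation: the class name, the string statuses, and the fingerprint scheme are all made up here, and a real system would verify actual signatures (e.g. Ed25519) rather than comparing raw key bytes.

```python
# Minimal sketch of a TOFU (trust-on-first-use) policy for package
# publisher keys. Hypothetical names throughout; real signature
# verification is omitted.
import hashlib

class TofuStore:
    def __init__(self):
        self.pins = {}  # package name -> pinned key fingerprint

    @staticmethod
    def fingerprint(pubkey_bytes: bytes) -> str:
        return hashlib.sha256(pubkey_bytes).hexdigest()

    def check(self, package: str, pubkey_bytes: bytes) -> str:
        fp = self.fingerprint(pubkey_bytes)
        pinned = self.pins.get(package)
        if pinned is None:
            self.pins[package] = fp          # first use: trust and pin
            return "pinned"
        if pinned == fp:
            return "ok"                      # same identity as before
        return "ALERT: key changed"          # rotation must be endorsed

store = TofuStore()
print(store.check("leftpad", b"key-A"))  # first use -> pinned
print(store.check("leftpad", b"key-A"))  # unchanged -> ok
print(store.check("leftpad", b"key-B"))  # raises a warning
```

In the rotation case from the comment above, the "ALERT" branch would be suppressed only if the store could verify a signature over `key-B` made by `key-A`.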
> The first step in solving the trust problem is solving the identity problem
I disagree entirely. Knowing that the random "leftpad" library you pulled in was in fact authored by "John Brown, 46 years old, from Milwaukee" does absolutely nothing for your software security.
The only way to audit your dependencies is to actually have someone you trust (e.g. works for you) go and audit your dependencies. The entire system is built on a broken premise.
I'm glad you agree that knowing someone's name, age, and address doesn't prove their trustworthiness, because I don't want trust decisions to be dependent on threats of state-backed violence or mob vigilantism.
It is possible to build up trust in an identity based on how long that identity has been used, and the "transitivity of trust" principle. So you wouldn't trust someone because "John sounds like a trustworthy name", and instead you'd look at how long the author's key had been associated with the library, and whether their key had previously been endorsed on other people's projects (for example having their PRs reviewed and accepted).
Admittedly this introduces a new danger that the social graphs start to become very dangerous honeypots of metadata, especially if we start letting employers vouch for their employees, but the ultimate goal here should be to use something like Verifiable Credentials with zero knowledge proofs, which will allow very strong probabilistic arguments to be made about whether an author (and all the code reviewers) have suddenly gone rogue and decided to burn their hard-earned reputations.
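The "transitivity of trust" idea can be made concrete with a toy graph walk: trust decays with each hop away from the identities you directly vouch for. The decay factor and the names are invented for illustration; real systems (e.g. the PGP web of trust) use more elaborate policies.

```python
# Toy "transitivity of trust": trust decays by a constant factor per
# endorsement hop from the root identity (you). Purely illustrative.
def transitive_trust(edges, root, decay=0.5):
    trust = {root: 1.0}
    frontier = [root]
    while frontier:
        nxt = []
        for who in frontier:
            for endorsed in edges.get(who, []):
                t = trust[who] * decay
                if t > trust.get(endorsed, 0.0):
                    trust[endorsed] = t
                    nxt.append(endorsed)
        frontier = nxt
    return trust

# "me" vouches for alice, who has reviewed bob's PRs, etc.
edges = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
print(transitive_trust(edges, "me"))
# {'me': 1.0, 'alice': 0.5, 'bob': 0.25, 'carol': 0.125}
```

Longevity would enter as a multiplier on each edge (older, uncompromised keys get weighted higher), which is the "how long has this identity been used" signal from the comment above.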
> I'm glad you agree that knowing someone's name, age, and address doesn't prove their trustworthiness
My point is that NOTHING about their "identity" provides trustworthiness, unless you actually know that person and you're contracting them in some way.
> build up trust in an identity based on how long that identity has been used
Why would that be true? Time and time again, we have seen popular packages take a wrong turn. An "identity" is just a key with some untrustable name on it, which can be sold or mishandled just as easily as your NPM or GitHub password.
If your entire security still relies on "this rando didn't do me wrong in the past, they're probably fine" or "they have a lot of GitHub stars", why introduce key management? What does it really get you?
I think https://keyoxide.org provides some kind of middle ground for verifying identity here. The identity there is not meant to be real life names but rather a collection of all social profiles bi-directionally linked together with OpenPGP signatures.
Again, this verifies identities and in no way the software. What's the point?
If you decide to trust "the Python Foundation", what does this key do for you if you're already downloading binaries from python.org? And if you don't, how much does the fact that they have a key help you? Anyone can get a key.
Hackers can compromise python.org and sign stuff with a key advertised there. But the site is just one point. It's much harder to hack python.org and also their GitHub and Twitter account (and DNS and dozens of other supported services).
Keyoxide links the signing key across multiple sites, thus raising the bar for accepting a fake key. It's not a silver bullet, obviously. It just makes the attack harder to pull off, and it's machine-readable (instead of making humans check the keys).
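The cross-checking logic amounts to: accept a key only if enough independently hosted profiles point back to the same fingerprint. Here's a stub of that idea; the fetching and the signed-proof verification that Keyoxide actually does are replaced by a hard-coded dict, and all fingerprints and service names are fake.

```python
# Sketch of Keyoxide-style cross-checking. An attacker who compromises
# one profile (here: "twitter") still fails the threshold check.
def verify_identity(claimed_fp, proofs, min_matches=3):
    """proofs maps service name -> fingerprint found in that profile's proof."""
    matches = sum(1 for fp in proofs.values() if fp == claimed_fp)
    return matches >= min_matches

proofs = {
    "github":   "ABCD1234",
    "mastodon": "ABCD1234",
    "dns":      "ABCD1234",
    "twitter":  "WXYZ9999",  # one compromised profile
}
print(verify_identity("ABCD1234", proofs))  # True: three services still agree
```

The security argument is exactly the one made above: forging the identity requires compromising most of the linked services at once, not just one.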
I totally disagree. If John Brown is a US citizen, works for a major tech company, etc. I feel more comfortable than if it's some anime avatar, location unknown, etc. Risk is a gradient and security at enterprise scale is a huge challenge. This helps move in the right direction. It would be better (of course) to review every line of every package, but what’s the timeline on a typical org achieving that?
Isn't this just like the situation in 2014 where academics were arguing that 'gain-of-function' experiments were dangerous[0], but big government labs and pharma companies were too excited about the research and treatments they could produce?
> DNSSEC, ... still does not work in most TLDs and registrars.
I'd be interested to know where you get your data from. By my count, there are 142 ccTLDs that support it and 106 that don't, out of 248 ccTLDs.[0]
That's already more than half, but if you include the gTLDs then the number of TLDs signed in the DNS root goes up to 92% according to the best data I can find.[1]
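Checking the quoted figures, the ccTLD share works out to well over half:

```python
# Sanity-checking the ccTLD numbers cited above.
supported, total_cc = 142, 248
print(round(100 * supported / total_cc, 1))  # 57.3 percent of ccTLDs
```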
Trusting the country that operates the ccTLD of your website is a much better situation than having to trust all the countries that have CAs operate in them.
A malicious CA in one country can issue a fraudulent certificate for a site in another country, whereas the people operating .ru can't affect the records for example.us so the blast radius is limited by design.
Moreover, no one is required to use a ccTLD, and there are hundreds of gTLDs to choose from, or you could even run one yourself if necessary.
> A malicious CA in one country can issue a fraudulent certificate for a site in another country, whereas the people operating .ru can't affect the records for example.us so the blast radius is limited by design.
Sure, and they'll be quickly distrusted. You can't really revoke DNSSEC trust of a ccTLD operator.
> Moreover, no one is required to use a ccTLD, and there are hundreds of gTLDs to choose from, or you could even run one yourself if necessary.
> Sure, and they'll be quickly distrusted. You can't really revoke DNSSEC trust of a ccTLD operator.
But you don't have to, because the blast radius is so much smaller, and the incentives are aligned better. The reason why CAs require such extreme punishment for misbehaviour is that one bad CA can break the trust for every site on the web.
If a country decided to invalidate the security of (predominantly) its own citizens' websites then that wouldn't harm anyone who used any of the other ccTLDs in the world (not to mention the hundreds of gTLDs).
Also, I think you are over-estimating the ease with which a CA can be "quickly mistrusted". What is the record for how quickly a CA has been taken out of browsers' certificate stores, measured from the time of their first misissuance?
And I would argue that revoking CA trust in Let's Encrypt / IdenTrust would be much more disruptive than revoking a single ccTLD operator, since that would mean breaking most sites on the web. So DNSSEC is actually better in terms of the "too big to fail" problem.
> This is bypassing a dangerous design, at best.
But that's my point; DNSSEC lets you bypass the danger of a rogue issuer, by swapping to an alternate domain in the worst case, whereas with CAs you have to hope that the rogue issuer doesn't decide to target you, and wait for the bureaucratic and software update processes to remove that CA from all your users' browsers.
There are definitely limitations to the DNSSEC system as currently deployed, just as there were with the web PKI system before browsers started to patch all the holes in that, but I don't know why my position on this technical question is so controversial. Nevertheless, I really appreciate you taking the time to offer intelligent counter-arguments in your comment, thank you.
> But you don't have to, because the blast radius is so much smaller, and the incentives are aligned better.
Entire countries' best case is a small blast radius? A small CA going rogue would have a much smaller one, when we're talking about best case. Worst case is massive either way (say Let's Encrypt and .com). People also buy a lot of domains while ignoring the fact that they're ccTLDs. The mere implication that people should choose their domains with this fact in mind is terrible.
> The reason why CAs require such extreme punishment for misbehaviour is that one bad CA can break the trust for every site on the web.
They can, but it'll be discovered really quickly, especially with CAA violations. The same can't be said about DNSSEC: any key compromise and abuse is difficult if not impossible to detect. Imagine that but with DANE: indefinite MITM, scary.
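The CAA mechanism mentioned here (RFC 8659) is simple enough to sketch: before issuing for a domain, a CA must confirm its own identifier appears in the domain's CAA "issue" records. The record set below is hard-coded for illustration; real CAs fetch it via DNS, and this omits wildcard (`issuewild`) and tree-climbing details.

```python
# Toy CAA-style issuance check, loosely modelled on RFC 8659 semantics.
def may_issue(ca_id, caa_records):
    """caa_records is a list of (tag, value) pairs for the domain."""
    issue_values = [v for tag, v in caa_records if tag == "issue"]
    if not issue_values:
        return True   # no CAA records: any CA may issue
    return ca_id in issue_values

records = [("issue", "letsencrypt.org")]
print(may_issue("letsencrypt.org", records))   # True
print(may_issue("rogue-ca.example", records))  # False: a CAA violation
```

A certificate that shows up in Certificate Transparency logs despite failing this check is exactly the kind of quickly detectable misissuance the comment is pointing at.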
> DNSSEC lets you bypass the danger of a rogue issuer, by swapping to an alternate domain in the worst case, whereas with CAs you have to hope that the rogue issuer doesn't decide to target you
That's an insane bypass though. "Just cut your arm off, then it won't hurt." Change your email, figure out how to patch millions of devices out in the wild, so many problems.
A rogue issuer is much less hassle short- and long-term to deal with. Most browsers ship CRLite or similar and can revoke the root quickly. You can resume operation with a new CA rather fast.
DNSSEC is a nice complement to WebPKI and vice versa, but for our all sake, it can't be the only source of trust.
Do you think that the state shouldn't be allowed to kill unwanted Down syndrome babies after they are born because "that's eugenics"?
What about Down syndrome adults, who are reliant on state benefits?
In case it's not clear, I don't support the killing of disabled people at any age, but I understand that different people approach these questions from a different set of assumptions.
My point is that adding scare quotes around "that's eugenics" doesn't stop it from being an accurate description, so your question is not the gotcha you might think it is.
I don't think the state should be in the business of killing anyone.
But a woman aborting a Downs baby is not the state killing anyone, and most pro choice people would say the woman isn't killing anyone either, since the fetus isn't a person in their view.
> If something else with the right properties comes along then we'll adopt it
Have you looked at DID-SIOP?[0] It's based on the "Self-Issued OpenID Provider" extension[1] to OpenID Connect, to make it easier for existing OIDC relying parties to support those identities.