Privacy advocates always just say screw the feds but come on... if police work becomes effectively impossible, what's the point of having detectives? Is having absolute privacy worth an irreversible increase in unsolved murders/human trafficking/etc.?
Why is it so hard to create a cryptographic solution that would allow E2E that works 99.9% of the time, is resistant to malicious actors, but is breakable with a warrant? Have people really even tried?
For example, consider the following system:
1. E2E decryption keys are encrypted with vendor's device-specific asymmetric key
2. Keys are then stored in write-only non-volatile storage
3. Only way to read key is with expensive hardware that is difficult for civilians to obtain/replicate (scanning electron microscope, etc.)
4. Thus, only way to decrypt key is by a) physically obtaining device, b) using expensive hardware to extract encrypted key(s), c) serving encrypted key along with legal warrant to vendor who would then comply with the law (or not, if it is unlawful)
Wouldn't such a system allow privacy yet also be resistant to attacks?
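Sketched with textbook RSA and toy numbers, the flow might look like this (purely illustrative; a real design would need proper padding, real key sizes, and a lot more):

```python
# Toy sketch of the escrow flow above, using textbook RSA with tiny
# numbers. Illustrative only -- real systems need padding and big keys.

# Hypothetical vendor keypair: the private exponent stays in the vendor's
# vault, the public key is burned into each device at manufacture.
n = 61 * 53          # modulus (3233)
e = 17               # public exponent, on the device
d = 2753             # private exponent, in the vault (e*d = 1 mod 3120)

def wrap_session_key(session_key: int) -> int:
    """Device side: encrypt the E2E session key with the vendor public
    key, then burn the result into the write-only chip (step 2)."""
    return pow(session_key, e, n)

def vendor_unwrap(wrapped: int) -> int:
    """Vendor side: run only after receiving the extracted chip contents
    together with a warrant (step 4c)."""
    return pow(wrapped, d, n)

session_key = 42                     # stand-in for a real symmetric key
escrow_blob = wrap_session_key(session_key)
assert escrow_blob != session_key    # the chip never holds the plain key
assert vendor_unwrap(escrow_blob) == session_key
```

The device only ever holds the public key, so nothing readable leaves the chip without both physical extraction and the vendor's vaulted private key.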
It hasn't. That's a lie. Police/etc have far more resources available to them than they've had at any point in history. The trail of digital exhaust we leave behind is only increasing, which makes OPSEC much harder[1]. End-to-end encryption doesn't prevent metadata analysis[2], network usage/timing analysis (e.g. [3]), or investigation of the massive amount of data that isn't end-to-end encrypted.
Also keyboard loggers, anything else that can intercept data at a point of use before/after encryption, and the good old rubber hose. Encryption is not a significant obstacle to specific surveillance. It's only an obstacle to mass surveillance. To put it another way, the regular police will be fine but the Secret State Police have a problem.
Then why do you always read stories about the police having a hard time getting into terrorists' phones and going to extreme lengths to do so?
Shouldn't the metadata be more than enough for them?
Like the San Bernardino shooter, for example[1] (jump to decryption section)
Once upon a time, people did not carry devices around that logged their every move, and police work was still regularly conducted. Currently, there does not exist any technology to read minds. If there were, do you think that the government should be allowed to surveil our thoughts? Do you, umvi, draw the line at encryption, or do you feel there should be no limit to government access to our personal lives?
Just to be totally clear, you mean that so long as the government has a warrant, they should be allowed access into our innermost thoughts and feelings?
In the case of non-existent-and-will-never-exist mind reading tech, then sure, I think the government should be able to examine your thoughts if they have a warrant.
> if police work becomes effectively impossible, what's the point of having detectives?
Police work has become lazy, and they'll have to go back to finding evidence they can actually access.
For more detail on this, look at the musings of the 98th Congress in the 1980s, which speculated at length about what privacy would mean once everyone's files were digital. The period of digital files without encryption made it convenient for governments to circumvent privacy protections. This convenience is being removed. Warrant or no warrant.
> Why is it so hard to create a cryptographic solution that would allow E2E that works 99.9% of the time, is resistant to malicious actors, but is breakable with a warrant? Have people really even tried?
Yup. Tried very hard. It's not possible. This isn't a restriction of our current tech. It's a fundamental part of the mathematics of cryptography. Asking for it is kind of like trying to legislate pi having a different value.
Citation needed. It would be like saying a paper money system would be impossible/ineffective because paper money manufacturing techniques can theoretically be reverse engineered.
In theory that is true, but in practice paper money works quite well.
They are reverse engineered. Paper money has a limited downside. In the US, we know there aren't bills over $100 in general circulation, so you can, with effort, print up a few million dollars. The money supply of paper-ish money (M1) is around 3.4 trillion dollars. From the point of view of the principal actor, the US Mint, this is annoying and needs to be dealt with, but it's not a danger to the system unless it is allowed to run unchecked. The power imbalance involved is what makes it work.
For crypto, the principal actor is whoever is encrypting. The downside of an activist in a repressive regime having their crypto broken is basically unlimited for them. Or take the population that uses ecommerce as the principal actor: the downside for them is enormous, and the upside for someone who suborns the system is incredible. Since we don't know the power and incentive imbalance between the actors, crypto has to assume the worst.
What you think is "expensive" today will not be tomorrow, and there's little that would be "difficult to obtain/replicate" -- SEMs are not hard to get access to, for instance.
There is no security if you leave a backdoor for the government or anyone else.
That's why my proposed system doesn't rely on the difficulty of reading write-only chips alone... You need both:
1. Physical access to the device
2. Physical access to the vendor's key storage (this could be extremely difficult if not impossible if the company were responsible - consider an air gapped storage vault)
I still think my system could work if implemented by, say, Apple. You have only shown weaknesses of individual parts, not weaknesses of the entire system.
You, like everyone else who has these ideas every few years when this government overreach bubbles up to the surface again, have made the mistake of believing that first the government is responsible enough to manage this kind of program -- they aren't and never will be, and second that engineering can stop engineering.
If you can make it, someone can break it, and will. And companies are not responsible, that's not how this works.
Also, a "weakness of the individual part" is a break, and the entire system collapses. You're essentially arguing for security by obscurity.
Who says the government has to manage it? Let Apple manage it so their reputation is on the line. Government just specifies that E2E is breakable with warrant; Apple can implement it how they want to preserve privacy.
> And companies are not responsible, that's not how this works.
Why not? If a company implements a government requirement poorly they are absolutely responsible for the ramifications, just like they are with existing regulations (like if I made a handheld gaming system that jams radios by accident, I'm responsible for not following FCC regulations properly).
> Also, a "weakness of the individual part" is a break
That's ridiculous. That's like saying you can't build a bridge out of pure rebar and you can't build a bridge out of pure concrete because individually they are too weak, so therefore you can't build a bridge out of rebar-reinforced concrete.
You can argue about "responsibility" all day long, until my credit card information gets stolen, and then you find out it's a meaningless concept with a corporation. Do you have the money to sue Apple? I don't. Or worse, someone gets killed because of their data being stolen. Maybe you live in a privileged world where that doesn't happen; the rest of the world doesn't.
If you build a bridge out of concrete with weak rebar, guess what? Bridge falls down, people die. If you build a bridge with shit concrete and great rebar, guess what? Bridge falls down, people die. You've made nearly the perfect analogy for me.
Privacy from the government is something most people want. If the government wants my data, I'm not going to build a system that makes it easier for them to get it.
And finally, you can't ban math. I don't give a fuck who says what, I won't put backdoors in my code, and if someone else does I'll stop using their code and I'll build my own.
Well it's not like you can roll your own hardware, software, firmware, etc. You implicitly trust Apple or Google or whoever to keep you secure at some level, so yes, responsibility and reputation matter. Apple pushes back against illegal warrants to keep prying officials from getting into your iPhone.
> Privacy from the government is something most people want. If the government wants my data, I'm not going to build a system that makes it easier for them to get it.
You won't have a choice; Apple will build the system and it will be transparent to you and they will only let the government into your phone if they have a warrant.
> And finally, you can't ban math.
No, but you can make it harder for people to use said math.
That's like saying "you can't ban physics! If you try to ban my automatic weapons I'll just 3d print a part that makes my semi-auto gun full auto!" Well, yes. You can do that, physics won't stop you. Your average person, however, won't do that and everyone is better off as a result.
> I don't give a fuck who says what, I won't put backdoors in my code, and if someone else does I'll stop using their code and I'll build my own.
Well you'll spend the rest of your life living like RMS then because you can't control the whole stack of a phone - it takes millions of man hours to develop and maintain.
Ah so it's an inevitability argument now. You're all over the place.
I don't care if you believe that it's inevitable. I don't, and won't live my life that way. Slavery was inevitable once. The holocaust was inevitable once. People standing up against it stops what weak minds believe inevitable.
And gun parts are machined, not 3d printed for the most part. The laws against it are directly contrary to our Constitution and a lot of people dedicate a lot of time to working against those laws. But even that isn't math -- you simply can't ban math. The files that describe how to machine those parts -- those are readily available all over the world, even where they're banned...
Also, I trust Apple a lot more than I trust the other tech companies, because they've been willing to stand up for the consumer -- other companies don't.
I believe it is possible to design a secure system that complies with warrants while otherwise retaining user privacy; you don't. That's okay.
Before computers existed police could bust down your door with a warrant and search through the files/mail. I don't see why that should change just because we store files/mail digitally now.
It's not a matter of belief. There is fact, which is that a broken system is broken, and there is fiction, which is the belief that a broken system can be only broken when requested to be broken, presumably by somebody with the right credentials.
No one with any credibility in the field of math or CS believes this fiction. In addition, I encrypt my data specifically so that no one can get to it. That includes the government. I don't care what the government thinks they are entitled to with regard to my data.
All systems are broken, then. Even the one-time pad, though mathematically perfect in theory, is flawed in execution as proven by the NSA.
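(What the NSA actually exploited in VENONA was pad reuse: XOR two ciphertexts encrypted under the same pad and the pad cancels out, leaking the XOR of the plaintexts. A minimal demonstration:)

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(16)          # the "one-time" pad

# Used once, the pad is information-theoretically secure...
c1 = xor(b"ATTACK AT DAWN!!", pad)

# ...but reused (as in VENONA), the pad cancels out of c1 XOR c2,
# leaking the XOR of the two plaintexts to any eavesdropper.
c2 = xor(b"RETREAT AT DUSK!", pad)
leak = xor(c1, c2)
assert leak == xor(b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!")
```

So the math was perfect; the execution (reusing key material) is what broke it.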
My proposed system in the original post is just as secure as any current scheme.
> No one with any credibility in the field of math or CS believes this fiction. In addition, I encrypt my data specifically so that no one can get to it.
Huh? It's easy to get an individual's data. Don't fool yourself. You are not low hanging fruit in broad attacks, but if a highly skilled group wanted your data, they could easily get it. I would just look at what keyboard you use and swap it with an identical one that has an invisible keylogger in it. Or I would get to know you and your coworkers and do a precise spear phish.
Bam, instantly compromised, no decryption needed.
It's only if you commit a terrorist attack and then commit suicide that I have a hard time getting your data because I can no longer spearphish or keylog you.
A one time pad is not broken by design. Your scheme is.
You can’t get my keyboard, and I don’t answer phishing attacks. This isn't about that -- in fact you're making the argument your broken key management system isn't needed.
No one is immune from spear phishing. No one. A competent spear phisher will compromise one of your coworkers first (maybe even your manager), then target you posing as them. Click a (legit looking) link and you are pwned.
And yeah, I don't know who you are obviously. But if I did and you were my target, I could pwn you easily. Pwning individuals is easy with keyloggers.
Also, my scheme isn't broken by design, you just labelled it as such with no evidence. You never said why it's broken by design. Because it's not.
You’re making my point for me. Since you believe so strongly in social engineering, there’s no need for backdooring crypto algorithms!
You have articulated clearly that you require your proposed system to have a backdoor for the authorities to use to read the traffic. That is the definition of broken by design.
You are picking and choosing which parts of my comments you read and are blind to the rest. Backdoors are not "broken by design", that's just another slogan you are chanting.
Not just murder but child porn, human trafficking, terrorism, etc.
My muslim coworker was recently arrested by the FBI for trying to carry out a truck attack (apparently he was radicalized by online ISIS videos) - it might be nice to get a warrant for his phone and see who else he was talking to or conspiring with.
Or, for the sake of privacy, we could just not have any of that information, which is what you are suggesting?
The bar is: "this man is a known terrorist and has committed a domestic terrorist attack; therefore we have grounds to search his private records to search for co-conspirators, accomplices, etc."
I'm sure they'll get a warrant for his phone and have a look.
The problem is that we have already extant and widely circulated encryption algorithms that (we believe) are secure against decryption. It's math. You can't "uninvent and destroy" all the math and preclude its use by adversaries determined to use it.
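To make the "it's math" point concrete: the core of a Diffie-Hellman key exchange is a few lines of modular arithmetic (toy 64-bit prime here for the demo; real deployments use far larger groups):

```python
import secrets

# The "math you can't uninvent": modular exponentiation is all it takes.
p = 2**64 - 59       # a small prime modulus (demo size only)
g = 2                # generator (adequate for a demo)

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)     # Alice sends this in the clear
B = pow(g, b, p)     # Bob sends this in the clear

# Both sides derive the same shared secret; an eavesdropper who sees
# only A and B faces the discrete logarithm problem.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

Anyone with a calculator and the published papers can rebuild this; there is nothing to confiscate.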
True, but you can make it more difficult for normal people to use. You can't uninvent fully automatic firearms, but you can make them difficult for normal people to obtain, such that they are effectively never used (even by determined terrorists plotting mass shootings).
You are comparing harmless messages to a dangerous weapon. It would be ridiculous if one could buy a rifle with a range of hundreds of meters but couldn't use an algorithm to encrypt a message.
Who says messages are harmless? I disagree vehemently and contend that information can be indeed more harmful than firearms in the hands of the right people.
A single message can mobilize millions of people to take to the streets and riot. A single firearm can't do that.
A message sent to millions of people will be either public or will be leaked. So it doesn't make sense to ban encryption for such messages. Better ban riots. Also, there is something wrong in the country if millions of people are ready to riot.
>If you want less murders, maybe it makes sense to stricten regulation on firearms?
Why haven't they thought of that in Chicago, Stockton, Newark, Detroit, Memphis, Baltimore!? Oh wait, no.
Stolen guns actually make up the majority of guns used in crimes, along with straw purchases, neither of which "your plan" addresses. You know, the reality of the situation, not what you heard on the news.
I suppose there is also the fact that it doesn't address the murder rate in London being the same as NYC's, despite an extreme lack of availability of guns around London. Which strongly implies that guns aren't the only way murders are committed.
I was ready to agree with you the problem was just "regulations" but then I thought about it.
Yes. New York does have additional restrictions on carrying. A case is going to the Supreme Court right now, and is expected to find those unconstitutional, so much so that the state of NY recently tried desperately and failed to not let that happen by voluntarily changing their law to get ahead of a ruling. [0]
If you are arguing that NY and London shouldn’t have the same murder rate, well, I don’t see how that plays for three reasons.
1. Despite the laws, criminals break them. Guns aren’t allowed to be carried in NY, but people do it anyhow. Just as heroin and coke aren’t legal ANYWHERE but we have lots of crime with them.
2. The murders in London are mostly knife as the weapon. New York has knife laws too, but not as strong as London iirc. I know my everyday carry knives are all illegal in both places.
3. Very much on topic, Illinois, Delaware, New Jersey, etc. have some of the strongest antigun laws in the country and the cities with the highest crime, with guns as murder weapons. Here is the thing... we know EXACTLY, down to the neighborhood, where our crime is, and our rural areas with all the guns are on par with rural Europe's murder rates of 2-4 per 100,000.
Has there been an increase in unsolved murders/human trafficking? Is there evidence that traditional police methods are no longer effective? Let's not infringe upon personal liberty for the sake of an extremely hypothetical worldview.
Also you are proposing a solution where access to private information is gated by economics. I'd invite you to think about the connotations and ramifications of that model in our current society a little more.
> You left out b) vendor has the keys stolen or leaked by a disgruntled employee and now the encryption is useless.
e) If the keys are stolen, issue new keys to all devices
The leaked keys are only good for physically compromised devices in the hands of people with access to the scanning electron microscopes, which I daresay is an extremely small attack surface.
There is only a small window after the leak in which a device can be stolen, powered down, and compromised.
On the other hand, you could mandate that such keys aren't allowed to be stored in databases (physical access only)
You don't always know that keys have been stolen. And an electron scanning microscope is hard to get now, but what about state-sponsored actors spending half a decade developing a pocket-sized tool? The whole point of E2E is that all of these scenarios are literally not possible.
Well, periodically reissue keys then regardless of if you think they've been compromised. Or don't store the private key in a database, store on physical media in a vault that is airgapped and hard to access. Make the read-only-ability of the storage chip more difficult and onerous with each generation like paper currency security.
My point is that you could make it so difficult to break E2E for even the most elite hackers that the only realistic way to do it is with a warrant.
Not if you're sponsored by a hostile actor with functionally limitless resources. E2E isn't just about stopping legitimate law enforcement from conducting investigations.
The more realistic scenario is already possible today, and doesn't need to involve so much technical mumbo-jumbo: at step #2, instead of stealing your phone, they kidnap you, and torture you until you give up your password. Done, and no need for steps 3-5.
(And I suspect, for a sufficiently-motivated state-level actor, that actually falls under "easy", or at most "medium".)
> We've seen this happen with TLS certificate authorities
Have we? I'm going to assume that you mean CAs in the Web PKI and not just "My friend Bob runs TLS and this has happened to the CA he was running on his Windows 10 laptop".
The last CA where we had a really grave problem was DigiNotar, in 2011. It seems _very_ unlikely that the problem at DigiNotar was full key compromise; instead, bad guys appear to have penetrated the issuance infrastructure. This means they were able to (and did) issue themselves arbitrary certificates, but it did not give them the actual keys as you've said "happens frequently".
Since then we've seen a variety of unacceptable behaviour, including issuing backdated certificates to conceal the (also problematic) choice to continue doing something that was no longer allowed in new certificates, and issuing "test" certificates which would have been trusted by real client software even though their contents were known to be false. All unacceptable, and all having consequences (for example Symantec is no longer a CA) but all far short of "vendor has the keys stolen or leaked by a disgruntled employee".
Well they can already do that if they want to, so this system doesn't change that. Responsible vendors wouldn't do that though (and if they do, just don't buy their products)
They can already do that if their crypto system is designed around derivative keys that they control, but you can design a system that does not work that way.
Apple, for instance, encrypts a lot of your data using your AppleID password, which they obviously don't know, so they have no way to decrypt that data.
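A rough sketch of that general technique, a data key derived from the user's password with a KDF (the KDF choice and parameters here are my assumptions, not Apple's actual scheme):

```python
import hashlib
import secrets

# Sketch of password-derived encryption keys. PBKDF2 parameters here
# are illustrative assumptions, not any vendor's real configuration.
salt = secrets.token_bytes(16)   # storable server-side; harmless alone

def derive_key(password: str) -> bytes:
    """Derive a 32-byte data-encryption key from the user's password.
    The vendor stores only the salt, never the password or this key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

key = derive_key("correct horse battery staple")
assert derive_key("correct horse battery staple") == key  # deterministic
assert derive_key("vendors-guess") != key   # no password, no key
```

Since the vendor never sees the password, there is simply no key on their side to hand over, warrant or not.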
I'm not an expert, but my understanding is that encryption that is E2E in 99.9% of cases is, definitionally, _not_ E2E, as it is no longer required to be at either end point.
With my proposed system though, there is no way for the chip to "transmit" anything. It's a passive, write-only storage chip that requires physical optical examination with expensive hardware to extract information from it.