When President Obama stated in December[0] that we will deliver a "proportional response" to Russian hacking at the "time and place of our own choosing", it seemed that most of the country was proud, almost gleeful at the thought that we would be striking back. I for one was mortified.
We should not be escalating cyberwar, even if we do have proof of who attacked us. People are going to die. We can see from this article that hacking can cause serious real-world problems.
When we strike back, does Russia then strike back again? What does it look like after four or five volleys? Will entire power grids be down for days or weeks? Will the stock market crash?
It's time for the American people to demand that the N.S.A. become a defensive organization, not an offensive one. And it's time for us to demand peace in general. Cyberwar is war.
> When we strike back, does Russia then strike back again? What does it look like after four or five volleys? Will entire power grids be down for days or weeks? Will the stock market crash?
And at what point, after "cyber-damaging" some piece of critical infrastructure (and/or harming/killing people), does the other side run out of exploits and just launch actual missiles instead?
This is an under-appreciated point. We've forgotten all the old hard Cold War lessons about escalation.
What we are seeing now is a huge expansion of deniable and proxy warfare. Everyone is still (I hope) clear that sending an actual tank division across the Polish/Ukrainian border would be met with nukes. So the question is: what is the largest, most damaging attack that can be carried out without reprisal, and how do the participants find this out?
And of course, the ultimate in deniable attacks is one carried out by the enemy's own forces. Did cyberwarfare crash a US destroyer into a cargo ship? (Almost certainly not, but maybe next time). SWAT-ing is already established as a tactic; could suitably forged communications or "fake news" disseminated via the President lead to the SWAT-ing of a country? There are some suggestions that this applies to the Qatar situation.
And Russia reportedly even has a dead man's switch that can be triggered by seismic activity on Russian soil and rapid changes in air pressure. I'd presume that such a system is not connected to the internet.
Nuclear capabilities will probably not be affected much by cyber attacks over the internet. There are other vectors, but they obviously require a much more serious investment in espionage, spycraft, etc.
The primary way for the NSA to be a defensive organization would be for it to very publicly take a lead in closing up the holes they find on a structural level.
Whether the NSA hoards zero-day exploits or not isn't the big issue since someone will be doing that. The issue is they should be sounding the alarm on whatever broad class of system vulnerabilities they find.
They should be evangelizing against remotely updateable hard drive firmware, against insecure IoT devices and similar things.
I think the world-class individuals who made the NSA into a competent organization left after the end of the Cold War. It looks to have been run by wingnuts ever since; Keith Alexander's "Star Trek" room ( https://www.theguardian.com/commentisfree/2013/sep/15/nsa-mi... ) comes to mind.
I mention that first one especially because Pass-the-Hash was a major reason the most recent ransomware outbreak was able to get onto already-patched machines.
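For anyone who hasn't run into Pass-the-Hash: the core problem is that in NTLM-style challenge-response authentication, the stored password hash is itself the secret used to answer the server's challenge, so stealing the hash is as good as stealing the password. Here's a deliberately simplified Python sketch; the hash and MAC choices are stand-ins, not the real NTLM algorithms:

```python
import hashlib, hmac, os

def stored_hash(password: str) -> bytes:
    # Stand-in for the real NT hash (which uses MD4 over UTF-16-LE).
    return hashlib.md5(password.encode("utf-16-le")).digest()

def client_response(secret_hash: bytes, challenge: bytes) -> bytes:
    # The client answers the challenge using the HASH, not the password.
    return hmac.new(secret_hash, challenge, hashlib.sha256).digest()

def server_verify(secret_hash: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(client_response(secret_hash, challenge), response)

server_db = {"alice": stored_hash("hunter2")}
challenge = os.urandom(16)

# Legitimate login: derive the hash from the password you know.
ok = server_verify(server_db["alice"], challenge,
                   client_response(stored_hash("hunter2"), challenge))

# Pass-the-Hash: an attacker who dumped the hash never needs the password.
stolen = server_db["alice"]
pwned = server_verify(server_db["alice"], challenge,
                      client_response(stolen, challenge))

print(ok, pwned)  # True True
```

The point of the sketch is that `server_verify` never sees a password, only the hash, which is why credential dumping plus lateral movement works even on fully patched machines.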
> Note: The IAD.Gov website uses TLS 1.2, supported by a Department of Defense (DoD) PKI certificate, to ensure confidentiality and integrity for all users. IAD.Gov website users will need to have the current DoD Root and Intermediate Certificate Authorities (CA) loaded into their browsers to avoid receiving untrusted website notifications.
I mean, it still doesn't make sense... Where it says "for all users" it should say "for those who have this certificate installed", because that's what the warning is about. And "to avoid receiving untrusted website notifications" should read "for a more secure connection" because, you know, what's the goal here, really? :)
I'm not a coder, but from listening around here: what if a government agency took the lead in developing a safer open source operating system? With the knowledge they have regarding attack vectors, they would seem to have elevated insight into what to avoid.
Oh what a world we live in where cyberwarfare could result in death. I would have never thought, as a kid, that life (and death) would end up this "real".
The hacker Karl Koch [1] believed he was responsible for the disaster at Chernobyl [2] back in 1986, which has caused 2M deaths over the last 30 years. I think we will never know whether he was right or not.
I'm confused what the connection is with Chernobyl.
But 2M deaths in the last 30 years is not likely correct - this interesting WHO report on the matter suggests the final toll from the radiation would be more like "up to 4k":
> The main causes of death in the Chernobyl-affected region are the same as those nationwide — cardiovascular diseases, injuries and poisonings — rather than any radiation-related illnesses
>> The main causes of death in the Chernobyl-affected region...
The Chernobyl-affected region includes big parts of Western Europe. Back in 1986 my godfather was cleaning trains coming from Russia through Ukraine (though not directly from the Chernobyl region) and was contaminated. He lost his teeth and hair. We lived ca. 800 km away from Chernobyl.
On the other hand, thanks to the river Dnepr, the Chernobyl region is connected to Ukraine's groundwater system. The estimate of 2M deaths caused by long-term effects of contamination comes from a friend from Kiev. She never drank tap water.
While not affecting humans directly, an indication of the far reach of Chernobyl is that sheep farmers in parts of Norway still have to medicate the herd.
It's such an extraordinary claim that, without evidence, it's reasonable enough to dismiss. The causes of Chernobyl have been pretty exhaustively thrashed out, and it all seems pretty well explained without any hacking intrusion.
If every claim like this has to be met with a shrug and "I guess we'll never know the truth" then it becomes very hard to ever know anything, because there's always some nutter willing to claim something that's totally at odds with all the available evidence.
The causes of the Chernobyl disaster are extremely well researched and documented; the explosion was caused by a planned test that went terribly wrong. Nothing I have read indicates anything about a hack, and everything can be explained by the actions of the people physically present.
> The hacker Karl Koch [1] believed he was responsible for the disaster at Chernobyl [2] back in 1986, which has caused 2M deaths over the last 30 years.
I don't think that's right, the German wikipedia article says:
> Im April 1986 kam es zur Katastrophe von Tschernobyl. Karl Koch, zu diesem Zeitpunkt schon lange schwer drogenabhängig und in einem oft zweifelhaften geistigen Zustand, sah dies als unmittelbare Folge eines seiner Hacks an, da er kurz vorher in den Rechner eines Atomkraftwerks eingedrungen war.
(Translation: "In April 1986 the Chernobyl disaster occurred. Karl Koch, by this point long since severely drug-addicted and often in a dubious mental state, saw it as a direct consequence of one of his hacks, since shortly beforehand he had broken into the computer of a nuclear power plant.")
Unless my understanding of German is flawed, what I'm reading is that it was Koch's drug-fueled paranoia that made him believe he was responsible for the Chernobyl disaster because he had recently hacked into a computer in a nuclear power station. It doesn't say whether this computer even was in Chernobyl, maybe he didn't know exactly where the computer on the other end was, or maybe it was more paranoia. He was, after all, in a pretty bad state near the end of his "career".
It's such a fascinating and sad story, this guy. I watched the film "23 – Nichts ist so wie es scheint" a long time ago (initially because of the RAW/Discordia/Illuminatus Trilogy references), but I don't remember anything about Chernobyl in it.
There was another film, a US movie IIRC, that tells his story from the side of an FBI or CIA (or some agency) agent chasing him. I forget the title, anyone know?
If organization A searches for exploits for five years in a given piece of software, and organization B does the same, what percentage of the exploits they find will be found by both rather than just one?
If hackers/governments tend to find the same exploits as each other then it makes more sense to take a defensive strategy since you can protect against a large portion of the weapons your enemies develop. But if there's not much overlap in which exploits are found then an offensive strategy may make more sense.
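This rediscovery question can be made concrete with a toy Monte Carlo model. Suppose some bugs are intrinsically easier to find than others; if "findability" is heavily skewed, independent teams rediscover each other's exploits and defense pays off, while uniform findability means stockpiles stay unique. Everything here is illustrative: the weight function and the parameters are made up for the sketch.

```python
import random
from itertools import accumulate

def overlap_fraction(n_bugs=1000, n_found=30, skew=2.0, trials=100, seed=0):
    """Toy model: two organizations independently hunt bugs in the same
    codebase. Bug i has 'findability' weight 1/(i+1)**skew, so a high
    skew means everyone tends to stumble on the same easy bugs first.
    Returns the mean fraction of one org's finds the other org also found."""
    rng = random.Random(seed)
    cum = list(accumulate((i + 1) ** -skew for i in range(n_bugs)))
    total = 0.0
    for _ in range(trials):
        found = []
        for _team in range(2):
            bugs = set()
            while len(bugs) < n_found:
                bugs.add(rng.choices(range(n_bugs), cum_weights=cum, k=1)[0])
            found.append(bugs)
        total += len(found[0] & found[1]) / n_found
    return total / trials

# Skewed findability -> heavy overlap (defense protects against the
# enemy's arsenal); uniform findability -> little overlap (it doesn't).
print(overlap_fraction(skew=2.0))
print(overlap_fraction(skew=0.0))
```

Under these made-up assumptions, the skewed case produces large overlap and the uniform case produces overlap near `n_found / n_bugs`, which is exactly the fork in strategy the question describes.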
In the nuclear arms race there was basically one type of attack and one set of mitigations. In cyberwarfare there are potentially endless types of attacks, each with its own set of mitigations. This changes the calculus.
The likelihood of one's weapons being stolen and used against you, your allies, or humanity in general is also a huge consideration.
I don't want the NSA to become a defensive organization. I like that Iran's nuclear program is being delayed. That's not a defensive action.
You don't seem to be proposing that the NSA become defensive, either. If Russia attacks us and we don't respond, that's not even defensive. That's dismantling.
A defensive response would be to inform major vendors of our own infrastructure (e.g., Microsoft, Cisco) of the gaping holes in their systems, instead of leaving them open for the world to attack.
Right. It's a false dichotomy that the NSA doesn't do that and needs to disarm itself. There's a happy middle ground where people don't attack us because they're worried about our retaliation and we also improve our own defenses.
"Cyberweapons" should be treated like bioweapons: there should be a framework of international law to decommission them, because they are simply too prone to spreading on their own and too dangerous to civilians. The US does not have a defensive anthrax capability (I believe), because it's too much of a liability.
What is a cyber weapon? Knowledge of a vulnerability? Exploit code? How do you propose regulating it? Much of this has been tried before and ended up hurting rather than helping.
Cyberweapons created by the USA were used to shut down the entire national hospital system of the UK, one of their biggest allies. Good job USA!
Iran is educated enough to make nuclear weapons, but you want their program shut down because you think they're not responsible enough to be trusted with nuclear weapons.
The US is educated enough to make cyber-weapons, but I want their program shut down because I think they're not responsible enough to be trusted with cyber-weapons.
Maybe the UK should go on the offensive, wipe a few NSA datacenters, for the long term safety of the British people?
You are applying logic to politics, if that worked we wouldn't have problems.
We demand Iran disarm to shame them on the world stage as violators of non-proliferation treaties, and we only do this because they are our "enemy"; if they weren't, we wouldn't care. It's cronyism and nepotism, just on the scale of nations, and I don't think that can change for a while.
Defense may be the only game worth playing, but how will that work? Unlike the real military where civilians simply don't own the hardware, in computer security they do.
NSA isn't a hardware or software vendor, and the corporations that are don't have much of a profit motive to heavily invest in security. They aren't actually liable for problems unlike say a car manufacturer that releases a faulty product, which leaves what exactly... reputation that takes a hit? But every vendor has bugs and security issues and the market isn't really punishing anyone.
Is the future effectively an enormous government subsidy to profitable corporations (i.e. NSA and other US government agencies basically become extensions of corporate America's QA department)? Is the future heavy regulations to create the proper financial incentives and/or penalties so corporations start seriously spending on security?
It's easy to say "the government should do something!!" but what exactly will that look like?
Regulate operating systems. Fund programs and research to work out how to create operating systems for our infrastructure that contain fewer zero-days. Ensure we're the ones who find the zero-days first.
The reason we're vulnerable is that we're unwilling to pay the cost of finding the exploits, but people in developing nations ARE, because they work for "less".
Right now our economies and systems reward those who fly by the seat of their pants and don't care about security. That is the problem. The free buffet of infinite growth from technology startups is the very thing that also gives us this pain, and we need to learn to eat less.
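"Finding the zero-days first" in practice mostly means systematic bug-hunting, e.g. fuzzing. A toy illustration below: the `parse_packet` format and its planted length-field bug are invented for this example, and real fuzzers (AFL, libFuzzer) are coverage-guided rather than blind, but the economics are the same: whoever runs this at scale finds the bugs first.

```python
import random

def parse_packet(data: bytes) -> str:
    """A deliberately buggy toy parser (invented for this example): it
    trusts a length field without validating it against the actual
    buffer size -- the classic shape of many memory-safety bugs."""
    if len(data) < 2 or data[0] != 0x7E:
        raise ValueError("bad magic")          # clean, handled rejection
    length = data[1]
    payload = data[2:2 + length]
    if length > 0 and len(payload) < length:
        # Stand-in for an out-of-bounds read in an unsafe language.
        raise IndexError("read past end of buffer")
    return payload.decode("latin-1")

def fuzz(iterations=50000, seed=1):
    """Dumb random fuzzer: throw random byte strings at the parser and
    collect every crashing input along with its exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            parse_packet(data)
        except ValueError:
            pass                               # expected error path, not a bug
        except IndexError as exc:
            crashes.append((data, exc))        # the kind of bug worth disclosing
    return crashes

crashes = fuzz()
print(f"found {len(crashes)} crashing inputs")
```

Even this dumb loop reliably hits the planted bug; the policy question in the thread is who does this work and whether the findings get disclosed or stockpiled.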
This would take an extraordinary departure from our current politics. Government intruding on software would (rightly) cause cries from the most stalwart Free Software advocates and from proprietary software companies.
Can you imagine the outcry if a new Linux fork had to seek government approval in order to post their distribution?
Can you expect Google or Oracle to fail to lobby the government to make sure they don't have to get each major revision certified?
When we're trying to protect ourselves from the vulnerabilities hoarded by our government, I think that asking them to regulate the OS might just be a step backward.
Regulation, especially for software powering critical systems. We'd have to have a good definition for what critical means, probably with different levels that escalate the security requirements.
> I'm not sure why we collectively decided to lack courage
In this case, courage is stupidity. Why waste time knocking out their power when we can spend that time making our own power more secure?
You're like a webmaster with a site downed by a DDoS going:
> Well, I've DDoSed the attacker's site, so it's okay
No, it's not; you've achieved nothing. Get with the program: this is a defence world, not an offence one.
In situations where Person A is attacking Person B and making it sound like Person A's solution is the only one, Person A is usually mistaken.
My mind is open. Convince me. Why is it a good idea to remove the threat of retaliation from our toolkit? It seems like one of the most persuasive reasons not to attack us.
Obviously attribution becomes far more important with cyberattacks, but that's a tangential point.
You're acting like the power grid going offline is equivalent to a thermonuclear explosion. Yes, it will suck, but it's temporary. And afterwards, those exploit vectors will be patched.
> Obviously attribution becomes far more important with cyberattacks, but that's a tangential point.
No, it's not; it's the whole point. The point of the age of information is that it is a departure from the age of blood and steel. The tactics of blood and steel that you are supporting have no place in this future and are counter-productive.
The point of the age of information is that power is no longer solely in the hands of nations. Therefore, treating it as a case of "stomping on the bad people" means your pool of potential attackers increases from all the nations in the world to all the people in the world. Added to that, the evidence of those attacks can and will be forged.
You're chasing shadows and it's not worth it. Just build a better shield; that's all that matters.
Yes. This. Old government people think of everything in terms of "weapons" and "wars" and they still think the word "cyber" is cool. We need to promote, like, a radical new school of thought about software: Creators of a system should be fully responsible for whatever damage the system causes. A bug is a bug, whether it's exploited by an outside actor or it happens during normal execution. Developers should be responsible for bugs. I like the "defective lock" analogy from https://news.ycombinator.com/item?id=14663474
The problem is such traditional calculus doesn't apply anymore in cyberwar.
For one thing, your big "guns" weaken your own infrastructure, since you use about the same technology that your adversary uses too. Secondly, your guns can be copied without many problems and then suddenly they are directed at you and your weakened infrastructure.
I’ve never liked the term ‘cyberweapon’. It is subtly misleading and gives the non-technical masses misconceptions about how exploits actually work. "Cyberweapon" implies that exploits are created by governments and let loose on the world, when in reality exploits are existing flaws that were simply discovered by governments or individuals. Exploits are like a serious manufacturing defect in a lock that was only discovered after the fact. I think this misconception has real implications. If you compare exploits to nuclear weapons, the public reaction will be that we need more secrecy and more weapons to protect us from hostile nations. If, however, you changed the analogy to something like a defective lock, it becomes obvious that what we really need is openness so exploits can be fixed.
While I understand where you're coming from, and I agree with you to some extent, it's not really that simple.
While the majority of exploits we currently see in the wild are things I think the "defective lock" analogy works well for, there's a subset of attacks that would be equivalent to cutting the lock with bolt cutters.
In those cases, there are specially crafted tools that aren't exploiting a defective lock, they're destroying the basic premise that let the lock work.
I'd say that RowHammer fits that description pretty aptly. It's a cyberweapon. It's not an exploit.
It's so much of a weapon that (as far as I know, someone please correct me if I'm out of date!) there's still no known mitigation strategy that completely solves the problem. We have lots of partial mitigations, but nothing surefire yet.
So... it's both. We certainly have lots of defective locks, but we also have some very nasty tools that exploit some fundamental premises of our tech in clearly malicious ways, and were absolutely designed and implemented to do exactly that.
In this case we're talking about the recent NSA exploits leaked by Shadow Brokers and then utilized by WannaCry and the Petya attack. Both of those had a "weapon" (the ransomware itself) that spread by way of an exploit found by NSA (a defect in Windows that people failed to patch).
In this case, I feel that blaming NSA for the exploit and calling those "cyberweapons" is wrong. The entity who put the ransomware on top of them and deployed them built a weapon.
Chemical and biological weapons also fit your bill: we didn't create or invent anthrax or smallpox.
However, a government deliberately hiding results from their medical research, or misleading/exploiting their pharmaceutical industry, would be in a dark ethical corner.
Not trying to claim whataboutism, but I think there's an elephant in the room. The end result of the NSA saying "ok, as of today we've completely disarmed our cyberweapon stockpile and released patches for all vulnerabilities to the appropriate software companies" wouldn't be the end of cyberattacks. It would just be someone else doing them. I don't know what the real solution is. Maybe there is none.
The point is that there would be fewer cyber attacks, both because the NSA itself would no longer be adding to the number of hacks and because the NSA would use their sizeable budget to discover and disclose vulnerabilities, presumably making all of us safer.
Their budget is sizeable but less than the annual profits of Google, Microsoft, Apple, etc. And NSA pays for tons of stuff that those corporations don't have to deal with like having thousands of linguists.
Where is the responsibility of corporations in all of this? They have a cash pile that dwarfs the entire intel budget and ought to be the FIRST entities that invest in fixing their OWN products, right?
> Where is the responsibility of corporations in all of this?
Somewhere around here :-
> "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
"Sorry, shareholders, you'll be getting a tiny dividend this year because we are spending a huge part of our 'profit' on backfixing all the shit we let slide," said no CEO ever.
> because the NSA would use their sizeable budget to discover and disclose vulnerabilities
Right now, the process is:
1. Find a way to survey the target environment, learn what software and hardware they are running
2. Acquire vulnerabilities to exploit the known target software/hardware, either from a third-party, a contractor, or manual reverse engineering of those specific components.
3. Adapt mission specific payloads to use on the target.
4. Use for as long as needed.
5. Disclose to vendor after the purpose is served.
If the agency mission changed to be purely about discovery and reporting, that might not help the general public. If this were to happen, it seems like the focus would likely be on protection of only software/hardware in classified systems, as that is already their defensive mission. Instead of having things like EternalBlue patched we would have them open and available for a different party to discover. That seems worrisome to me, but I am really curious to know if you might have a different take on how this would work.
This is a strawman. No one knowledgeable is saying it would create 100% security, just that it would be a net increase in the security of our infrastructure.
The sentiment I've seen from people is "if we can just stop the NSA from doing these, our problems will go away".
What I'm saying is not to do nothing, but rather we need to have a plan for continued attacks (like how spam filters came into being) in addition to trying to get any and all vulns fixed.
Somebody could have done that by now as well, but nobody has made them so far (or at least nobody has used them in any significant way that people know of).
Instead of (ab)using somebody else's mistakes to your own advantage (and possibly have it backfire) you could also tell that person about their mistakes so the whole world could benefit and there would be 1 issue less in the world to worry about.
People have, in the past. The problem is that we will never remove all 0days until we stop releasing software. That's not to say we shouldn't try (to Quarrelsome's point), but eventually the stockpile today will be obsoleted by the stockpile of tomorrow. And if nation states didn't have a pile, the seedy side of the internet would, alongside trading botnets, credit card lists, etc. My point being that while noble efforts, it won't go away and we need to figure out how to deal with it.
Here's one reason such a stockpile could be used for good: say a previously unknown vuln is attacking "our" (whomever that is for you) infrastructure. The command and control has been traced back to a cluster that's vulnerable to one of the weapons in your stockpile. Now you can potentially disable it, stop it spreading, tell all of them to run an updated version of the code that essentially does nothing, etc. For all I know, this could have happened already.
> White House officials have deflected many questions, and responded to others by arguing that the focus should be on the attackers themselves, not the manufacturer of their weapons.
Am I to understand that if somebody managed to steal nuclear warheads and launch them, we wouldn't hold the people who failed to protect them responsible?
I agree that's a bit of a stretch for a comparison. I think it would be closer to saying that the NSA found a key to a company's system, didn't inform the company, and then got the key stolen from them by criminals which then brought down the system.
Obviously that's extremely simplified, but all parties here are at fault. And all we're seeing is a bunch of finger pointing, with not enough defensive and preventative action being taken.
With the initial vector being some widely used Ukrainian tax software, and the network vector being psexec/wmic with mimikatz-harvested credentials, the actual usage of 'NSA cyberweapons' was just a backup.
I suspect this attack would have had a similar number of victims without EternalBlue/DoublePulsar and EternalRomance.
The existence of 'nation state' offensive tools has little bearing on the exploitability of a poorly configured enterprise network, when most victims were exploited by open source offensive tools, even on patched machines.
You are correct regarding the most recent ransomware not actually needing EternalBlue and just adding it if needed (As proof of concept code is widely available for it).
I think concerns include things like WannaCry too though, which did indeed rely primarily on use of EternalBlue.
Cyberwar hasn't necessarily led to the massive loss of life associated with nuclear or chemical weapons. Until that happens (or like with nuclear weapons, we can culturally show how much of a zero-sum game it is), ordinary people won't have an incentive to take action. Technically minded people may carry capital, but we're vastly outnumbered by the dwindling working class politically.
The only way a nuclear system could be compromised is if there was some idiot surfing the internet on it, or if someone intricately familiar with the systems and network tailored an exploit to target it. If we have teams of people targeting specific systems like this, and just hovering over the execution button, then I'd say this is a huge problem. Nothing in this article really described the nature of the threat these weapons pose.
If history has told us anything, it's that this won't be fixed until we wake up one day and the majority of the computers in the world are bricked. Then the government will act. Not before.
Five Eyes are meeting this week to develop a backdoor plan...There should definitely be a big backlash against it, especially in light of recent events.
Backdoors in US infrastructure = invitation to Russia and China to go right through it.
Prediction: as this becomes more frequent, the only thing that will substantively change is the frequency at which newspapers report that fear is raised.
Closed source vs. open source: why are such exploits not so evident on macOS, Linux, BSD, etc.?
Is it only because Microsoft dominates the desktop market? Edward Snowden clearly showed MS's relationship with the NSA. I do not believe at all that such exploits are ${Discovered} by the NSA; rather, the exploits are backdoors that MS provides. That is why they are now blaming the NSA and putting on a marketing campaign to show how much they care about everyone's ${Security}. So what if this is the case? To me it's pretty obvious who created this mess.
For all his quirks, RMS has made points in the past that I'm kicking myself for not taking seriously. I just wish that the FSF wasn't so tied up in minutiae -- it turns people off.
It's a sad fact that the face of proprietary tech is that of a handsome young generic "disruptor", while Open Source / Free Software / Libre / GNU//blah is less-cheerful Steve Wozniak.
This does NOT bode well for the future of humanity. It seems that war is only war when people on your side are dying. Once everything is sufficiently automated, it will be possible to wage war without risking any of your own humans. I don't want to know where that leads.
> Once everything is sufficiently automated it will be possible to wage war without risking any of your humans.
I don't see how that could possibly work. Just because people won't die directly from a weapon anymore doesn't mean they aren't negatively impacted in this hypothetical scenario. I'm thinking of attacks crippling key infrastructure, which could lead to large-scale supply shortages.
How does one protect oneself, or one's loved ones? I feel this may come across as a silly question, but if the escalation continues, I believe I won't be the only one asking it.
1. Why do they not simply take the weapons off a network? Maybe store them on some physical media, or on a computer that isn't networked unless it's time to take a Kim down a notch?
2. Are these "weapons" really that dastardly? Most common ransomware and viruses are easily avoided, and only succeed because of naive users. Backdoors aren't a weapon; they're there on purpose. Sniffing, spying, and logging can potentially cause some chaos. But are these really some kind of Zero Cool-level hydras that they can sink oil tankers with?
So you think this stockpile is mostly viruses/trojans that would target random users and hope it spreads to important systems, or hope there's important systems manned by naive people? These kind of exploits are everywhere, and I'd say the NSA is hardly the biggest threat in that arena.
It's more likely a list of exploits across different devices that give you various levels of access to do as you wish with. Some are probably nothing to worry about, some might be something that gives you the ability to get into the machine and encrypt whole sections.
[0] http://www.cnn.com/2016/12/15/politics/obama-russia-hacking-...