Hacker News
Leaked Emails Show Google Expected Military Drone AI Work to Grow Exponentially (theintercept.com)
535 points by raleighm on June 1, 2018 | hide | past | favorite | 496 comments


I don't see anything wrong with a company of a particular country helping its military. GE, Siemens, Mitsubishi and all the tech giants of that era did so with pride during WWII. Then why can't Google and Facebook do it for the USA?

Perhaps the illusion of globalization is making us perceive Google, Facebook and other international giants as global companies. Although geniuses from around the world have made significant contributions to the success of these technology companies, in the end they are loyal only to profit and the USA. When a war breaks loose, these data monsters are among the best weapons against the opponents.


I have this queasy feeling that the U.S. is killing people who aren't actually taking up arms against it.

If doing so shaves a few percent off the error rate, that's saving innocent lives. But what if it's also becoming an essential part of a process that kills innocent people?

I'd prefer standing against the slaughter.


It is fact, not a feeling. How many of the roughly 1 million people dead in the last 15 years of the "War on Terror" ever directly attacked the US or US citizens?


That’s the problem with preemptive strikes against individuals who aren’t part of a nation that declared war against you: who’s to say they wouldn’t have attacked you? The metrics are impossible, and the government is free to redefine people who were in the wrong place at the wrong time as probably guilty anyway.


Killing someone who, if left alive, might kill you is arguably a worse justification than those used by actual terrorists. It throws away the very pretext of correcting a wrong, perceived or otherwise.

As software developers we should see the problem here with "premature optimization". It's an incredibly inefficient use of resources with limited success in addressing unplanned situations. Sounds a lot like the war on terror.


Have you ever seen an engineer waste hours and days on a premature optimization? I have.

Have you seen the same thing sometimes turn later into remarkable prescience, correctly anticipating some future problem and solving it before it happens? I've seen that too.

Government and policy are rarely about certainty. They're generally about risks and probabilities, anticipated costs and expected values. That's the domain of statesmen and spies, not of people who like clear certainties.


Ever seen Psycho-Pass? It explores some of these notions in greater depth.


> ever directly attacked the US or US citizens?

Why do you draw the line at "directly attacked"? We have laws that make it a crime to conspire to murder. I don't think it's unreasonable to go after people who are conspiring to attack the US or US citizens.


If they're within range of our borders, and people, sure.


Why does the border matter? If somebody across the border is getting ready to fire a cruise missile at me, should I just let it happen?


The point is that very few of these people had any intention, let alone capability, to commit a terrorist act against the US. Most of them were civilians or young men who joined their countries' militaries to defend against an invading force.


1 million dead in 15 years seems a lot higher than any other number I have seen. Do you have a citation for this?


The 1 million number I've seen floating around for a while was just for the Iraq war, not even including the other wars/conflicts we are involved in. https://www.reuters.com/article/us-iraq-deaths-survey/iraq-c...


You should change your reference point. Assuming drone warfare is coming and won't be banned in some way, who would you rather be the world leader in that space: Alibaba and China, or Google and the US?


Uh, that question sounds like your typical US election: no good answers, and there's an implication we're supposed to pick the "least bad" one.

Thank goodness the world is just slightly too big to be manipulated into a two-party deadlock.


You can stand against opioid addiction and the lives it has taken, working to abolish it completely. Or you can work to reduce the harm it does.


Extending this argument, would you be OK if they helped other countries as well? Or just your country? I am trying to see whether you are arguing from a patriotic perspective or saying that there is nothing wrong with military collaboration in general. I don't see why a company should collaborate preferentially with only one country. Either they shouldn't collaborate with any military as a matter of policy, which I don't see as enforceable, or they should be free to collaborate with any country while not sharing confidential data between them. That gets us to corporations truly owning the world.


> Would you be OK if they helped other countries as well?

That would depend on how closely that country's interests and values are aligned with my own. This is what arms export regulations are intended to address.


A "country's interests" are not those of most of its population. Okay, gray and fuzzy area: when most citizens of European countries were enthusiastic when WWI started, does that mean the war was in their interest? So let's add the condition that you are not under the influence of propaganda and group-think and had time to assess and feel the effects of policies. It's still fuzzy though: if you, directly or indirectly, depend on income streams that flow through the military–industrial complex, is war in your interest if you yourself are not negatively impacted?


> I don't see why a company should only collaborate preferentially with one country.

Because if they do, then they avoid the negative outcomes you describe.


> ... after all, they are loyal only to profit and the USA.

You know what, most employees of Google or Facebook would vehemently disagree, including American citizens.

In fact, say anything like that in Google's internal groups and I'm sure some lawyerfolk will descend on you, hissing "Would you please stop talking about what you have zero idea about?" (These days it might even involve HR. After all, telling half of your coworkers that their work amounts to being a fifth column wouldn't go over too well...)


Source please. I've not seen anything stating most tech employees are anti-Maven.

Blind had a poll and it showed the exact opposite, with a strong majority for Maven.


There's a large gap between being OK with your employer having a military project, and believing that the entirety of your multinational corporation is (either morally or legally) bound to one particular nation.

I suspect most of those Googlers who are OK with building a robot for the Pentagon will probably be OK with building another for, say, the German military. (Why not? It's double the profit.)


Can you link the poll you mention?


> all tech giants of that era did that with pride during WWII

During WWII the US was freeing the world from oppression; now it's seen more as the oppressor. The side you're on is pretty important (as it was during WWII).

This is even more true in the case of drone operations and so-called "AI" assisted actions, where operators sitting safely in air-conditioned facilities in the California desert can decide to remotely kill people with no risk to themselves.

I very much understand why a normal Google employee would prefer to have no part in that. Google understands it too, since it actively tried to keep the deal secret and minimized its impact when it became known.


> The side you're on is pretty important

So, generally and from what I understand about US drone operations, these strikes target people like ISIS and other Islamic extremists. Following this, in a general sense, the sides are the US and terrorist religious extremists.

The way you frame your point makes it sound like you're on one side or the other. If that is the case, I would much rather be on the side of the US than the extremists who murder people for arbitrary reasons.

If that is not the way you meant for it to be framed, please correct me.

> This is even more true in the case of drone operations and so-called "AI" assisted actions, where operators sitting safely in air-conditioned facilities in the California desert can decide to remotely kill people with no risk to themselves.

Why does the risk to the warfighter matter? Would you rather they send infantry to go and assault the target? Why wouldn't you want to minimize the risk to the soldiers in the field by using tech to hit a target rather than men on the ground?


>So, generally and from what I understand about US drone operations, these strikes target people like ISIS and other Islamic extremists.

While that is the official reason/target, the question of who actually gets killed in those drone strikes is a contentious issue:

https://www.theguardian.com/us-news/2014/nov/24/-sp-us-drone...

https://www.washingtontimes.com/news/2015/oct/15/90-of-peopl...


Oh, I don't deny that the government royally screws up drone strikes sometimes. It is terrible that that occurs.


It's unlikely I can convince you of anything regarding your two questions, but... if everything is so clear-cut and right and wrong are easy to tell apart, why is this an issue with Google (trying to hide it) and Google employees (protesting/quitting)?


> if everything is so clear-cut and right and wrong are easy to tell apart

I would never in my life assert that these issues, ethically, are simple.

I think that Google trying to hide this information is connected to its image and culture more than anything else.

From an outsider looking in, it seems that Google has this hyper-progressive culture that thousands of people live and work in. So the company helping the US military get better at killing people with flying robots clashes with the ideals that are expressed by the company and are part of its culture.

Is your question related to my comments on 'sides'? Your original comment is what brought up sides. When you think about these issues, what do you mean by sides?

You just say "the side you're on is pretty important". You use WWII to try to bolster this claim, but that doesn't mean anything to me in this context. I'm trying to understand what you mean in your comment.


IBM also did that for the Germans in WW2; perhaps it's better for companies to stay politically neutral?

"without IBM's machinery, continuing upkeep and service, as well as the supply of punch cards, whether located on-site or off-site, Hitler's camps could have never managed the numbers they did." (This is the opinion of the writer of "IBM and the Holocaust" [0])

I bet Google is not looking forward to going down in history as:

"without Google's AI, continuing upkeep and service, as well as the supply of TPUs, whether located on-site or off-site, the US's drones could have never managed the numbers they did."

[0] https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


GE, Siemens, Mitsubishi, etc. have nowhere near the amount of power that Google has over our lives.

Google is a different beast altogether. Recent events have shown that every aspect of our lives can be controlled by data.

Imagine a situation where Google can supply weapons to support human rights violations and simultaneously keep such information from reaching the masses.

Misinforming just 30% of the US population has brought people like Trump to power.


The issue is taking engineers who likely NEVER signed up for military work ("that's not in my job description!") at your company and throwing them at military contracts. If Google wants to pursue this, it should either hire experienced government contractors (who have the benefit of existing government clearances), or Alphabet should spin up (whether by buying or creating/funding) a government contractor focused on this sort of thing. It would likely yield better results given the types of resources Google has available. But don't just expect someone who's never done government work, and might not want to, to roll over and do it if you force it upon them.

Disclaimer: I work as a government contractor unrelated to Google, but I chose the type of company to work for based on the TYPE of work they do. If I don't wanna build kill drones, I don't work for companies known to do weapons or related engineering.


It's wrong because we don't want to one day wake up with autonomous drones flying over our heads that can kill us. Upload a video to YouTube with the wrong ideology or a keyword that triggers some AI to flag your account? Get ready to be fragged as soon as you step outside. I'd like to opt out, please.


Regardless of whether Google works with the US military on this, or if another company works with the US military on this, or if NO outside company works with the US military on this, or even if the US military doesn't work on it at all, autonomous AI powered weapons are happening anyways. We can't put them back in the box. The prosperity that technology brings us comes with a cost. Weaponizing technology will always be a part of humanity.

The only question for someone in the US should be "would I rather the US (and its allies) develop these weapons, or China develop these weapons, or Russia develop these weapons?".


The US's developing them -- and especially the US's being the first to develop them -- makes it at least slightly more likely that Russia and China will develop them.


My point was that Russia and China (and France, the UK, and many other nations) are going to develop them anyways, regardless of whether the US develops them.

Anyone who likes living in a world where the US has military superiority over Russia and China (and every other nation) should support the US military, Google, and any other US company developing these weapons.


The algorithms that take videos down due to "copyright" are already quite stupid; imagine them misclassifying your video as a terrorist video (or... an anti-Donald-Trump, therefore anti-government, one...)


I think the reason this is a problem is something you stated: Google brands itself as a global company. I’m an American, and while I acknowledge the density of intelligent people who work at Google, the fact that they ostensibly have “global interests” makes me concerned about whether they could do this work safely. Please do not interpret that as some jingoistic platitude. That’s just...reality. While the “company” may be loyal to the US, the people who work on these kinds of projects would need to be highly scrutinized.


Similarly, how would people feel if Google, as a global company, was developing drone tech for other countries?


Better example - how would all these American Googlers feel if Google started selling drone tech to a country that then began drone striking Americans, e.g. at sea?


Euhm. Google, as demonstrated at Maker Faire, is making drone tech that has plenty of military applications: Google Maps/Earth.

https://www.youtube.com/watch?v=F5jj6gBKGK8


I can 100% guarantee that no one will be able to work on Maven without getting a TS/SCI clearance. The process of getting one entails a high level of scrutiny.


Yes, I’m sure as well. But the “global” nature of the company still exposes even those who have SCI (or higher) clearance to risk.


I kind of agree actually. Humanity has always been at conflict with each other and will be for the foreseeable future.

But for the first time in history we have the tools to conduct intense warfare yet limit collateral damage and minimize cost in both human and material resources.

The engineers who worked on precision-strike weapons/smart bombs should be applauded instead of ostracized; in the past we'd just carpet bomb a whole city to get rid of enemies, along with innocent civilians.

Obviously perpetual world peace without the need for weapons would be ideal, but I'm afraid I won't live long enough to see that day, so I'm totally fine with a more pragmatic approach of making war no more costly than necessary.

If drones powered by AI can achieve that, I'm all for it.


I have mixed feelings here. On the one hand, you're correct about reduction of collateral damage.

On the other hand, weapons of old were used as part of actual wars. Precise weaponry seems to be making it too easy to kill off people without starting a war - like the US is doing with its drone strikes - which means it'll happen more. Also, I fear that the limit of precision strike weapons is being able to swiftly kill an arbitrary target anywhere, for cheap - which opens the way for ethnic cleansing, killing off political opponents, people harboring thoughts inconvenient to powerful people, etc.

Ultimately, I think precision weapons are better than blunt ones, but their development needs to happen in lock step with development of protection technologies - mostly social technologies, that would ensure those weapons can't be arbitrarily used by random politicians or countries without consequence. Unfortunately, it seems that in the last decades we've been losing social technologies - like the rule of law and trust in it.


If it "will" happen more, then why are we seeing the opposite? The number of deaths due to military conflicts has been systematically declining, and drone strikes have been with us for a decade. Loosely defined, you could say they've been with us since the first precision-guided missile struck (1968).

And of course there's the real can of worms that most technologies have military applications and can often even be directly weaponized. Drones being an example by themselves, but of course every source of energy is another obvious one. Add most medical discoveries, or at the very least every advance in genetics. Furthermore, say a better hashmap also helps guided munitions work faster and more precisely ...

The problem with this argument is that it is self-contradictory when you analyze it thoroughly. Furthermore, reality seems to confirm that this is contradictory.


Hmm... that's a good point. A "safer" weapon may provide more incentive for wielders to use it, and less restraint may be exercised.

I guess everything is a double edged sword after all.


> I fear that the limit of precision strike weapons is being able to swiftly kill an arbitrary target anywhere, for cheap - which opens the way for ethnic cleansing, killing off political opponents, people harboring thoughts inconvenient to powerful people, etc.

This is a very good point. I agree with you. Preventing this requires people to be involved in our democracy. As a group, we need to elect people into office who have the proper moral and ethical values that empower them to hold the military accountable for how it uses these weapons. If that is the case, I believe it would be EXTREMELY hard for the military to misuse these weapons.


> Humanity has always been at conflict with each other and will be for the foreseeable future.

This could be used as an excuse for pretty much anything.

> But for the first time in history we have the tools to conduct intense warfare yet limit collateral damages and minimize cost in both human and material resources.

And yet the invasion of Iraq was a tragedy on a mass scale. Also, by this logic, should we supply dictators like Assad (president of Syria) with drones and smart bombs next time there is a civil war, so they can reduce the collateral damage of oppressing their own people?

If only Al Qaeda had had smart bombs. Then collateral damage would have been minimized /sarcasm /blackhumor

> The engineers who worked on precision strike weapons/smart bombs should be applauded instead of ostracized, in the past we'd just carpet bomb the whole city to get rid of enemies, along with innocent civilians.

Killing the civilians was the point of the bombing. Breaking the morale of the populace is an important strategy in total war (Such as WW2).

Also by this logic should we applaud white collar crooks for innovating to commit robbery without physical violence?


>Also by this logic should we applaud white collar crooks for innovating to commit robbery without physical violence?

All robbery is immoral, but not all warfare is. Imagine if we could have precision-struck Hitler with a laser-guided bomb instead of having to carpet bomb Dresden.


> Humanity has always been at conflict with each other and will be for the foreseeable future.

well, it's been in decline for a while now:

https://ourworldindata.org/slides/war-and-violence/


I don't think WW2 and the "wars" of the last 20 years are a fair comparison.

The recent wars are highly controversial and this is clearly recognised and protested by people working in SV.


The outcome of wars determines how protests are seen. Do you think there were no large protests against attacking the Nazis in WWII? Anti-war people are anti-war. They have as little concept of just wars versus unjust wars as the people they accuse of warmongering.

https://en.wikipedia.org/wiki/Opposition_to_World_War_II

https://www.youtube.com/watch?v=KrhjmT4Zg5I


Thanks for calling this out.

There was fierce opposition to getting involved in all the wars that we now look back on as "just" - WW2 is the best example.


I think if we were currently fighting the 3rd Reich people might be more for it. But currently the USA has a massive military for no purpose, which is often used to conduct evil, and thus some people do not want to aid it.


I'd argue that the purpose is to act as a deterrent against countries that might otherwise think of invading their neighbors, or things along those lines, but it's reasonable to think that the cost of such a large military is not worth the benefits.

It's also one of the US's largest employers.


And one of our largest welfare programs. It's pretty easy for an 18 year old with no marketable skills to join the military and end up with some skills and a free college education.


> ... which is often used to conduct evil, ...

I am curious. In your eyes, what evil has the US military done?



What is your opinion of the work that IBM did for the Germans in WW2?


I don't have anything against tech companies helping the military, that's their choice. I wouldn't want to build that stuff, but others can make that choice.

However, AI weapons are something I strongly condemn. They raise many unsolved ethical questions, such as: if they kill civilians, who committed the war crime? How do you make sure they don't kill civilians?

If Google were revealed to be developing biological weapons or chemical weapons, I suspect you would have a different opinion.


I think it's because of the idealism innate in computing. Those young aspiring minds dream about making a positive change in the world, studying day and night, sacrificing pleasure for knowledge and capability, and at some point somebody will show them a video of a person blown into pieces and tell them: "Congratulations! This is the main fruit of your life. Well done!" - and moments later - "Why do you complain? You signed a contract, so shut up!"


> When a war breaks loose, these data monsters are one of the best weapons against the opponents.

War and military research throughout history have done many good things for quality of life, even back to the days of Archimedes fearing the Roman invasions, when it was more about defense than offense. This here ARPA/DARPA decentralized network of the internet is one of those good things, along with self-driving cars, wifi, even microwave ovens coming from radar research... and many others.

But what could possibly go wrong when military/war artificial intelligence is combined with all your data and knows everything about you? We have seen Terminator and heard about Skynet. What if what it knows about you is wrong or spoofed?

The ramifications could be much less than worst-case Skynet, but when you plainly see the way YouTube automation behaves even just for copyright, you gotta ask yourself: will it get everything right? Will YOU be deemed an opponent and a 'bad guy'? When the machine does it, there is also a level of liability taken away.

Defensive military/war tools are one thing, and lots of good has come from bad throughout history, but when defense is pre-emptive offensive action like drones, the line gets fuzzy. Are we Archimedes, or the Roman soldier who killed him?


> But what could possibly go wrong when military/war artificial intelligence is combined with all your data and knows everything about you?

Someone made a video about it:

https://www.youtube.com/watch?v=TlO2gcs1YvM

Dailymotion (French company) mirror:

https://www.dailymotion.com/video/x6kttmw


Nothing wrong with running their emails or servers but helping it identify targets with AI brings it very close to everyone's nightmare scenario of fully autonomous killer drones.

I don't know the details; probably everything will be verified by a human (for now), and maybe we're talking only structures and patterns instead of persons.

But things may evolve quickly. AI can already drive a car and diagnose some illnesses better than humans can (there was a recent article about AI detecting melanoma).

The step to "decide who to kill better than a human" is awfully close. Personally I'm not against it all if better really means better. It's just crazy that cutesy Google with their maps and emails are now doing this.

It's shocking enough that the Intercept would pounce on it. If it were Raytheon, nobody would care.

I wonder if they'll update their terms & conditions. "Any selfies may be used to kill you". Al Qaeda better (not) be using Google Photos.


During WWII Siemens owned a factory in Auschwitz and manufactured electrical parts for concentration camps and death camps.


Others doing it doesn't make it morally any better, though others thought like you do... like all the Axis powers doing their fair share of deportations and gulags.


They are multinational companies. If they want to move HQ to another country, they can do it any day. Happens all the time.

It is question of ethics and values. Plenty of companies and people were totally fine profiting from holocaust.

The US is killing thousands of people, with no trial, in countries we are not at war with.

In my opinion everyone in the world should have the same rights as us. Otherwise it is just hypocrisy and we live a huge lie.

How would anyone in America feel if somebody was killed without trial by a foreign nation?

Also, this behaviour is completely self-defeating long term. Military-wise, nukes make everyone equal. Are we planning to keep nukes out of the hands of our enemies for centuries, or are we just planning to kill everyone who is against our policies?


> helping their military

Do you mean helping in the sense FB helps us socialize and Google helps us search?

Helping without expectation of reward and incentivizing further business?

This is not pro bono work from the looks of it, so profit is expected, along with a good place in future competition with other contractors.

Perhaps I am nitpicking but "help" doesn't belong here.


Maybe Google isn't literally a global company, but I doubt that it wants only US business.


It is not necessarily a question of "right" and "wrong" but what are the consequences of people seeing Google as a weapon as opposed to an information service?

If the whole world begins to see Google as a weapon then they will take defensive measures.


Globalization does make Google a global company. Some of its employees have had friends and relatives in the path of US-operated drone strikes.


Maybe if there were just one US AI company it would be OK, but Google is a worldwide company that has its tentacles in many things.


The difference is that the wars nowadays are less about ideals and more about profit. Yes, Muslim extremism is bad and those people are bastards, but the military isn't fighting them effectively, and there is no way to win this war; like the "war" on drugs, it's a self-reinforcing, endless cycle kind of thing.

WW2 was different; the nazis had invaded most of Europe, were committing genocide, etc.


Because it doesn't fucking rhyme with "don't be evil".


Google is a multi-national company. Why should it have allegiance to the US? Just because it was founded and has headquarters somewhere doesn't mean that it needs to be loyal to that place. Look at how quickly Banco Sabadell and CaixaBank moved out of Catalonia after the separatist movement started causing ructions. Do you think that staying on Trump's side is in the best interest of a multi-national?

Don't forget, there's another side to building weapons: the better your weapons, the lower your threshold for starting a war. AI weapons are especially dangerous: they permit politicians to start wars without risking citizens' lives, thus reducing the natural incentive in democracies to avoid war. Now put that power in the hands of Trump. Does that sound good?


April 2004: We will machine-read your e-mails so that we can show you banners more efficiently. No funny business, we promise. We also collect all kinds of data about you, even offline data, but no biggie.

April 2018: We developed all that technology by reading your e-mails and analysing your daily life, now we will make military gear with all the expertise we know about the world and the human population so that political actors can kill more efficiently. No biggie.

PS: We control the information flow; we can start and stop political conflicts by selectively displaying information, but we would totally not use that to sell military gear. Oh, we also no longer claim to be not evil, but no biggie. We also have established political ties and spend tons of money on lobbying, but we would definitely not extract wealth by starting political conflicts and selling gear designed to contain those specific conflicts to the people in power that we spend a lot of money to stay in touch with. No biggie.


> PS: We control the information flow, we can start and stop political conflicts by selectively displaying information [..]

I see that claim thrown around a lot but nothing ever backs that up.


It's mostly a deduction. The hypothesis is that if you can control what information is available you can affect the opinions and feelings of the people who receive that information.

There's a study conducted by Facebook about controlling emotions by means of content curation. If Facebook can do it, Google too can do it: https://www.theguardian.com/technology/2014/jun/29/facebook-...

Google is the dominant search engine and these days they even host the content through AMP and they own the most popular web browser and mobile computers operating system. They have not only the means to reach people and curate the content generated by other people, they can also change the content created by other people(AMP pages) and disguise their actions(since they control the app that displays the content).

Let's travel in time and go back to the WW2 days and mention Hitler as any proper online political discussion would eventually do.

Don't you think it would have been very convenient for Hitler if he had had a way to show "how awful Jews are"? Wouldn't he have loved it if Germans' search results about the economic crisis or gas prices displayed how "greedy Jewish bankers are destroying the lives of honest German workers", or any other political or cultural Nazi propaganda?

Unless we switch to cryptographically signing all the content we create, learn to distrust any content that's not signed, and have a way to verify that the curation of signed content is not politically influenced, giants like Google and Facebook will have the power to make or break countries, start or stop lynchings, or even genocide.

There's a reason why China doesn't allow search results about Tiananmen Square protests of 1989 and all the governments are trying to find a way to control the information flow.


> I don't see anything wrong for a company of a particular country helping their military. GE, Siemens, Mitsubishi and all tech giants of that era did that with pride during WWII. Then why can't Google, Facebook for USA?

Companies that supported the Holocaust of Poles and Jews did nothing wrong, because they were helping their military?


so you want Google to do it with pride in WWIII?


This is not WWII


Like Porsche or Siemens did for the Nazis, for example. Nothing wrong there. I wonder how you would feel and what you would say if you were on the other end of these weapons, e.g. an innocent Afghan civilian.


So, Siemens also built the deep packet inspection system for Iran (in a joint venture together with Nokia) and was accused of complicity in human rights violations (by Amnesty International, among other organisations).

After being called out, both companies posted an apology/explanation on their websites, where they claimed they didn't know that building a deep packet inspection system for Iran would result in human rights violations.

After being developed for Iran, the system was also sold to Egypt. This is one reason the Black Mirror-esque tracking/surveillance situation in China should scare everyone: that system now exists. Consider how inefficient government IT projects are when they have to be built, compared to when they can simply be bought. I don't know whether China would sell it, and deploying it elsewhere would be no small task since much of it is probably specific to China, but its mere existence is scary as hell.

But I digress. So you say they helped their military with pride during WWII (what side was Siemens on, btw?)

Is there really nothing wrong with Siemens developing DPI infrastructure for a country known for its human rights abuses? (Siemens' claim not to have known what it would be used for is either a bullshit excuse or a dangerous lack of foresight.)

So is there also really nothing wrong with Google developing AI for flying military robot plane-missile-things for this other country that is also known for human rights abuses? (just not on its own citizens, usually)

Anyway, my point is, it doesn't matter whether a corporation supports its own military/country when developing technology that comes with strong ethical questions (which is the case for military drones, or DPI).

Google should consider their own choice in the light of whether this technology will be used for good things, not just whether the military of the particular country they're developing it for considers it a good thing, because usually the military considers whatever grants them more power to be good things.

Google should also consider that there is no "when a war breaks loose", that's not how the world works today. Besides, the USA claims to be currently "at war" anyway.

So when they develop modern weapons for the US military, they should consider that in the light of what the US military is doing right now and what it has been doing recently, not in the light of them taking the noble high ground in a hypothetical future war situation that conveniently has a clear "good side" and a clear "bad side".

Because before that hypothetical future war will happen those weapons WILL have been used for whatever the US military pleases currently. And that's not always good things.


"I don't see anything wrong"

Maybe you will if you substitute for 'military' a description of its historic contributions to mankind: "I don't see anything wrong for a company of a particular country helping «an organization that is specialized in killing and very likely to perform torture, rape and genocide»"


Hell of a comparison. “Why can’t they?,” you ask? I’m not saying they can’t, but for a start, how about the fact that we are no longer fighting a world war?


And do you know what it is that really upholds the peace we all love?


Is it arming Saudis to create famine in Yemen?

Or is it that police can drop literal bombs on Black neighborhoods and few will ever know about it?[1]

Or is it that when I'm in the US, I'm more likely to be murdered by police than to be murdered by anyone while in Japan?

Or is it that Chicago has a murder rate 5x that of Somalia?

Or WMD lies? Gulf of Tonkin lies?

You may love this, but does everyone?

Wolf Blitzer said he does: https://theintercept.com/2016/09/09/wolf-blitzer-is-worried-...

1 - https://www.npr.org/sections/codeswitch/2015/05/18/407665820...


Username checks out. So social media and search are guarantors of world peace? From a non-US perspective, it looks more like SIGINT against allies by treaty.

But hey, it's a private company, no harm, no foul.


What are you talking about? Google is/was not contemplating offering social media and search services as part of Maven. The subthread op asked, why can't a company help its country's military?


What in God’s name are you asking? You seem confused as to the nature of my comment, which was merely pointing out the massive difference in context between WWII and today.


So your contention is that you have to be fighting a world war in order to help your military? (The post you responded to) There's no other reason to help your military? Really?

Edit: this came out a bit snide so let me elaborate. There are plenty of reasons you may want to help your military when there isn't a world war going on. You can do it for patriotism. You can do it for the economics. You can do it because military applications of technology are actually pretty cool and can lead to more commercial inventions. Or, to the point in my original response, you can do it because you understand deep down that the threat of military force is what underpins the overall fairly secure world that we live in.

Look at the military maneuvers and carrier group movements every time two countries get nervous around each other. Despite (or maybe because of) evolution, we're still just a step away from clubbing our neighbor (or being clubbed) for resources. So yes, a strong military helps support the country Google was founded in and thrived in, helps Google and every other US company be successful, and makes possible the salaries we all make. So there are definitely incentives for those companies to want to make sure their military is properly supported. And no, I'm not a military man.


Yeah, it's fairly obvious you aren't a military man; thank you for saying it explicitly.

The core of my issue with your points is this belief:

> the threat of military force is what underpins the overall fairly secure world that we live in

That hasn't been true since some time after the first Gulf War. Asymmetrical warfare (good old guerrilla warfare/"terrorism", cyberespionage...) isn't an operational theater in which carrier groups offer decisive advantages.

The threat of vitrifying entire valleys hasn't really deterred ideological opponents of the US military and its economic or strategic partners.

Basically, you're arguing for a variant of the irresistible force (strong traditional military capabilities) while ignoring why the metaphorical immovable object adapts to resist traditional military pressure.

Ask the soviets how Afghanistan went for them despite their vastly superior military capabilities.


"There's no other reason to help your military?" - I would say that there is indeed no reason to help the military in any way. We would all be a lot better off if the military were disbanded and people focused on things other than killing each other. Take it a step further and ban guns/weapons in general, both ownership and production (including for state actors), and we would be one step closer to world peace.


Around here it's often capitalism that gets held up as the poster child for world peace.


> I don't see anything wrong for a company of a particular country helping their military

Sure, it’s fine but let’s call a spade a spade. Will Google update their global T&Cs to say “we are an American company serving American national interests and by using our services you agree to your data being used for that purpose”? Of course they won’t!


There ought to be a service that re-writes T&C and privacy policies in the most straightforward manner.


Of course there is! https://tosdr.org/


These are all highly opinionated statements, but:

- If you're American, you should support Google developing technology for the US military.

- If you're human, you should support a foreign policy that only considers military action as a last resort.

- If you're human, you should support early, aggressive, global regulation of the military applications of AI and drones. We made this mistake with nuclear. We leveled Hiroshima, we leveled Nagasaki, we fueled decades of animosity between the Soviets and the West that still plagues the world today. We need to do better next time. The biggest abuser might be the US again but it could just as easily be China.

- If you're human, you should support regulation that forbids Google from using your personal data for military applications without your express and informed consent.

Substitute any other tech giant's name for Google.


> ... you should support early, aggressive, global regulation of the military applications of AI and drones.

The history of global regulations on the military application of useful military technologies is not too impressive. The second a shooting war breaks out those rules will be forgotten, giving a huge head-start to anyone shady enough to ignore them wholesale.

"AI" has been used in war since WWII. Strapping an IED onto a quadcopter isn't a shocking idea, and the economics of 1000+ stealthy killer UAVs vs. one not-so-stealthy navy ship are pretty undeniable over time...

AIs and Drones are inevitably coming to warfare -- they're here already. Economic unions that make warfare unthinkable may be a better long term bet (thanks, Trump!).

> We leveled Hiroshima, we leveled Nagasaki, we fueled decades of animosity between the Soviets and the West that still plagues the world today

Mmmm, that animosity was not created by bombing two Japanese cities, among many, with a new weapon type. That animosity was a freight train that could have kept WWII going another decade, and there was a real problem of how that war might have looked for our side had a resurgent Russia invaded Japan early on. They took Manchuria just days after the second bomb, IIRC, so their threat was far from hypothetical.

Those bombs were sending a clear and distinct message to the Soviets. It worked. I'd rather have animosity than a nuclear WWIII.


> The history of global regulations on the military application of useful military technologies is not too impressive.

I don't think that's true at all. Since WWII we've been quite successful at reducing the use and proliferation of destructive military technologies through treaties. Certainly they don't work 100% of the time and they work in tandem with other forces but they've been a key part of the solution. See https://www.armscontrol.org/treaties


> The history of global regulations on the military application of useful military technologies is not too impressive.

So far we've been able to prevent a lot of use of bio weapons. Many have said they want to do the same with AI and have already started doing that.


> - If you're American, you should support Google developing technology for the US military.

Why? Americans are not humans?


I meant to imply that Americans (or citizens of any nation) have certain interests that not all humans share, but in general, we're a lot closer than we think.


That still assumes that all Americans are American Nationalists, i.e. they prefer the prosperity of people living in America at the expense of others. I don't think that's necessarily true and I definitely don't think they "should" believe that.


Because it is a conflict of interest.

The public has put their trust into Google for 20 years. We supported them. We cheered them on. We have voluntarily given them all our personal data. Our emails. Our instant messages. Our voices. All of it feeds into their massive database for them to mine.

We put our faith into them, in the hopes that they would honor their creed, that they would do no evil.

But gradually, that facade has faded away. Their accumulation of personal data, beyond what is necessary for them to perform their work. And now this. The utilization of their wealth and knowledge, to push forward the state-of-the-art in the weaponization of Artificial Intelligence.

There is nothing that will stop this. The military will find a defense contractor to do their bidding. But Google can voluntarily elect to remove themselves from this sector. Let some other company do it. Let someone else figure it out. The blood of dead innocent civilians will be on their hands. You don't see Raytheon, Boeing, or Northrop crying about it.

By doing the military's bidding, Google is now far scarier than Microsoft ever was. For those who don't know, when they coined the motto "Don't be evil", they meant it in relation to the Microsoft Borg of the 1980s and 90s. Microsoft now seems like a sheep in comparison to Google.

I think it's time to disconnect Google from our lives. But, it's hopeless. They are everywhere.


> There is nothing that will stop this. The military will find a defense contractor to do their bidding. But Google can voluntarily elect to remove themselves from this sector. Let some other company do it. Let someone else figure it out. The blood of dead innocent civilians will be on their hands. You don't see Raytheon, Boeing, or Northrop crying about it.

I work at Google but opinions are my own.

This is actually the thing I don't understand. Could you or someone please kindly explain it to me?

What is "evil" about working with the military or developing weapons?

I've seen "Google is betraying its 'don't be evil' motto by working with the military" stated as fact so many times, but I've yet to get a better explanation for why it's actually evil besides some variation of "the military is bad".


Because they oppose the drone warfare program, which is killing noncombatants in nominally allied countries.

https://m.huffpost.com/us/entry/us_561fafe2e4b028dd7ea6c4ff

https://www.theatlantic.com/politics/archive/2016/03/the-oba...

It’s not the military that is evil, it’s this particular program in the eyes of many.

http://www.bbc.com/news/world-us-canada-24557333

Disclaimer: I am also fortunate to work at Google and my opinions are my own.


>Because they oppose the drone warfare program, which is killing noncombatants in nominally allied countries.

I don't understand this logic. Drones are a weapon. They are not any more evil than any other weapon that the military uses. They aren't killing people autonomously. There is a human pilot behind every death. The pilot just happens to be hundreds or thousands of miles away from the aircraft. Would we be fine with the military killing the exact same people if the pilot was simply on the aircraft that launched the missiles or bombs?

The distinct advantage of drones is that they safeguard the lives of military personnel by not putting them in harm's way. You can argue that this makes using military force politically easier, but I think the blame for that again falls on the military/political leadership and not on the drones themselves.

If your problem is with killing people, it should be with the people who ordered the killing and not the specific weapon that was used to carry out the killing.


"If your problem is with killing people, it should be with the people who ordered the killing and not the specific weapon that was used to carry out the killing."

Drone strikes, as they are used, are a more evil way of taking out identified targets of other nationalities. I find the concept of a nation state summarily assassinating people a bit troubling, but drone strikes make it even worse by making it imprecise and cheap.

Drone strikes without on the ground operators create huge collateral damage [0].

When operating in a civilian area, it's far less precise to handle the targeting based on a video stream and to send in a hellfire missile rather than have on the ground operators identify the target and pull the trigger.

Effectively, they are using civilian casualties as credits to buy military lives and convenient, rapid force projection. Any calculus that accepts innocent lives as collateral, when the other options are merely more expensive or difficult, can be considered evil.

[0] https://en.wikipedia.org/wiki/Civilian_casualties_from_U.S._...


Do you think that if part of Google's work were to make it more precise, would that be less evil?


If they could make sure they are killing only the guy they wanted, and act on that using pinpoint precision?

Uh, actually, now that you mentioned it, that would be even more scary.

That would be a totally consequence-free assassination device. At least the civilian casualties put some lid on the tendency to wantonly fly around in glorified RC planes murdering people.

The way I see it, if you are intending to assassinate a person (i.e., kill them outside of a military engagement), you could at least have the decency to have operators on the ground, within visual range of the target, pull the trigger. That's skin in the game. That's proper balancing of the calculus.

When you remove the chance of your own guys getting hurt, you are lowering the threshold to apply deadly force.

The whole point of due process and so on is to guarantee human rights for all and more importantly, put checks and balances on the actions of the political leaders.

When it suddenly becomes "ok" to kill people in automated fashion around the globe, that's a serious threat to all civil liberties regardless if one is a US citizen or not.

I think a free and open society is the only reason US has any credibility in claiming a moral high ground and things like eliminating targets on the disposition matrix [0] without due process erodes that high ground quite effectively.

[0] https://en.wikipedia.org/wiki/Disposition_Matrix


Every military technology from bows and arrows to GPS is doing exactly what you are objecting to by lowering the danger to the attacker and increasing the success of a potential attack. Drones are an incremental step in that process. It just seems strange to me to draw the specific line at drones when we are seemingly fine with other advancements that are almost identical in effect like stealth technology. Once again, my argument is not that drones are good. My argument is that drones are not inherently more evil than other weapons.

I completely agree with your last three paragraphs. But once again, none of that is specific to drones. It is true whether the trigger is pulled by special forces soldier from feet away, from an manned aircraft thousands of feet in the air, or a pilot in trailer hundreds of miles away.


I accept a global superpower sometimes assassinates people, and perhaps sometimes for good reasons.

My main opposition is to the factory like efficiency in which these extra-judicial executions are carried out by the drone pilots in a cost effective manner.

It gives me associations with other human organizations in the past which have carried out their mission with cruel detachment and efficiency.

I can't really put my finger on it, but it's just something I feel Stalin or Himmler or Robespierre would have totally agreed with.

Automated elimination of the enemies of the state? Wonderful!


I used to feel similarly about guided missiles during Operation Allied Force (when the US inadvertently struck the Chinese embassy in Yugoslavia).

My opinion on the topic has not translated into a reduction in automated warfare as a policy. On the plus side, independent of my opinion, technology has moved forward so that given similar circumstances, the embassy would at least have a chance that a drone pilot would realize during visual confirmation that the target has a big Chinese flag out front.


> Drone strikes, as they are used, are a more evil way of taking out identified targets of other nationalities. I find the concept of a nation state summarily assassinating people a bit troubling, but drone strikes make it even worse by making it imprecise and cheap.

There's no significant difference between a drone strike and a conventional airstrike by a manned aircraft, at least when it comes to targeting, precision, etc.

Drone-strikes might even be better than conventional airstrikes because the pilot may have less workload (therefore more focus) during the strike part, meaning fewer errors and mistakes.


> There's no significant difference between a drone-strike and a conventional airstrike by a manned aircraft, at lest when it comes to targeting, precision, etc.

But there's a significant difference in the sense that drones are the only way these strikes are conducted. Manned strikes would be more risky, since they expose a pilot, and less practical because manned aircraft can't be "parked" over a target for anywhere near as long as a drone. And whatever the legality of the situation, the political reality is that manned strikes look substantially more like an act of war.

So the difference is that many of these strikes wouldn't happen without drones. It's not about how the bomb is delivered, it's about whether we're assassinating people without a declaration of war in the first place.


Sounds like your issue is with the current political system not with drones. Direct your anger to the appropriate places.


I'm not angry at drones, any more than someone who opposed the V2 bombing of London was angry at rockets.

I have been against politicians who support this, though every presidential nominee for at least 12 years has supported it. But political opposition isn't limited to the voting booth. If a horrible act can be blocked by means reduction then dissuading people from enabling the means is a way to change politics.


Drones have the ability to be more precise than a typical forward air controller plus manned airplane as a drone can be above a target for much longer and that long duration video surveillance gives the opportunity to find the best time for a strike and to verify that it is the target.

You claim that a person on the ground is going to get better identification, but they need to be hidden rather far away so that they aren't noticed and aren't hit. This reduces the typically achievable accuracy substantially from the theoretical maximum. Manned aircraft can't stay on target because of refueling needs, speed, and how loud they are, so you have very narrow windows of opportunity. Most ordnance dropped from manned aircraft is also much larger than that used by drones, so manned aircraft typically have a larger blast radius and can kill many more unintended people.

So while there can be narrow, specific situations where a FAC + F-35 delivers more precision, overall a drone provides much more precision and less danger to others.

This is obviously an emotional situation and many people don't believe that it is valid to kill specific individuals, as many others have claimed on this thread.


Sorry, I was imprecise. I did not mean it would be better to use conventional aircraft. I meant to claim that it would be better to use a human with a gun to do the assassination - at least when operating in an area with potential for civilian casualties. Use of heavy ordnance in an area with civilians should have no excuses if there is the alternative to use human operators.

I don't care how far up the rank of Isis or Al Kaida the person targeted is. Risking civilian casualties should not be acceptable.

Combat in a war and an assassination are two different things; I was considering an assassination, which by nature should be fine-grained.


So the people who build the Death Star are innocent? The guilty don’t feel guilty, they learn not to.

I wouldn’t personally want to work in weapons development, but I certainly wouldn’t want to work in weapons development for a military that as a result of its offensive use of its weapons is making children in the third world afraid of blue skies.

But I have a gmail and I use google maps, so I’m not really in a position to judge.


> The guilty don’t feel guilty, they learn not to.

Along with the usual "If I didn't do it, someone else would".


Also known as 'the ultimate cop-out'


Death Stars don't kill people. People kill people.


... ideally with Death Stars, millions at a time :D


I know you are being a little facetious with the Death Star line, but I think it bears a distinction, because the Death Star would be a WMD and part of its intended purpose is to kill civilians. Likewise, in our world, developing indiscriminately killing WMDs like bio-weapons deserves extra scrutiny. Drones are no more indiscriminate killing machines than manned aircraft. If they are used indiscriminately, the problem lies with the leadership ordering their use in that manner.

My overall point is that drones are not a unique class of weapon. You should either have a problem with Google working on a weapons program or not. Working on drones is no worse than working on manned aircraft.


Its “purpose” is to protect the empire - it is a tool.

It’s intended as a deterrent.

Repeat indefinitely for all “tools”

(There’s an entire subreddit dedicated to it. The empire did nothing wrong)

You can always justify making a bigger, better gun or “tool”.

Don’t make tools you wouldn’t want used against you.

Tech engineers in particular are more than normally exposed to the ideas of weapons and responsibility of creators for their creations.

Dodging that is the opposite of being a coder/nerd.


Or, perhaps, the optimal survival strategy of a nation-state is to make the thing you don't want used against you until it becomes so onerous that international standards move to ban it.

Gas warfare was banned after WWI, but during WWI, both allies and central powers used it before it was banned. Nuclear weapons are banned... After the Allies used them twice.

International bans on intolerable weapons are a great idea. Unilateral disarmament of a useful weapon with no international ban may be suicidal.


Drones can be produced and deployed in higher quantities than aircraft, and with AI you won't necessarily need one man behind each drone. So yes, it is potentially worse than aircraft, for the same reason a police/army of bots could potentially be much worse than a "conventional" one.


>Drones can be produced and deployed in higher quantities than aircraft

Once again that is a political issue. If someone finds a way to suddenly reduce the cost of bullets by 50%, the response from the military shouldn’t be to buy twice the number of bullets.

>with AI you won't necessarily need one man behind each drone

The article denies that is how the AI would be used.


Just a matter of time.


Regrettably, in the world as it is, we need--extraordinarily--competent people working on our weapons of mass destruction, too (in particular, nuclear command and control). Not a job I'd want, and not one I'm happy exists, but one I'm sure grateful someone is doing.


Drones are a worse weapon than a man on the ground with a gun, because they have asymmetric risks. The country with the drones doesn't risk civilian backlash from casualties. They get to start wars without much political pushback. And that means they can start wars for less justifiable reasons.


Mostly curious but do you feel the same way about body armor? Body armor reduces casualties as well and makes war seem less risky?


"We kill based on metadata"

Phone numbers are the target identifiers.


It is even looser than that: activity profiles and a graph of agents that fit some vague markers.

They literally don't know who they are killing, only people engaging in certain patterns of behavior.

The whole thing is just a vaguely statistical murder game.


I had completely forgotten about that quote. Thanks for the reminder.


> There is a human pilot behind every death.

Google's work will now ensure that there won't be. Even with a 100% certain target identifier, it will hit the right target about as reliably as YouTube correctly demonetizes videos.


> Would we be fine with the military killing the exact same people if the pilot was simply on the aircraft that launched the missiles or bombs?

No.

> If your problem is with killing people, it should be with the people who ordered the killing and not the specific weapon that was used to carry out the killing.

In the context of this discussion the problem would be with the weapons designers.

That said, to imply drones are just like previous weapons is naive. The ethics of warfare change over time as the weapons used to wage it change. And death in war being ever more abstracted away via drones (a process that began with the first thrown rock and continued up through the carpet bombing of WWII and Vietnam) is certainly worth discussing.

If nothing else, drones raise the question of scale - they allow for many more targeted killings.

Until the day war becomes an ugly memory we should all do as much as possible to examine and understand the forms of dehumanization it produces.


And if you disagree with the people ordering the killing, you might not want to fulfill their wishes for new weapons improvements, knowing that they'll use them for killings you disagree with. And if it makes killings politically easier for them, then it's in your interest not to do that. Why is that illogical?

Let's say your neighbor starts poisoning all the pets in your neighborhood, would you make poison for them because it's not the poison doing the killing?


What if the one that ordered the killing is an AI system designed by Google? Who is to blame? Remember that clients don't know the full implications when they ask for a service.


> I don't understand this logic. Drones are a weapon. They are not any more evil than any other weapon that the military uses. They aren't killing people autonomously. There is a human pilot behind every death. The pilot just happens to be hundreds or thousands miles away from the aircraft. Would we be fine with the military killing the exact same people if the pilot was simply on the aircraft that launched the missiles or bombs?

Not "fine", but I think "maybe a little bit less horrified" might be accurate, because a remote video feed apparently makes it even harder to tell what you're shooting [1]:

What the public needs to understand is that the video provided by a drone is not usually clear enough to detect someone carrying a weapon, even on a crystal-clear day with limited cloud and perfect light. This makes it incredibly difficult for the best analysts to identify if someone has weapons for sure. One example comes to mind: "The feed is so pixelated, what if it's a shovel, and not a weapon?" I felt this confusion constantly, as did my fellow UAV analysts. We always wonder if we killed the right people, if we endangered the wrong people, if we destroyed an innocent civilian's life all because of a bad image or angle.

Another former operator said similarly [2]:

Bryant stared at the screen, frozen. “There’s this giant flash, and all of a sudden there’s no person there.” He looked over at the pilot and asked, “Did that look like a child to you?” They typed a chat message to their screener, an intelligence observer who was watching the shot from “somewhere in the world”—maybe Bagram, maybe the Pentagon, Bryant had no idea—asking if a child had just run directly into the path of their shot.

“And he says, ‘Per the review, it’s a dog.’ “

Bryant and the pilot replayed the shot, recorded on eight-millimeter tape. They watched it over and over, the figure darting around the corner. Bryant was certain it wasn’t a dog.

Though in any case it seems like quite an awful argument that distancing yourself from whomever you're killing should be fine because killing-isn't-okay-anyway-so-whatever.

[1] https://www.theguardian.com/commentisfree/2013/dec/29/drones...

[2] https://www.telegraph.co.uk/news/worldnews/northamerica/usa/...


>”Would we be fine with the military killing the exact same people if the pilot was simply on the aircraft that launched the missiles or bombs?”

No, that wouldn’t matter obviously. Drones are only relevant because that’s what they are using: if it were helicopters, and Google were being asked to make computer-vision-enhanced helicopter dashboards to make the program more efficient and more lethal, people would have similar concerns.

This program in particular is troublesome because they have decided since it is out of self defense[0] it can be done outside of a congressionally declared war without public oversight[1] and to even target American citizens if the executive branch alone secretly determines they are linked to Al-Qaeda or another terrorist organization[2]. It has also been documented hitting first responders to ensure a target is not rescued [3] which would be called a war crime if anyone else did it.

Project Maven expands this new (in my view, extralegal) kind of warfare by making it more lethal and more automated. It scales this program up in a way that, say, making bulletproof vests for soldiers or even tanks does not, because those are used in old-fashioned congressionally declared wars.

[0] https://www.justsecurity.org/wp-content/uploads/2016/04/Egan...

[1] https://www-m.cnn.com/2012/08/15/opinion/oconnell-targeted-k...

[2] http://investigations.nbcnews.com/_news/2013/02/04/16843014-...

[3] https://www.theguardian.com/commentisfree/2012/aug/20/us-dro...


In the immortal words of Rob Delaney, “Guns don’t kill people. People who say “Guns don’t kill people” kill people. With guns.”

s/guns/drones/


"there is a human pilot behind every death."

That statement won't last long term.


No, it'll become "there's a Google programmer behind every death".


The problem is that Google wants to give the military better weapons to kill people, knowing full well that they're intended to be better at killing people, and they're using the data we voluntarily gave them under the illusion that they wouldn't do evil things with it.

Evil things like making better weapons for people who kill innocent civilians.


> They aren't killing people autonomously. There is a human pilot behind every death.

Oh, so we don't need these weaponized AI developments then? But I suspect that with the AI you could more easily escape responsibility by blaming the software. Who faces the consequences when an autonomous drone bombs a school? "Oops, sorry, it was a bug."


More than the blame, it will dehumanise those deaths. Like a tree falling in a forest....


Weapons are not all equal. Chemicals can be used as weapons. Most of society generally agrees not to participate in chemical warfare. Some weapons are more evil than others.


> Most of society generally agrees not to participate in chemical warfare.

because of uncontrollable collateral damage. Same with mines that can't be deactivated.

Drones, on the other hand, promise precise strikes, with little to no collateral damage.

If killing of "enemy combatants" can't be stopped, why not make it as precise and accurate as possible, and prevent collateral damage?


You do realize that drones are not precise at all? Plenty of collateral damage from them.

That is, they are precise in targeting a specific location, but they blow up everyone at that location.

Let's not even talk about double-tap drone attacks. The second drone kills everyone attempting to help the wounded. Talk about do no evil...


I've often heard that civilian casualties would be higher without drones. First result on google seems to support it: http://www.slate.com/articles/news_and_politics/foreigners/2...


Drones enable perpetual war. For instance, the US has been continuously launching drone strikes against its ally Pakistan since 2004. I don’t think that’s even the current longest-running drone conflict, but I can’t remember which one is. Yemen is probably the biggest humanitarian military disaster at the moment, but that war is only in year eight.

Here is a list: https://en.m.wikipedia.org/wiki/Timeline_of_United_States_mi...


> What is "evil" about working with the military or developing weapons?

In case your question is serious: There are still some of us who believe in peace as an end-goal for us humans, as a race. See Kant's "Perpetual Peace: A Philosophical Sketch" (https://en.wikipedia.org/wiki/Perpetual_Peace:_A_Philosophic...) for a more detailed view on the subject. "Working with the military or developing weapons" are two activities that strongly work against achieving that goal (even though there would be some who would say that in the short term they do the opposite, i.e. that the military's purpose is to bring peace).


The military's purpose in modern times is not to bring peace, it is to preserve peace. I find it an important distinction, as "bringing" peace sounds like the military's purpose is to force outsiders who may be viewed as unpeaceful to adopt new culture which is viewed as more peaceful. To "preserve" peace implies, at least to me, that the military is focused on the peace of the home population, and may take action elsewhere only in preservation of that home population's peace.

Otherwise you have the crusades.


> "[B]ringing" peace sounds like the military's purpose is to force outsiders who may be viewed as unpeaceful to adopt new culture which is viewed as more peaceful.

Well, that definitely seems to be the purpose of the US military.


It was even the stated goal of neoconservatives launching US military actions, as 'nation building' and 'spreading democracy'. Honestly, "don't say bringing peace" feels alarmingly close to "obviously we did this, but don't admit to it".


The idea of the military preserving peace is like thinking a group of horny boys will preserve virginity.

The US is the most aggressive military on Earth. There's no peace in its presence.


That's a hypothesis without much backing; history shows an interventionist hyperpower reduces conflict.

We've seen Pax Romana, Pax Britannica, Pax Mongolica, and now Pax Americana.

There could be a shining utopia without military conflict in the future, but personally I'd rather not take the risk during my lifetime. Many equally matched opponents have historically resulted in brutal warfare.


Leaving aside Pax Americana to look at more settled history: none of those events were peaceful in global terms. There's a reason the general term is pax imperia.

The Pax Romana was 200 years of relative quiet after 700 years of effectively constant war, and followed by another 200 years of war. The thing most historians stress is just how far from peaceful it was. In practice, Roman-held lands were at peace because all resisting inhabitants were already dead, and peace on the border constituted a period of slowed conquest and retrenchment. As soon as the political situation deteriorated, both domestic fighting and foreign glory-seeking resumed.

The Pax Mongolica followed what was, per capita, the single bloodiest war of conquest in human history, wiping out perhaps 10% of Earth's total population and killing literally every person in several nations. The conquest stopped basically thanks to a succession crisis. In return? About 200 years of relative peace and good administration, before the Black Plague fragmented the khanate. (And if we want to get cynical? The improvements to trade and travel under super- and hyper-powers are a key vector for pandemics. It's not sheer coincidence that disease ended the khanate.)

The Pax Britannica... well, it was accomplished mostly with vicious oppression and butchering local populations until resistance stopped. In return, we got about a century of quiet empire, ending in a global war and yet another world-shaking pandemic (this time, the Spanish Flu).

The history of imperial peaces is one of temporary quietude after an empire has killed off the opposition and reached a pause in its wars of conquest. Fighting remains on the edges, and the peace usually ends in yet another bloodbath as suppressed violence is unleashed - and often as disease wipes out large portions of the empire. It's peace by comparison, not absolute quality.

(The one thing to be said for the Pax Americana is that it's been comparatively bloodless. Even across a double-dozen shadow wars and conquests, the act of conquest was vastly gentler than its predecessors - but the history of such things does not inspire hope.)


Thank you for this response. I don't argue that the way to that peace hasn't been historically bloody, but the pandemics can't be blamed on the empires. Those were inevitable as population and trade grew, a byproduct of urbanization.


> the pandemics can't be blamed on the empires. Those were inevitable as population and trade grew, a byproduct of urbanization.

Fair point. I guess my thought was that empires increase the risk of pandemics, since they tend to come alongside expanded travel and trade. Periods of peaceful empire see population and trade increase faster than technological growth would predict, and disease becomes a frequent limit on their durations. But it's certainly not an intended consequence, and the risk would be rising with urbanization and technological growth regardless.

It's something that worries me about global connectivity, and I think it's a widely underestimated risk. But it's hardly a risk I could hold against Pax Americana; in an era of consumer plane travel nothing short of utter isolationism would constrain the risk. At least in a relatively peaceful world we have proactive monitoring and countries that can work together on treatment and vaccines.


Not gonna say the US military isn't aggressive, but I think it is at the very least debatable that China and Russia are less aggressive.


It's worth adding that this idea is why we have the somewhat paradoxical concept of laws of armed conflict. One of the express goals of things like the Geneva Conventions is to ensure the prompt re-establishment of peace.

You don't end war by starting with the assumption that there aren't and won't be anymore before you're successful - or you would've been already.


Si vis pacem, para bellum? ("If you want peace, prepare for war?")


It's interesting that this line about peace even seems to be believed, which shows the triumph of propaganda. The military in all ages has always existed to either further or protect the economic interests of the people in power.


No, the purpose of the military in modern times is to push, enforce and maintain the interests of the hyper-capitalist ruling class.


> "bringing" peace sounds like the military's purpose is to force outsiders who may be viewed as unpeaceful to adopt new culture which is viewed as more peaceful.

You mean exactly like US Military did in Iraq, Afghanistan and Syria due to "terrorism" and in various Central/South American countries due to "communism"? Bush specifically supported US "bringing" (his words) peace and democracy to the Middle East.

This is not even the point. You're just reasoning very poorly. You found one "good" aspect of massive armies and then presented it as evidence of their non-evilness. But if you think more, you can see that massive armies can easily do bad to the world. The fact that they haven't done so yet (which, in fact, they have) does not change this. If tomorrow the US turns into the Fire Nation, you cannot stop it, I cannot stop it, no one can.


"There are still some of us who believe in peace as an end-goal for us humans"

I'm sorry, but I think in our current world historical evidence is stacked against you.

Humans are a political species. The ultimate form of political power within a specified region is to have the greatest capability for violence within that region.

Hence, if a nation state were to cede all capability for violence, it would effectively yield the ultimate political power to someone else.

Within this framework a state needs some force projection capability within its territory or it will sooner or later lose its political power.

Projecting force outside of the nation state's borders, or using the force to subjugate its own population, is a different thing altogether. You can have too much violence, but you can't really get rid of it with the current psychological configuration of the human species.


This is the theory, yes, and to a point it's even true, but the US military is _not_ being used in this way. There is no challenger for "ultimate political power" within the US region. Their spending and investment is well beyond what is necessary to keep up with the "competition", of which there is effectively none. It is merely the continual expansion and growth of an already vastly overqualified force which hasn't actually been needed to enforce "ultimate political power within its region" in decades, but instead has busied itself on a series of unpopular, unproductive, and devastating wars which have destabilized a region and are now causing problems worldwide.

Making this violence engine more powerful is evil, even if one accepts that a military force is 100% necessary.


> There is no challenger for "ultimate political power" within the US region

How quaint - "regions" don't exist with modern weapons. To say the US doesn't have to worry in its "region" is laughably naive.

> Their spending and investment is well beyond what is necessary to keep up with the "competition"

Also quaint. China is able to leapfrog decades and trillions of dollars of US defense spending thanks to modern technology advancements. One only needs to look at terrorism and militants fighting in the Middle East to realize that spending does not equal capability or effectiveness. In war games, China's sheer numbers overwhelm a technologically superior foe at a fraction of the cost.

> hasn't actually been needed to enforce "ultimate political power within its region" in decades

It is literally being used every second of every day to project power and maintain stability around the globe (not to mention daily active engagements over Syria/Iraq/Afghanistan etc.). Just because we haven't had a "traditional" war in years doesn't mean this power isn't a vital part of defense.

> Making this violence engine more powerful is evil, even if one accepts that a military force is 100% necessary.

Who fills this power vacuum if the US steps down? Will the world hold hands and sing peace songs if the US reduces military spending and activity?


While in general I agree with your realpolitik viewpoints, this example was perhaps not the best thought out when rationalizing the application of US military resources:

"not to mention daily active engagements over Syria/Iraq/Afghanistan etc."

That's like saying it's OK to first smash windows and then charge for replacing them, and being happy about how good you are at mending broken windows. It would be better not to break the fucking windows in the first place.

While the US is a stabilizing force in eastern Europe and the Pacific, it has totally fucked up the lives of millions in the Middle East with its interventionist escapades. The only charitable interpretation would be that the US views world history through the narrow, disfiguring lens of its own revolution, which was an astounding feat by any measure. It's as if they all believe that if you just give rebels guns, then a George Washington will be magically summoned to lead the country in good order to a proper form of post-despotic administration.

Of course there are lots of other reasons to interfere in other countries, but given how easy it seems to be politically to engage in these activities, I can't figure out what else could be masking common sense.


> this example was perhaps not the best thought out when rationalizing the application of US military resources

I agree :) However, my point was that these regional conflicts with "tiny" countries are anything but. Superpowers are funding these wars, and it's all part of a larger game of hard and soft power. Saying the US's only worry is nukes is just ... insane.

"While the US is a stabilizing force in eastern Europe and the Pacific, it has totally fucked up the lives of millions in the Middle East with its interventionist escapades"

Excuse my vulgarity, but shit has been fucked in the Middle East for over 100 years. The US didn't break any windows; if anything, they bombed those broken windows into dust, along with everyone else.

In 10-20 years, this same statement will apply to the Pacific as China becomes more "assertive". To say the US "got it right" here is only because China's aggression or "assertiveness" is so new.

"It's as if they all believe that if you just give rebels guns then a George Washington will be magically summoned to lead the country in good order to a proper form of post-despotic administration."

I agree this is what is "sold" to the US public and the world. However the real motivations are always monetary, even in the case of the American Revolution.


I agree with all you wrote above :)


>It is literally being used every second of every day to project power and maintain stability around the globe (not to mention daily active engagements over Syria/Iraq/Afghanistan etc.).

Yes, Syria, Iraq, and Afghanistan are good examples of the exact opposite of that.

You're not wrong, _in theory_. A military is necessary. Maybe even a military as large as the current one is necessary. Maybe even a _larger_ military is necessary.

All of those things can be true, but these drones are not being built to dissuade China from breaking their policy of isolationism and attacking. The growth of the US military is not happening to "maintain stability".

The theory is sound, and the propaganda paints a good picture, but the reason why adding more power to that system is evil is not just because it's a military and militaries are somehow intrinsically wrong. It's because that military is being _used_ in evil ways.

Again, to make it super clear: The US needs a military, and it needs a big one. The idea that humanity could achieve peace by just losing all the militaries is nice, but unfounded. It is possible for all of these facts to be true, and yet for it not to be true that literally anything a military does is thus moral and just, yeah?


"the US military is _not_ being used in this way."

I agree to a point. The US is "hooked" on war, as it feeds the profits and the subsidies of the military-industrial complex. Especially in the eastern Mediterranean and the Middle East, US escapades have been hugely destabilizing.

However, given the current world stage in total with such jolly players with extraterritorial ambitions as Russia and China, I would much prefer that US was ready to aggressively project force than not. As a counterbalance.

Especially as I live in Finland, I much prefer having US backed Nato force as a counterbalance to which ever clique in Kremlin has control of the Russian military antics.

This is anything but clear cut.


>This is anything but clear cut.

Definitely agreed.

For our current world conditions to hold stable it is important that at least one of our rich, basically politically aligned western countries has a worldwide military force. The US military has gone way past merely that point, however, and that is the point at which this starts getting "evil".


Let's first make the other NATO members more powerful before we keep increasing the US firepower.


This misses how we established a peaceful western Europe. The US provides military protection to cover Europe so that countries like Germany don't need a big military. This keeps France and Germany from contemplating shooting at each other because they don't see a threat there anymore.


> I think in our current world historical evidence is stacked against you.

There's a lot of factors stacked against us. It doesn't matter, we will continue moving towards this goal.


> In case your question is serious: There are still some of us who believe in peace as an end-goal for us humans

Refusing to assist the military in developing weapons doesn't really help that goal. A weak(er) military with a lesser technological edge could actually encourage war, as countries that aren't as bound by peace-seeking principles exploit the opportunity to wage war for their benefit. Basically, military weakness is destabilizing.

The refusal line of thinking makes a little more sense if you assume (in this case) the US will be perpetually militarily dominant, which I think is a bad assumption. It's also kind of a racist assumption, since it assumes other nationalities are incapable of technologically leapfrogging the US or negating its military advantages.

That's not to say there might be good reason to not want to work on particular weapons systems that have specific moral problems to them, but I'd feel like you'd need a lot of detailed information to make those calls.

I think the most defensible type of objection to assisting the military is at the personal ethical/religious level.


Definitely was serious =]

I also hope for peace but I do not believe there will be peace without our military. I suppose it's a difference of ideals but I do not see why it would be considered "evil"


Some people will see the evil in that the country you are working for is known as one of the primary aggressors and war initiators in history, one that continues to kill, directly and indirectly, hundreds of thousands of people in other countries, bringing them "democracy" where no one asked for anything.

Others will just see it as a necessity, and believe that every company should work with and help the government when asked, to defend the country and maintain the global status quo.

For some that conduct is a virus, for others just a symbiosis, but it's easy to see where both of them come from.


Well, what’s the alternative? If the US hadn’t been the “police” of the world, it could have been Russia or China or India; would we rather have had them? And don’t say Europe unless you consider France and the U.K. and others and their histories.

We don’t live in a vacuum. If you create one, someone will fill it.

I am loath to think of how much worse things could be. Despite the idiotic wars of Obama and W., I think the world is better off with the US guiding things rather than others.


An absolutely ridiculous false dilemma. It is not that if the US hadn't chosen to invade Vietnam and kill everything that moves, someone else would have; the US just opted to do that on their own. And it was not like they were there to free the people (from whom, anyway? they were not enslaved). There was no vacuum to fill. The same goes for Iraq and Afghanistan, where the US created the vacuum that was afterwards filled by the now-crumbling ISIS (though there, at least, the people were not free before).

And that's the core of the issue, and that is why these political comments are on point (which otherwise would be way too political for HN). By cooperating with the imperial US military Google is directly supporting one of the most evil forces in the world, by supporting the drone program they are directly supporting a program mainly used to kill civilians in the middle east. There is nothing good about this.


I don't think most people in the US will care about such issues until there's a program similar to the 1033[0], where hand-me-down autonomous drones are deployed against the civilian population stateside and weddings start getting double-tapped by MQ-9 descendants.

Out of sight, out of mind.

[0] https://en.wikipedia.org/wiki/1033_program


"And that's the core of the issue, and that is why these political comments are on point... By cooperating with the imperial US military Google is directly supporting one of the most evil forces in the world"

Do you mean the US military is one of the most evil forces in the world or the US itself as a geopolitical actor?


I mainly meant the military there, however it is debatable how much you can separate the two. But that discussion would be imho too OT for here.


If you boil down your statement into "I would rather have the Americans kill these people than the Chinese" you will begin to see the real meaning of what you are saying.


Maybe I’d ask the Vietnamese, whom we fought, who they’d rather have, or the Afghans who fought the USSR, who they’d rather have, to get the meaning of what I’m saying. You can also ask the Pakistanis if they’d rather have the US or India, go ahead.


"Who would you rather be oppressed by?" isn't a very convincing argument that you're on the right side of history.

Just because there exist worse oppressors than you doesn't mean you're not evil as well.


Why are these posed as either/or? The Vietnamese/Afghan people would rather have either side than the dual sided proxy war they received.


Oh come on.

The US established itself as 'the world police' and built 900+ military bases around the globe to protect its own financial interests at the point of a gun.

Let's not pretend it does any of this shit out of altruism.


In many cases the countries have asked for the bases though. This is still happening today https://www.armytimes.com/news/2018/05/29/why-poland-wants-a...


False dilemma fallacy


The least fallacious fallacy next to the Slippery Slope fallacy.

If it's not US, Russia or China on top, then it is somebody else.

What had the Romans ever done for us?!


I do not think so. In this case, if we hadn’t become hegemonic, someone else would have. Look at the Middle East and all the jockeying around within. Same for East Asia. The question is: given that someone will be stronger and exert power, who, in a game-theory framework, would exercise their dominion with the least negative consequences? I’m confident most people would agree that the world is better off not having succumbed to Comintern communism.

But regardless, what were the plausible alternative realities that you think were viable options given what we know?


I also work at Google, my opinions are my own as well.

I don't work in the US. I'm not a US citizen. I don't support the actions of the US military. Hell I _really_ don't want to be involved with the weapons development branch of the US military. That is one reason why me and a lot of Googlers are opposed to this.

Would you be happy for Google to work with the military of another country you have zero control over that has very questionable motives?


Think of it the other way: if Lockheed Martin were to start a Gmail or GDocs equivalent tomorrow, would you use their services knowing how closely they work with the military? Would global users be comfortable using it? The whole PRISM fiasco isn't far from this. What's stopping the Pentagon from buying access to users' data? It's a trust thing, imo.


>Think of it the other way: if Lockheed Martin were to start a Gmail or GDocs equivalent tomorrow, would you use their services knowing how closely they work with the military?

Well yeah. The term "military grade" is one that tends to sell products.

>What's stopping the Pentagon from buying access to users data?

If there's no end to end encryption, I assume some three letter agency already has access.


I make use of GE related products all the time and GE does military contracting. I fly on Boeing aircraft and Boeing makes military aircraft.


Yeah, but GE isn’t in the business of collecting personal data about you and selling it. Google’s business is data, not jet engines.


> if Lockheed martin were to start a Gmail, GDocs equivalent tomorrow, would you use their services knowing how closely they work with the military?

Them working closely with the military would not be the reason I would choose not to use their service considering that almost all emails already go through NSA servers.


> What is "evil" about working with the military or developing weapons?

That your work will be used to kill people?

I'm kind of surprised that needs spelling out.


But perhaps the idea is that of deterrence, i.e. that you don't actually plan to use the weapons unless someone forces you to use them.

See Mutual Assured Destruction.

Or you can use them defensively. Killing is evil, but if someone points a gun at you, would you still feel evil if you killed them instead?

Weapons can also neutralize weapons (instead of people), e.g. missile interception.

Further, it might be possible to develop effective non-lethal weapons (military equivalent of taser).

Finally, developing weapons can also mean that weapons become safer. For instance, less collateral damage. Or weapons that can be controlled better, so they can't be used by people who shouldn't use them.


Do you believe that having a military is evil? Serving in the military is evil?


That's an intellectually dishonest question and you know it, you ought to be ashamed of yourself.


How? It's a legitimate extension of the previous thought. If it is viewed as evil to develop instruments that would kill people, is it also evil to use those instruments to kill people? Likewise, if no one developed instruments to kill people, the military would be an ineffective organization at the task that it's responsible for, and thus would the organization also be considered evil?


Your technology will ultimately be used to kill people. Bear in mind that the US electorate showed itself capable of crowning an unhinged TV show host as its commander in chief.


I can disagree with my country's politics and still see value in my country's military.


For context, read the testimonial of a drone operator.

https://www.theguardian.com/world/2015/nov/18/life-as-a-dron...

Now imagine removing decision making from drone operators and making the thing dumb and autonomous, so a computer vision based solution just shoots anything resembling its target.

If shit goes south, just blame the machine. It's nobody's fault. It's a plausible deniability paradise.


I sometimes wish the USA would have learned the same lessons from WWII as e.g. Germany did:

See, if your military or government is killing people, and you work for them, assist them, or let them, you are also killing people. We have a saying: "Imagine it's war and nobody shows up".

I fear this discussion is about whether you follow the narrative that US military drone strikes are a purely defensive measure, unavoidable, or serving some greater cause. Large parts of the general public outside of the US, at least, are of the opinion that the US military and associated mercenary companies have been (and still are?) killing (innocent) people on the other side of the world, for dubious reasons. Helping them build weapons, under this assumption, equals helping them kill people.

Even the most accurate AK-47 is intended to kill people.


The comment that you quoted is missing the point a little bit. We are talking about using AI to power weapons. That is uncharted territory and a decision that should not be taken lightly or made behind closed doors.

You have to understand that many people both at HN and at Google are not Americans. The United States is founded on great democratic values, but in the past has sometimes failed to uphold them in relation to other countries (dropping atomic bombs on Japan, the wars in Vietnam and Iraq, torture at Guantanamo, and the NSA's PRISM program, which directly affected Google).


One issue is that this is a lot different than an aircraft carrier cannon. Those are super expensive and limited in where and how they can be used. Also, the bigger fear is that we don't know where this will stop. Great, so drones are being used in war for strategic strikes. Given the current administration, how long before they might be used to track down people near the border? How long after that to track down people who publicly criticize the administration? The biggest issue is that this enables an Orwellian society that actually scales, and scales quickly.

And for anyone playing at home, the "if not us, someone else" argument doesn't wash. We as a society have to draw a line. Maybe someone else will do it, but we need to fight them and not allow it, not just take our money and put our heads in the sand. Just like the backlash about the NSA putting backdoors in routers and the more recent announcement of Amazon enabling a police state by scanning and identifying faces in pictures and videos. I'm worried that this will continue to push us to some pre-crime bullshit, but instead of oracles with magic powers, it will be some half-assed AI based on factors like race, gender, neighborhood, and time of day.


If we're headed to an Orwellian future, drones have nothing to do with it. The fault isn't in a weapon system but is in us.


"Maybe someone else will do it, but we need to fight them and not allow it, not just take our money and put our head in the sand"

China & Russia are doing it. You can't fight them without ... fighting them.


Maybe we could spend our resources building systems that destroy drones instead of an arms race. I'm not saying there are easy answers but being defeatist is not my approach. I wonder what good we could do in this world if we really tried.


Not so much defeatist as it is being a realist about the threats facing you, and the world.

Maybe we could spend our resources building systems that destroy drones instead of an arms race.

That's just another ... weapon ;)


That's fair, it is a weapon. I would argue the difference is that this weapon prevents a 24/7 global police state, instead of enabling one. It will be very interesting to see what this looks like in the next 10 years. I personally already find the Chinese social credit system super scary and worry about an Orwellian life. As always, who will watch the watchers?


> What is "evil" about working with the military or developing weapons?

Watch this video[1] and realize that Project Maven is contributing to and enhancing the capability of the slaughter drones, and that the current U.S. President (or future presidents), or anyone in the chain of command above the drone operator (including the operator), can kill anyone (military personnel or not, in a war or not, in conflict zones or not, American citizens or not; yes, there was a precedent under President Obama) with or without due process, with Google's AI. Doesn't that give you chills?

Yeah, yeah, I know: if Google didn't do it, someone else would. But for a company that has claimed "Do No Evil" as its motto from early on, I hold Google to a higher standard. Obviously it has fallen from the moral high ground. Hence, being evil.

[1]: https://www.mercurynews.com/2017/11/20/watch-out-for-killer-...


Well, "evil" is a point of view of course. You don't even have to be a pacifist to reject working in the weapons industry. To me it's like working for a gambling site or a hedge fund. I'd rather work for a mission that makes the world a better place, not worse.


Working for the military by building superior exoskeletons and armor/weapons for soldiers, as well as gear-carrying robot dogs, is pretty okay if one realizes the need for a military.

Then these soldiers go on site in another country, risk their lives, and take out specific bad guys without shooting too many bystanders.

You can then argue whether those were really bad guys and whether that specific operation needed to be carried out.

Building drones which are controlled remotely (or, even worse, by AI) to strike a wedding with kids, where possibly a few of the people present were the bad guys, is NOT OK.

Drones facilitate political/military cop-outs.

For extra credit you can watch a few seasons of Homeland. Sure it is fiction but the issues are there.

To reiterate: even if you support working for the military, there is a difference in what kind of weapons you work on.


I agree that there is a difference in what kind of weapon one works on. I clearly view chemical weapons as both horrible to use and useless as a deterrent. However, I can't claim I have the moral authority to dictate with perfection what falls across the line. Thus I am not going to judge other people's decisions with such passion.


And this is why “don’t be evil” is a hopelessly naive statement.

Some would define US military actions as just and civilian casualties (for example) as unfortunate. Others might define them as outright evil.

Almost nobody thinks they are “evil”; there's usually some kind of justification.


So, withdraw and let them duke it out?

It’s not always clear cut. Trump wants to pull out and people call him isolationist. Obama tried to get involved and lit a powder keg. It’s not easy. W, sure, he messed up.


What is evil about murder? If you honestly don't think that is a bad thing then I will grant you that for you, working with the military is fine.


Here’s a good summation of the inherent moral problems of this project by the International Committee for Robot Arms Control:

https://www.icrac.net/open-letter-in-support-of-google-emplo...



Because better weapons - especially autonomous weapons - lower the barriers to starting wars. Instead of being started for defensive purposes, wars get started for resource control, or even for business reasons, for the well-connected.


All wars are about resources.


Disclaimer: I also work at Google

I think that the problem is more specific than working with the military or developing weapons, and it's important to disentangle the broader concerns about military work from the concerns about this specific project occurring at Google. I personally would not like to work on developing weapons, but it's definitely much harder to defend a position such as "building ships for the U.S. Navy is evil".

The problem with Project Maven is plugging A.I. into drones. Sure, you can argue that A.I. today is limited and the technology isn't there to make them autonomous. However, perception/object detection is a huge component of what would be needed as part of an intelligence that can run a drone autonomously.

Then you arrive at basically two questions:

1. Is Google's participation in Project Maven making it easier for the Department of Defense to develop fully automated drones, or will they always leave a human in the loop?

2. Is enabling the Department of Defense to develop automated drones ethical?

#1

Yes, Project Maven enables automated drones. Sure, the Department of Defense could get automated perception technology from somewhere else, but that is in fact what the parent post is proposing Google should let the Department of Defense do. Also, A.I. advances very quickly and if the project continues there may be other ways in which Google is enabling full automation of drones, for example reinforcement-learning based control. I personally do not trust that the Department of Defense will not develop automated drones if they have the capability to do so. This is because they will probably see it as an arms race, and automating drones will serve most of the aims of their organization far too well to be put off indefinitely.

#2

I believe it is unethical to facilitate the development of fully autonomous, intelligent weapons. It's hard to make a stronger case than the video someone else has linked elsewhere in this thread: https://www.youtube.com/watch?v=TlO2gcs1YvM

- The potential for chaos and destruction is very high.

- The capability to find and eliminate targets will likely precede the ability to make ethical decisions, and human ethical oversight of kill decisions will be reduced, especially if the drones are scaled up.

For more on this, check out http://autonomousweapons.org/


I don't see how the desire simply for defense can avoid developing autonomy in micro-drone interceptors. Oh no, a bad-guy drone is flying right toward the pope! Deploy the autonomous interception drone to stop it. No way a human pilot can conceivably defend against this type of threat. Which means we need autonomy to protect, and that leads to the dual-use dilemma.


> What is "evil" about working with the military or developing weapons?

I wouldn't use the word evil, but I feel it is inappropriate, since Google is a global company. We should be more precise, it is working with the military of one specific country, and developing weapons for one specific country (U.S.).

I am also a user of Google services, which collect my personal data to provide me with a more personalized experience. Okay. But what if, hypothetically, a war breaks out, and the army of my country fights against the U.S. army, which is helped by Google? It just doesn't feel right.


I tend to agree with you, but as a question when would you picture the actions of the US military indeed becoming an overall negative on society? And then at which point would enhancing said capabilities itself also then become undesirable?

It's a tricky question since a strong military is obviously necessary to protect a nation, yet at the same time - our military's purpose has long since shifted from defensive to offensive - and those offensive actions have been of very dubious value at extreme cost, both monetary and in the loss of life inflicted and suffered.


Imagine French or Russian drones flying around remote parts of the US taking out targets that have been classified as criminal by that particular government. And then imagine your family lives in the same remote area. No one should be happy this is happening anywhere in the world.


>What is "evil" about working with the military or developing weapons?

Well, they kill people, see.


And why is that evil?

Are people who join the military evil? Police?


From outside the US (which is where I'm from) the US military doesn't look very benevolent. Do I want my data to be used to help causes I believe to be harmful to human life?

No! It's that simple.


Then shouldn't you be morally opposed to using the Defense Advanced Research Projects Agency network? Or the Global Positioning System where the primary purpose is to position nuclear and conventional weapons globally on Chinese and Russian forces? Not to mention abhorring the use of semi-conductors that were created solely for the purposes of the US Department of Defense?

Be consistent!


So I have to cheer for coups and the destruction of the Middle East because I appreciate the Internet and GPS?

That's not consistent, that's lacking nuance. On the spectrum of things the US military does, I see the drone program closer to the former than the latter.


Sooner or later we will find out why it is a bad idea for one company to be everywhere.

There is a reason why we separated the executive, judicial, and legislative branches.

But that's a lesson to be learned in blood.


Working on weapons is okay; working on AI-controlled weapons, however, is very dangerous. There are many unsolved ethical questions around this, such as: who is responsible for the civilian casualties? Soldiers, pilots, sailors, and commanding officers have rules of engagement, so I ask: if the rules of engagement are coded and they glitch out, causing the murder of innocent people, who killed them? Is it the software engineering team?

What if the training data of the AI is skewed and furthers the violent oppression of a minority, who's responsible for that? Is it the data collectors? The software engineering team?


Because the drone program is highly unethical and responsible for the killing of countless civilians?


> What is "evil" about working with the military or developing weapons?

If you work on weapons which are then used to kill people, you are complicit in those murders.

AI automation of weapons and surveillance systems is probably going to kill a large fraction of living humans. I'm not talking skynet, just about automation we could easily achieve right now. Anyone who works on those systems will be complicit in the subsequent genocides.


There's domestic social anxiety that these tools/weapons will just as likely be used to target US citizens. In this political climate, it seems tone-deaf to ignore those concerns.


> What is "evil" about working with the military or developing weapons?

I guess this question is only asked by those who are confident that these weapons won't be used against them.


"By doing the military's bidding, then Google is now far scarier than Microsoft ever was."

I understand your point, but Microsoft does an awful lot of business with the military.

Last year they won a contract to put every employee in the DOD on Win 10(1).

Not sure it's particularly MORE evil for Google to supply AI support to the military than for Microsoft to enable the office drones to systematically organize weapon shipments, track troop deployments, manage body counts, etc.

(1) https://www.geekwire.com/2016/microsoft-wins-927m-support-co...


Google will actively help develop vision systems for weaponized drones. Which, whether Google wants to admit it or not, means they are developing systems which are going to target and kill humans.

Microsoft sells a product that literally anyone (but Iran and North Korea) can buy. You can buy it or not. And you can buy extended support for it (or not).

Do you really consider these the same?


Microsoft does an awful lot of business with the military

There’s never been any allegations that everyone’s Word and Excel documents are being read by MS, has there? Or that they read every email sent via Outlook and Exchange? Not the same thing at all


> Because it is a conflict of interest.

No, it's not. That's entirely subjective. It's only a conflict of interest in the heads of people who believe the U.S. military is evil.

The problem is that Google has a different interpretation of evil than yourself, and you're struggling to cope with that reality.


So the quote can basically mean anything you want it to mean? You just have to bend your definition of evil into whatever shape suits you.


There's no bending necessary. Google defined [1] what they meant in their IPO filing.

> We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains

So it boils down to the simple interpretation: Is helping the U.S. military consistent with doing good things for the world? Public opinion polls conducted in 2016 indicated that 78% of Americans trusted the military to act in the interest of the public [2]. So, at least among Americans, I think it's a reasonable conclusion that a majority of people view the military as doing good.

[1] https://www.sec.gov/Archives/edgar/data/1288776/000119312504... (page 32)

[2] http://www.pewresearch.org/fact-tank/2016/10/18/most-america...


For what it's worth I also worked for Google for many years. And I'm hardly a shrinking liberal violet. I think it was stupid to fire Damore.

But the mental gymnastics on display in this thread are depressing.

Is helping the U.S. military consistent with doing good things for the world?

Google has always been a global company with most of its users and employees outside of the USA. You can't even stop yourself talking about "good things for the world" here, even though you then immediately go on to talk about Americans only.

But even if a slim majority of Americans support the Pentagon's drone strike program, the VAST majority of the world hates it:

http://time.com/2986118/drone-strike-poll-pew/

Anyway, we don't need opinion polls in this case. A definition of "evil" that doesn't include assassinating defenceless people is utterly useless and might as well be abandoned. The US isn't at war with Pakistan, and Afghanistan is hardly a country to begin with; neither country has ever posed a military threat to the USA. So the drone program has simply become a self-perpetuating bureaucratic machine that eats lives on the flimsiest of pretexts, based on the most damningly absurd 'evidence' (like mobile phone signals), with the obvious consequence of mass deaths of innocent people. For instance at red weddings:

https://www.nytimes.com/2013/12/13/world/middleeast/drone-st...

... not just once, but multiple times:

https://en.wikipedia.org/wiki/Wech_Baghtu_wedding_party_airs...

I find myself so saddened by what Google has become.


> and Afghanistan is hardly a country to begin with, neither country has ever posed a military threat to the USA.

I hate to call out the elephant in the room, but there are about 9,000 people who might disagree with you---3,000 we'd have to assume, because they are dead.

Nation-states don't have the excuse of "We're barely a country" when their territory can be used to launch an asymmetric attack by private terrorists on another nation. At best, that's an abrogation of responsibility. Since the Taliban was in charge at the time, I'd call 'abrogation of responsibility' way, way too charitable an interpretation.


9/11 was not a military attack, it was a terrorist attack, and terrorism is a law enforcement problem. As evidenced by the fact that the USA has been killing 'terrorists' in Afghanistan and Pakistan for nearly 20 years now and yet there are still regular Islamic terrorist attacks in Europe.


The policing and prevention of terrorist attacks on a nation's home soil is a law enforcement problem.

Law enforcement can't handle international issues when the originating nation doesn't have cooperative domestic law enforcement. At that point, the issue where attacks on a nation's home soil are being coordinated in the territory of an unresponsive or hostile foreign government becomes a military calculation.

The military is a ham-fisted tool to replace the job of domestic law enforcement, but in circumstances where the alternative is "nothing," it's depressingly better than nothing.


> with the obvious consequence of mass deaths of innocent people

So the drone program has a very ineffective targeting algorithm, one so bad it has absolutely catastrophic results, and we should be upset they've tried to hire the one company who spent the past two decades refining their world-class targeting algorithms? Nobody targets better than Google.

If we want to shut down the drone program, by all means I support that notion, call your legislators though I have a feeling with the current abomination of an administration in place, it's going to fall on deaf ears for a while. But in the meantime, while we are faced with the reality of a drone program, I'm elated to hear they might be getting some much needed help to stop killing innocents.


If you think a lack of quality neural networks is the reason the USA keeps killing innocent people and covering it up, I don't really know what to say to you.

The reason it keeps happening is because there are entire divisions of the military that are paid to drone strike people and nobody, at any level of the government, has the backbone or strength to say "enough is enough". They already had multi-billion dollar targeting efforts through the NSA, the world's largest and most sophisticated SIGINT operation. It didn't do anything, it just gave them the confidence they needed to pull the trigger more often.

The drone operators kill people because that is their job. Their kill rate will not go down because of a 20% DNN driven boost to image analysis algorithms. It will go up, because strikes on things that looked like terrorists but were actually empty houses, funny shaped rocks etc will go down, freeing up missiles to use more on actual people (their goal).

Calling legislators won't help. Obama claimed he would cancel the program, but was a phenomenally weak leader who was immediately manipulated into a status-quo "middle ground" position of: OK, we'll keep drone-striking people, but I will personally approve each one. As if his decisions couldn't be completely determined by the people controlling his access to information!

(I'm not even American, by the way).


You didn't really provide an answer to why the USA keeps killing innocent people other than asserting the US military is designed to kill people (which is absolutely true) but I'm not seeing the reasoning why innocents are killed. Nearly everyone in the world agrees that innocents should not be killed, and I think it's fairly safe to assume the US military does not pride itself on the deaths of innocent civilians.

Obama's administration misrepresented the drone program for its "surgical precision" and "ability with laser-like focus to eliminate the cancerous tumor called an al-Qaida terrorist, while limiting damage to the tissue around it" [1]. Seems to me like a targeting issue.

So I guess my question to you is: If the drone program was somehow 100% precise and it only affected people the US government deemed as "bad people", would you still be against it? If so there's no real point in our sub-discussion, as you're wanting Google to somehow shut down the drone program and I want Google to help reduce their false positive rate to zero.

As you've said, neither of us (nor Google) has the power to shut down the program, so I'd rather at least try and support doing good by reducing the deaths of the innocent until some sort of miraculous shift in US politics can result in a more permanent shuttering of the program. And I don't see that happening as even Bernie Sanders supports the drone program [2]. It's here to stay whether people like it or not, so why not focus on making it more accurate.

[1] https://www.npr.org/2012/05/01/151778804/john-brennan-delive...

[2] https://www.theguardian.com/us-news/2015/oct/11/bernie-sande...


If the drone program was somehow 100% precise and it only affected people the US government deemed as "bad people", would you still be against it?

Yes! Obviously! Goodness.

Look, civilised societies have many, many checks and balances to prevent government officials arbitrarily killing people they happen to dislike. This extends to forbidding the death penalty in all the countries where I've lived. This is very important, critical in fact.

The USA routinely classifies people as "bad" without any idea of who they are, what exactly they've done (if anything) and certainly without ever thinking about why those people might have gone "bad". And it never can whilst there are people whose salary is linked to killing people.

The basic disconnect here appears to be your belief that there is a finite supply of terrorists/bad people/whatever, generated via some natural process, and the USA needs to be able to take them out with lethal force and no trials, in foreign countries. If it spends enough time, money and skill on this problem it will eventually be done, and thus the goal is to avoid killing "not bad" people whilst ensuring the "bad" people are taken out.

Whereas my view, and the view of most people in the world according to opinion polls, is that the USA arbitrarily reclassifies people as "bad" in order to keep the drone strike programme filled with targets. The more budget the various drone controlling agencies have, the more targets there will be. The supply of "bad" people is therefore infinite and drone strikes will never end, until the day the budgets for them are zeroed out.


> So I guess my question to you is: If the drone program was somehow 100% precise and it only affected people the US government deemed as "bad people", would you still be against it?

This question is nonsense. We already have a system for determining who is "bad people" and doling out punishment in a fair way. It's called the criminal justice system. The drone program can't ever be "100% precise" because it's acting as the "executioner" of the "judge, jury, and executioner" that is US foreign policy. Even if it killed only the intended target, the targets are chosen through extrajudicial means.

If you want to counter that the US is "at war" with al-Qaida, I would ask: when does this "war" end? What are the victory conditions? Many (most?) of the people who were in al-Qaida in 2001 are already dead - yet the drone strikes continue.


I also use Google stuff, to an extent, but I never fooled myself: corporations are all about profit and their management board's goals.

Those that ascribe human behavior or ideas of the greater good to multinationals are fooling themselves.


Google is very much on the verge of evil, but isn't quite there yet. Meaning that, as a daily paying user, I'm at the stage of cognitive dissonance, but not quite at the point of walking.

But over the past few days I have given myself a long-term life-goal:

One day I will delete my Google Account.

I will try to make all decisions in accordance with this. I already don't have a Google (or Apple) phone, and I use my own domain name for email. But there are so many tiny little ways you can lock yourself in (e.g. social auth into a valuable website).

It's a big step to pull the plug and expunge Google from your life. But avoiding taking steps in the wrong direction at least buys you some time.


Facing the need to argue with a person trained intensively from an early age to ground all conjecture in fact and in the human cost of mistakes, i.e. a general, I will gladly take that situation over any necessary argument with a politician, even a more sensible type like a lawmaker. Our world's problems are created by distortions of reality that no soldier would indulge or permit; none I ever knew, at any rate.


>We put our faith into them, in the hopes that they would honor their creed, that they would do no evil.

Working for the US military is not doing evil.


It is if they are doing evil things, which they have done, a lot.


Not to blame you, but:

>>> The public has put their trust into Google for 20 years.

whose fault is it?


Lol, this can be fun-

Attempt 1) Google's fault! They made an awesome product! (And even promised at the start that they “won't be evil.”)

Attempt 2) The govt's fault! They didn't stop these firms from cornering the whole market! I don't care that the govt isn't prescient!

Attempt 3) My fault! I was not able to withstand, or educate myself to deal with, the massive amount of technological manipulation and awesome product design thrown my way!

I think blame is a non workable concept here.


Boy you've bought into the Silicon Valley delusion fully haven't you. Lol, so many people on HN are shockingly ignorant of anything outside of their Uber-Liberal SWE bubble.


The US is directly and indirectly responsible for more civilian deaths than any other country since the Cold War ended.

Basic human decency should be the answer to why you shouldn't help kill even more.


I think you should read some stats on Syria and get back to us.


I think you should read some stats on Korea or Vietnam and get back to me


Neither of those conflicts happened after the end of the Cold War.


Yes, all those CIA-funded freedom fighters fighting the FBI-funded freedom fighters have nothing to do with the US.


I think that the leaked emails show that one or two execs within Google expressed the optimistic case for the eventual revenue leading from the project. That's how projects happen in companies. Some employee gets excited about the potential of the project and writes emails talking about all the huge numbers that project will post.

Most of those emails are wrong.

Maven may be wrong at any price or size. But Google's characterization of it as a minor project doesn't seem contradicted by these emails.


And why are such emails so optimistic and positive? Because of the bonuses that they might lead to. For creating killing machines...


If it was only meant to be a $9 million project, why take the risk? Google isn't that badly cash-strapped.


I don't know any of the details of Maven, but I've done a lot of federal work, and there are a few very good reasons to do insignificant projects, mainly based around past performance.

In order to do any work for the government, you have to show that you have done the work before to a satisfactory standard, and that you've done it for a type of agency similar to the one requesting proposals for the work.

So, while a given project might be nearly worthless, you do it so that you can say that you did it, so that you're eligible to do it again in the future. The side, but still major benefit of this is that it excludes would-be competitors from getting that qualification, adding barriers to them for getting similar work like this in the future.

Smaller companies, even companies that do exactly, and only, a given thing, often find it hard to get work doing their chosen thing because they don't have these past performances. To get their first qualified performance, they typically have to partner up with a larger company that has qualifications for work at the agency (think Boeing, Raytheon, General Dynamics, etc.), and do small bits of work as a subcontractor for them until they have enough documented performance to do the work on their own.

Whether or not these particular military contracts are worth much, it opens the door for a larger variety of contracts, which can be worth tens of billions per year.


If the prior contract reputation is the primary concern, couldn't google get good reviews from the feds on altruistic projects? The less important the contract amount, the easier it should be to choose an altruistic project (they tend to pay less).


Not for military contracts.


The US military spends a huge amount of money on "kind" projects -- e.g., not bombs and planes. The USS Mercy[0] is part of a multi-generational series of ships that go to help natural disaster victims all over the world. The entire military spends on vaccines, supplies, and all kinds of good things for many people. Even organizing philanthropic events would be an example of this.

My larger point is that you wouldn't have to start working with them on drones unless you wanted to get in good with their drone funding teams, who appear, by and large, to be dedicated to killing people remotely. [my opinion there]

[0] https://en.wikipedia.org/wiki/USS_Mercy


Because the military contracts are really lucrative. Many are worth billions on their face. Performing well on a 9 million dollar contract opens the door to larger contracts.


Isn't this, in itself, a huge problem? Shouldn't public service be a low-margin business? Could the fact that military contracts are so needlessly "lucrative" (despite the government having a literal monopoly on who you can sell advanced weaponry to) be why such a huge share of your taxes goes to defense?


Eh, government contracts come with all sorts of red tape that make that "lucrative" contract not so good once all the hidden costs are considered.


Yeah, there is red tape, but government contractors make this a core part of their business, so the relative risk is low.

For a new company, the risk is relatively high, because there's a lot of up front work that has to be done (auditing, compliance, security, etc). But once that is ironed out, larger contracts become easier and easier.

And then the only real risk is on a "firm fixed price" style of contract, where the contractor assumes all the risk of product delivery. But many others are "time and materials," where the risk is shifted onto the government. If you fail to deliver a product, you're not on the hook, because the government was paying for "time and materials," not the end product itself.

So yes, google may actually lose money on a $9MM contract, but probably not on the next $100MM contract.


Oh, military contracts always wrap up on time at the projected cost :)


Military, federal, state government, and even corporate projects all have dreaded overruns. It's bad all around.


Dear young, super-smart Googlers: If your knowledge of the Middle East and terrorism goes back only as far as ISIS/Al Qaeda, stop coding (or selling ads) and visit your local library. The history section is chock-full of wonderfully informative books that will inform your opinions.

You might find that Internet tropes about terrorism to be less than accurate.

For example, I just finished "Rise and Kill First" [1] a rather enlightening book about Israel's use of targeted assassinations going back almost 100 years. When I started reading Israel wasn't in the news, but then the whole Iran/US nuclear deal fiasco hit, and it made Netanyahu's actions all the more clear. Note, I'm not saying justified but rather that I now understand why Israel thinks and acts the way it does, and how sometimes this clashes with their normally close allies like the US.

1: https://www.amazon.com/Rise-Kill-First-Targeted-Assassinatio...

Before that, I read "Ghost Wars" [2] another beast of a book that covers US involvement in the Middle East going back to the late 1970's up to 9/11. Fascinating book about the struggle to dominate Afghanistan by regional and world powers.

[2] https://www.amazon.com/Ghost-Wars-Afghanistan-Invasion-Septe...

Before that, it was "The Brothers: John Foster Dulles, Allen Dulles, and Their Secret World War" [3] about the history of the CIA and the US's foreign policy in the last 100+ years.

[3] https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret/d...

Key point - These books look past recent events to provide a mostly complete understanding of these conflicts that you can't get with 20 minute Google searches and Wikipedia snippets.


Well beyond that, Military and Global Geopolitical research is a massive discipline that is studied most intensely by the Military itself.

Just like most disciplines, it's not possible to get even a basic understanding of this issue with only a year or two of study.


The irony of this particular piece is that they are publishing emails expressing concern about how the media might frame and distort such an item, while doing just that.

The email discussion is mainly about potential revenue if they were to win the JEDI contract; it's not about Maven or 'expecting the drone AI work to grow'.

Also worth noting that Google isn't the only contractor for Maven, yet most articles portray them as such.

As for the issue at large: Microsoft and Amazon have recently surpassed Google in market value, largely due to cloud. They also manage not to have their employees leak internal material to the press and protest publicly, and I think Google should look into achieving that.


Google probably can't change that, at least not overnight.

The reason Google is experiencing leaks and taking flak for this is that only Google ever claimed to be a superior moral actor. Amazon and Microsoft, for all their faults, never claimed to be much more than tech companies that want to make money - the extent of Microsoft's moral ambition was "computers are good, let's put a computer in every home", a goal which is hard to argue with given the attendant positive consequences of computing for society.

Google, on the other hand, has frequently made the following claims:

• It won't "be evil", whatever that means.

• It has an ethics board, for AI specifically.

• That it withdrew from the Chinese search market due to being unwilling to go along with the government's censorship any longer.

• That it "believes in the empowering and democratizing effect of putting information in the hands of everyone, everywhere"

• From its code of conduct, that "everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct"

And I would say in general, having once been a part of it, that parts of Google's workforce are quite convinced of their own moral superiority. Look at the relentless feminist virtue circus around the firing of Damore, a man whose crime was merely to point out that men and women have different interests and that's why so many jobs have unbalanced ratios. The company's top management went on TV and said they were convinced firing him was the "right" thing to do.

Well you can't do all those things and then get upset when employees start leaking behaviour they feel to be unethical. Live by the sword, die by the sword.

Summary: Google gets shit for working on the drone strike program for money, because (a) it has more money than it knows what to do with already and (b) it has constantly claimed to be a new kind of tech company that is ethically superior to the previous generation. Other firms lack either (a) or (b) or both.


Also, Google has reaped the reward of being a superior moral actor for a long time.

There was a good decade when Google was the coolest place to work in the country. Many of the smartest people channeled not only their time but their passion into propelling the idea of Google forward based on that understanding.

Some of the brightest minds happen to be idealists and visionaries (Gates, Musk, Wales) who want to work toward a post-war future where information is abundant and there is no scarcity. Putting power into the hands of the few seems like a step in the wrong direction to us idealists.


> to point out that men and women have different interests

I was nodding until this. Even if you agree with Damore, you can't pretend his argument was "men and women simply have different interests."


I've read the memo and that was in fact his argument. What do you think his argument was?


I've read it many times, and I suggest you read it closer. Try applying that same skeptic mindset used to defend him to the opposite perspective.


Google just removed 'don't be evil' from their Code of Conduct. The timing around the drone news is interesting. I wonder if this was a deliberate PR move (seems odd) or an unauthorized protest action from an insider who disagrees with developing weapon systems.

https://gizmodo.com/google-removes-nearly-all-mentions-of-do...


One of the contractual requirements of Google's acquisition of DeepMind was that their technology would never be used for military purposes. I wonder if Google did this behind their back, or if DeepMind is fully aware and doing this willingly?

"One constraint we do have— that wasn’t part of a committee but part of the acquisition terms—is that no technology coming out of Deep Mind will be used for military or intelligence purposes." -Demis Hassabis


Feels good but is useless.

Google has its own AI research operation that's not the same as DeepMind. But even if it didn't, all they'd have to do is wait for some third party academics to publish a paper that improved slightly on a DeepMind paper, and then they could use it and say the "technology" didn't come from DeepMind. After all, where does tech come from? Standing on the shoulders of giants and all that.


Is my moral compass off, or does war profiteering completely violate the whole "don't be evil" thing?


When you are dying perhaps your definitions change.

I hold the controversial (and often dismissed) opinion that Google is dying. I assert they are dying in part because they built a company around search advertising before they knew how to build a company; their initial success gave them the confidence and agency to create structural impediments to their ability to do anything but search advertising. And the search advertising market is not only saturated, it is getting more competitive, as the easy gains to be made there have already been milked for what they are worth.

I support that opinion on the meager evidence of how they more and more cravenly monetize properties that they previously kept free of such things, their repeated failed attempts at building any business outside of search that brings anything close to the margins their business model needs, and the lack of 'barriers' keeping other well funded and durable companies (Bing, Uber, Cruise, Facebook, Apple, etc.) from taking away everything that is special about them. They get less and less per click, and they spend more and more to buy traffic for their search boxes, as documented in their financial statements for the last decade.

They have repeatedly come under attack from outside agencies which see their use of, and exploitation of data about people as either anti-competitive or a violation of basic human rights. Those efforts have resulted in fines and legislation that has been cutting off other ways to monetize the traffic of their place in search.

From that misguided perspective, military contracts make total sense. What they have is a tremendous infrastructure that can be wielded by a relatively small number of engineers to do amazing things. For now, that capability is unique. By turning it from a cost center into a profit center (or even less of a cost center) it can help them make the profits they need to keep things rolling. When I was working there I got to see what spending 4 billion dollars a year on infrastructure buys you. It rivals what nation states can field, it rivals what the US government can field. Renting that out to a price insensitive buyer of a unique asset? Sounds like saving yourself from dying to me.


Define dying ?

I don't see how Google can go out of business in the next 10 years given their cash reserves. Eg: If they lost 2 billion dollars every year they could still keep doing that for decades.

I'm pretty confident that they'll still be around in some form 20 years from now as well.


I define dying as being on a path toward non-existence. And I agree that given their cash it can be a very long, very slow path. Sort of like Xerox or IBM or even Yahoo!. And just because there may be panic in the executive cubes because none of their strategies have worked so far, they may in fact find something to reboot them, wiser and stronger than before. Microsoft seems to have pulled it off, and all it required was replacing the senior management and tossing out everything the company originally held sacred (like the Windows franchise). IBM has done it in the past but is struggling in this round.

I think the cash is an interesting challenge for its own reason though. Historically, public companies that held large fungible reserves like cash were vulnerable to so-called "Wall Street Raiders." These are people who would do the math and figure out that the assets of the company were worth more than the shares would indicate; they would buy up a controlling interest and then take apart the company, paying themselves a tidy profit. Traditionally that can't work on Google because the voting power of the class B stock is so much higher than the class A stock, however if the stock price were to crash, it becomes possible to make a play for all of that cash. The math is convoluted, and the founders need to continue their current steady sales of stock (which converts to class A when they sell it), but that flip happens before they run out of cash. I wonder sometimes if IBM's buyback of its own stock has been a plan to prevent this from happening there.

But timing the demise is secondary to the awareness on the part of the senior management that they can see their demise as a real and tangible thing. That awareness leads to stress, and a willingness to do things counter to their established history in order to facilitate survival and mitigate that threat to their existence.

The original question was "Did their moral compass shift?" and my observation is that the fear of death is a large metallic object which, when it gets too close to ones moral compass, deflects the needle in perhaps unexpected ways.

That Google's fear of dying is growing, and that their moral compass is being subverted in the name of survival, is a hypothesis that explains their current behavior. I believe that the theory is both testable and falsifiable given information that we'll get going forward. As any good theory should be predictive, this theory predicts that as they fail to find a successful new business to supplant search advertising, the ways in which they attempt to make money will become more and more disconnected from what their corporate culture claims to value. Further, if they do find a successful side business, their corporate cultural claims will re-emerge and they will once again make choices based on their cultural aspirations rather than for the sake of additional revenue.

If they continue to do more and more evil things, even after their future survival is no longer in doubt, it will show that my theory is completely wrong, they were simply evil to begin with and dying or not dying had nothing to do with the public shift in their willingness to be evil.


When you say 'dying' are you referring to civilians or soldiers? No civilian supports killing by drone in their home town. Drone targeting is often indiscriminate and always unaccountable. Because drone kills are anonymous to anyone but the killer, it allows soldiers to act invisibly by remote control. And now we want to give that responsibility to robots? That'd be a truly terrible precedent for future warfare.

Would you be OK with your local police enforcing the law from 1000 feet? Using a rocket launcher? That's what we're asking the people of Afghanistan and Iraq to accept.

No company that actually believes in the maxim "Don't be evil" would participate in such a technology, much less promote it, or worst of all, lead it.


Drone targeting is often indiscriminate and always unaccountable.

This is just wrong and not helpful to the discussion. Have you read (or even seen thanks to YT) anything about how "drone targeting" by US forces is actually accomplished besides media outrage when a bomb misses? Mistakes are made, but targeting is far, far from indiscriminate. And being "unaccountable" is not exclusive to drone warfare.

"Would you be OK with your local police enforcing the law at 1000 feet? Using a rocket launcher? That's what we're asking the people of Afghanistan and Iraq to do."

If my local criminal network detonated a bomb at my local coffee shop, killing 50, 100, 200 of my friends and family members, and then proceeded to do so every week, yes - I'd encourage rocket launchers from 1000 feet, 10,000 feet or 10 feet if that was the most effective way to stop them.


You don't need to kill dozens of bystanders to misuse drone technology or to do it unaccountably.

As I understand it, the call for a drone kill of an insurgent target is made by the JSOC ...or... by a CIA officer in Langley. In the latter case, the judgment that a strike is sufficiently discriminate is entirely up to the CIA and its current set of political priorities for that target, which is NOT shared with the drone team or their chain of command, nor with any non-CIA entity.

I've seen no mention that the CIA is NO LONGER killing with drones (drone killing became one of the CIA's established practices in Afghanistan about 5 years ago, IIRC). This program is very much one that Google would serve in targeting drones, which is why I mention it.

No doubt the CIA would claim their reasoning behind each kill order was discriminate. But if my description of SOP is correct and the CIA is still in charge of many drone kills, that makes them unaccountable in any practical sense.

For some background on CIA's targeting of drone strikes:

"Transferring CIA Drone Strikes to the Pentagon" https://www.cfr.org/report/transferring-cia-drone-strikes-pe...


I guess that depends on whether you view either profit or war as intrinsically evil.


It's whether one views the intersection of profit and war as intrinsically evil, which for most people is a resounding yes. Humans are already good enough at killing and causing misery for each other without adding additional economic incentives into the mix.


But what if the US military wants to minimize civilian casualties and collateral damage as much as possible and wants technology developed to do so?


The obvious way to reach the goal of minimizing civilian casualties is to abstain from risky strikes and use smaller-yield weapons.

The us military wants technology that justifies them not making more conservative choices, and I don’t see any moral justification for that. Drones are useful for cheaply eliminating risk to one actor, thus allowing more risky behaviour to the side that has them.

But being a cheap risk mitigator is also all they are, they bring no new strategic capability, as evidenced by how easily their use was ramped up and down, so their main effect, the one that could provide some moral justification, is making war cheaper.


Unfortunately I'm not aware that minimization of civilian casualties is a US military objective.

Until I hear that it is, I will assume it is not, since it's much easier to achieve your mission of killing bad guys if you're also willing to kill good guys. You get more rewards for killing more bad guys, not saving more good guys.

Unfortunately that's how militaries and governments work.


Unfortunately I'm not aware that minimization of civilian casualties is a US military objective.

Not to put too fine a point on it, but even a basic search on the topic would yield a ton of official policy saying exactly that.

There is explicit guidance and long history of minimizing civilian casualties as a goal within the DoD, and such guidance is codified in multiple legal documents under the Uniform Code of Military Justice, Rules of Engagement and international treaties.

To wit: https://obamawhitehouse.archives.gov/the-press-office/2016/0...

https://www.globalsecurity.org/military/library/policy/army/...

This is also taught as a basic tenet in introductory courses for new Military Officers. One example here:

https://www.trngcmd.marines.mil/Portals/207/Docs/TBS/B130936...


A large set of such references are to be expected, of course. When a civilized people undertake killing on a large scale, you don't want others to think you act without justification. So you write serious administrative documents claiming your government actions to be justified and your military claims to minimize collateral damage.

But are such claims really true? Does the part of the military chain of command that actually does the killing take those claims to heart, as much as pentagon spokesmen do? Do the superiors of lieutenants and privates hold them to that standard, even when the laws of conduct become unenforceable by collateral victims, as in drone strikes?

In several US wars, such claims were not taken to heart. Vietnam became infamous in US history as lacking a sound casus belli, just as the 2003 Iraq War has. In messy wars like these, one of the principal casualties is often the enforcement of the rules of war, especially when it comes to laws of conduct that are still in the course of being written, such as the proper use of drones or AI.

This lack of enforcement becomes especially fraught when novel weaponry and its use is governed not by the military under its code of conduct, but by the CIA (who now runs most drone snipery), which AFAIK lacks an equivalent martial code of conduct.


Does the part of the military chain of command that actually does the killing take those claims to heart, as much as pentagon spokesmen do?

Again, you can research this for yourself and find that yes, it's deeply embedded in the culture. This is seen most clearly when military members who break the rules are reprimanded, discharged or jailed. You can move the goalposts all you like, but the data exists.


We don't know that Google will accomplish those goals.


Well I think that's what the Air Force wants to find out.


But you’re describing some commercial inventive to go to war, which is very different from selling goods and services to the military.


those 2 things are pretty heavily correlated. guns and ammo, military rations manufacturers, and yes creators of technology that support military operations have a huge economic incentive for war

the more war you create, the larger the military, and the greater their ability to spend on "protection" and "defence"

*edit: which is certainly offset pretty substantially by the hit to the country's GDP, because yknow... war is expensive!


There is no logic to your argument here at all. You’re saying that every company that supplies anything to the military is automatically wielding an unethical influence to create war on false pretences.

I think you can certainly find examples where commercial entities have played a suspicious role in that respect, but to say that simply by supplying goods or services to the military you are war mongering is completely devoid of logic.


He is saying that any company that supplies to a military has an economic incentive to make sure war happens. Seeing as they profit from the size of the military, that's not exactly far-fetched.


He’s making an enormous and baseless jump that any company that supplies the military must also be involved in corruption and unethical warmongering. Just because an economic incentive exists, doesn’t mean people will exploit it. That’s just as senseless as saying that I’m probably a drug dealer because there’s an economic incentive for me to be one. Aside from the complete lack of logic in this claim, it also doesn’t make sense, since the military gets a budget every year, regardless of whether there’s a war to fight.


A company has only one feedback point in the long run (people leave or die over that timeframe, so you can only count on the lowest common denominator there) and that is money. If that doesn't tell you something about a company's core motivation, then I'm afraid I can't make clear to you why counting on anything but economic incentives as motivation for a company's actions is naive at best.


Your argument is that every company _must_ be involved in corruption and law breaking, simply because there is profit to be found in doing so. This isn’t simply naive, it has no basis in reality whatsoever.


I have literally never encountered a large company that doesn't have good tax deals in some country or other. Good in the order of "pays basically no taxes". What do you want to call that? "Just good business"?


> Just because an economic incentive exists, doesn’t mean people will exploit it.

I think that this statement has been proven wrong countless times throughout history.


History has proven that every company in the world will always exploit every possible opportunity to make a profit no matter how corrupt or illegal it is? I don’t think history has proven that at all, and I don’t think you actually believe that.


War will always be evil, it's usually just about whether it is a necessary evil.


I think that’s a very oversimplified opinion. Depending on the alternative, war can certainly be the moral decision. I don’t think you’d find many who would describe the Union’s war on slavery as evil.


The Union didn't fight a war on slavery (in fact, it included slave states). The Confederacy fought for slavery, the Union fought for, well, union.


It wasn't a war on slavery though, as Lincoln only took the extraordinary step of issuing the Emancipation Proclamation late in the war.


That’s a semantic argument. The point is that any of us could come up with an example where the decision to go to war, either offensively or defensively, is clearly not evil.


Yes, it was. South Carolina was the first state to secede; South Carolina cited slavery in their declaration of secession; the war began with secession or immediately following and because of secession. The civil war was fought over slavery.


Was IBM's providing of tech to the Nazis to kill Jewish people more effectively neutral or intrinsically evil?

I try to avoid comparisons to the Nazis (even though many more people seem to be making them these days with Trump in power), but isn't Google basically providing the government with the technology to more effectively kill people in the Middle East, too? That's what it is at the end of the day, isn't it?

History will remember Google for this - especially when its technology will be used for some horrifying stuff in the future (and it will be). Right now we haven't even seen its true consequences yet, so it's easy for some to dismiss it as no big deal.


This isn’t a well reasoned position at all. You can’t summarily declare that all wars or actions taken by the military are morally equivalent to an ethnoreligious genocide.


Is there another use for these drones right now? Because all I've seen is pretty close to "ethnoreligious genocide". Many people straightforwardly advocate for it.

Can you really pretend that you're developing "neutral" weapons technology in this climate?


Appealing to mindless outrage is not a rational argument. Please list all of the genocides that the USA’s drone program has participated in.

If anything, it seems this google technology would reduce the number of civilian casualties.


[deleted]


> the world has been a safer place over the last 70 years because there's a global cop who's willing to use force to keep the peace.

No, the world has become a safer place for us because we live in nations with nuclear weapons, and they're a pretty good deterrent against attacks by other nations. The world is still a very unsafe place for many people, partly because of this self-proclaimed 'cop'. If you think the US has this military might for any reason other than ensuring its own political and economic wellbeing, I'd like to know what it is. It sure isn't to "keep the peace" - they're selling a bit too many weapons to questionable figures for that.


> No, the world has become a safer place for us

it's become safer for everyone:

https://assets.weforum.org/wp-content/uploads/2015/06/ourwor...

which obviously isn't to say that nobody dies in wars anymore.


No, your moral compass is off. The US has toppled a large number of sovereign states that were not acting violently toward other nations, and installed dictators and monarchs. The US supports Saudi Arabia, one of the most authoritarian nations on earth, who backed an actual attack on US soil, and yet they invade nations with false claims of WMD and kill hundreds of thousands. To claim that the US is the police and not the thugs is to completely ignore reality.


>To claim that the US is the police and not the thugs is to completely ignore reality.

Or maybe the US just had a problematic idea of what police should be doing, which explains a lot of our other problems.


The agenda is so transparent that it can't be mistaken for incompetence.


The US has a long tradition of tacitly supporting genocide, slaughter of civilians, collective punishment, torture and assassination or other extrajudicial murder of journalists and dissidents and labor leaders, mass surveillance, etc. etc. in places where “our dictator” happened to be in charge.

More directly, the US has directly toppled many democratically elected governments, undermined sovereign political processes via propaganda and support for anti-democratic political groups and targeted economic pressure, supported American and multinational corporations in their unethical and often criminal behavior around the world, given money and weapons and training to armed opponents of various democratic governments (who then used those for slaughtering civilians) ...

Not to mention being one of the world’s foremost producers and distributors of small arms (that end up in the hands of local gangsters, child soldiers, ...).

As a particularly tragic example, look to the case of Guatemala. The democratically elected president of Guatemala demanded that the United Fruit Company – who owned an incredibly high proportion of all arable land in the (starving) country – sell back some portion of the land which they were not currently using at the price it had been assessed to be worth for tax purposes. Unfortunately for Guatemala, the Dulles brothers (CIA director and Secretary of State), as well as various other members of the Eisenhower administration, had a direct business interest in United Fruit. The US toppled the government, installed a friendly dictator, and kicked off 2 generations of civil war in which a shockingly high proportion (5% or something?) of all Guatemalans were killed, including the mass-murder of many villages by thugs with machetes.

It is true that the presence of nuclear weapons has prevented for the past ~70 years the break-out of another “hot” war directly between nuclear-armed states. On the other hand, if someone slips up when nuclear weapons are involved, we’ll all be screwed.



I disagree that the world has been made safer by the US playing world police for the past 70 years. I could list examples from Central and South America, the Middle East, Africa, southeast Asia, etc. but you probably understand where I'm going.

The US should not be out droning people in other countries except in explicit self defense ("they might have been going to attack us" doesn't count). That is my opinion, you are free to disagree with it. If "the goal is to kill fewer people," the best way to do that is to... not kill people.

Also, the fact that representatives I voted for voted in favor of extrajudicial killings of people in other countries is not something I'm happy about. I wish every day that the US could abolish its first past the post election system so that I could vote for candidates I actually agree with rather than being forced to vote for someone I sort of agree with.


> because there's a global cop who's willing to use force to keep the peace

Are you talking about the UN? If not, when was the last time the US took the initiative on a peace-keeping mission?

Arguably it would be more than 25 years ago, in the Balkans and even there, it can be argued that this was under the auspices of the UN.

In the 70 years you mention, the US has often been the lesser evil, but especially since the USSR's fall, it is not clear at all that it is a force for good. Its last major engagement, in Iraq, was a totally evil and misguided operation that made the world less stable.

Its refusal to help in Syria before ISIS took hold there is another blow. The crass incompetence of the Bush administration on both Iran and North Korea, and the apathy of the Obama administration on NK, led to a less safe world.

US' recent interventions are sources of instability and now US' policies seem to be guided from Moscow and align more often with the views of dictatorships than that of democracies.

In such a context, wondering about the ethics of improving US army's capabilities makes a lot of sense.


> Are you talking about UN?

Clearly they're talking about Team America: World Police.


Your comment wobbles and stumbles into a large number of complicated and nuanced subject areas with the grace of a headless bird:

war vs. war profiteering

the notion of voting as a proxy for consent

anything is worth it

bad people in the world, and they aren't here. they're there.

kinetic energy? targets? condescendence?


If you're trying to make a point, it is entirely lost in the rambling.


Read The Dictator's Handbook. Then come back and tell me if I'm rambling.

https://www.amazon.com/Dictators-Handbook-Behavior-Almost-Po...


Now, I'll start by saying I haven't read the book. The description says all leaders are supremely focused on maintaining power. Given that, how can you possibly argue that the US killing anyone it considers a "bad person", without jurisprudence, is a moral act?


That highlights the cognitive dissonance in his post:

>There are a lot of bad people in the world, and quite a fraction of countries are ruled by them. Thugs respond to power.

The US is still a good country, with a good government, but it's ridiculous to assume it immune to the same threats of tyranny as every other nation.


Thanks for the link. I'm sure we all might find more coherence in the barcode on the back than was present within the entirety of your wandering treatise.


The rest of the world vehemently disagrees. Which should be more than enough to know it's not good.


Argumentum ad populum

Ignoring whether or not the world actually does, or how vehemently they might disagree, the rightness of a thing isn't measured at all by how many people think the thing is right.


The rest of the world actually doesn't disagree. Just look at what countries supported and participated in US conflicts. And there are countries who don't support openly, but behind the scenes with intelligence and logistics.


Because the US also happens to dominate the global economy for now. Soldiers from my country went to help the US in Iraq and Afghanistan too, and we all know why - because the country needs to suck up to the US to strengthen its position relative to its neighbors. Moral reasoning didn't even enter the picture.


What peace are you talking about? The one in Syria, which has led to mass migration?! The Middle East is not a safer place. Actually, battle-related deaths have risen in the last few years. https://ourworldindata.org/war-and-peace

Your superficial statements about ‚bad guys‘ and ‚thugs‘ that only respond to ‚power‘ led me to believe you‘ve watcht one to many movies. Destabilizing countries doesn‘t bring peace to the world.


Do you think anyone in a position of political power truly thinks of themselves as a "thug"? I imagine, in their own ways, they think they are doing a "just" and noble deed. Much like the way you characterize America's perpetual war machine, that has destabilized the world in direct ways that supposedly require us to go BACK and "steer them...in the direction our fellow citizens desire." See : Iran, ISIS, several Latin American countries, etc.


The corrupt global cop that is responsible for most of the harsh environments that spawn extremists?

Edit: adding stuff.

IMO, most anti-progressive views spawn from anti-USA views.

Isn't the US responsible for a lot of the instability in the middle-east and South America?

As I type this I have this vaguely plausible fear that someone in the NSA is flagging my general sentiment and incorrectly labeling me for something or other, even though I'm mostly a pacifist.


You are arguing beside the point. War != "war profiteering". You can believe war is unavoidable and still find excess profiting from war morally repugnant.


The company with the single best resources to execute on this mission is trying to help make sure the kinetic energy doesn't get off target. How is that evil, exactly?


It is evil when the target choice is immoral. Simple as that.


By putting it on the target, duh. If I shoot you I don't get hailed as a hero for saving your family's lives and neither does the gun sight manufacturer. Especially since they did it for the money.


Imagine actually believing this.


This is someone responding on Hacker News, too. Every time I go to the US on business I see a different world. Adult bible study groups talking in coffee shops about how they are praying that Trump will save America, and believe that he will, etc.

People can think what they want, I guess. It just has more of an effect on the world than it used to, so I feel a little saddened watching this new dark age come about.


> the world has been a safer place over the last 70 years because there's a global cop who's willing to use force to keep the peace.

for god's sake. there are really people thinking this way.


https://gizmodo.com/google-removes-nearly-all-mentions-of-do...

Google is allowing their employees to be evil now


that is why they removed it from Google’s Code of Conduct


It wasn’t removed. It was moved towards the end. You’re welcome to have an opinion on that if you want, but let’s be accurate and honest about it and stop spreading the incorrect claim it was removed.


Let's be accurate, then. It's disingenuous to simply say it was moved towards the end.

It was always at the end, but until approximately a month ago Google actually led its Code of Conduct with "don't be evil" in the very first sentence. That was removed. "Don't be evil" also appeared in the second paragraph; that was removed too.

Sources:

April 21, 2018:

https://web.archive.org/web/20180421105327/https://abc.xyz/i...

May 4, 2018:

https://web.archive.org/web/20180504211806/https://abc.xyz/i...


They removed that clause recently, I believe.


"Google Removes 'Don't Be Evil' Clause From Its Code of Conduct": https://gizmodo.com/google-removes-nearly-all-mentions-of-do...

"“Don’t be evil” has been part of the company’s corporate code of conduct since 2000. When Google was reorganized under a new parent company, Alphabet, in 2015, Alphabet assumed a slightly adjusted version of the motto, “do the right thing.” However, Google retained its original “don’t be evil” language until [late April or early May 2018]. [...] The updated version of Google’s code of conduct still retains one reference to the company’s unofficial motto—the final line of the document is still: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”"


That last sentence directly contradicts the title...


It doesn't. The code of conduct used to open with, and center around, the "don't be evil" mantra. Now it appears just one last time, mentioned in passing at the very end of a 3-page document, before they presumably eliminate it completely:

https://web.archive.org/web/20180421105327/https://abc.xyz/i...


Good thing they removed all mentions of that from their workers' code of conduct then, no? They really dodged a bullet there (get it?)

https://www.digitaltrends.com/computing/google-dont-be-evil/

Moral compass: out of sight, out of mind is probably what the leadership was thinking, especially after the recent protest against Maven from a few thousand employees (which seems to have resulted only in the leadership trying to appease them by promising some secret bullshit "ethics rules").


Except they didn't, it's still in the code of conduct ;)


It used to be a central piece. Now it barely gets a "Oh, and don't be evil guys, k?" mention in the very last line.


"removed all mention" is entirely false though.

And while I don't particularly care for this change, calling the concluding sentence of a document "a passing mention" feels forced. I would agree it's been deemphasized and should be mentioned more, but a concluding sentence is pretty central to a document. It's meant to sum up the entire document.


It isn't wrong of Google to work with their country's military. If not Google, some other company will take those contracts.

They need to own up to the responsibility for their decisions and, of course, be willing to put up with the media theme that Google is "the company that supplies AI tech to the military with possible usage in war". Denying that is what makes them look even fishier.

I already distrust the company for anything related to privacy. Just one more reason to hate them more.


Nothing wrong with IBM helping the Nazis count prisoners. If not them, some other company would have taken the contract

This absurdly cowardly thinking is how people justify atrocities for money


Treating companies as social personalities is just stupid. Google is a business, and it is there to make money (whilst not damaging itself so much that it goes out of existence). My point is that Google as a company (its leadership) could choose to work with the military, and even openly build weapons for a country, as long as that motive is public, and bear the hate from the public that they are a piece of shit company that whored out to the military.

But calling out a company on forums whilst other companies openly build weapons is just useless, in my opinion.


Essentially everything we do on forums is useless, I don’t know why calling a company immoral is any different.

All businesses exist to make money but if the pushback to a publicly perceived immoral act is great enough then they won’t do the immoral things. For example, I can say as a software developer that I would prefer not to work at google over my current employer because I don’t want to do immoral things. I do have a perceived moral outlook of the company’s leadership as a whole and that’s a decision that I can make. Not sure why you seem to think that’s stupid

Also, at the end of the day, nobody is absolved of guilt for doing something just because it made them money. That's possibly the dumbest way to justify something as good.


You're exactly right. Working with the military is exactly the same as helping the Nazis.


The world runs on violence. It's the fundamental force that keeps things in order. Modern society with all its rules and laws is nothing without enforcement, which is ultimately delivered via a monopoly license granted to the government with its military and police to keep everyone else safe and free.

Being the best at deploying violence is how you defend against the chaos elsewhere. This isn't about "evil corporations" or misguided intentions; it's about human nature and how there will always be someone somewhere willing to cause harm. The only way to defeat them is by being better at it, but with the appropriate judgement to make sure it's used in proper defense.

AI has already been a part of warfare for decades and the value of anything other than being the best is rapidly diminishing because of the exponential advantage that modern technology produces. You either stay ahead or lose the war before it's ever begun. Given that choice, I would much rather have the strongest military with all available resources and talent rather than worry and hope the rest of the world stays friendly, until the day it isn't.


I agree.

I believe the only reason American civilians like us have been able to enjoy peace is because we dropped a ton of bombs to blast tiny countries out of the sky and we also have a history of fighting wars not to win, but not to lose. If Americans are willing to sacrifice their people and resources for a tiny country in the middle of nowhere, imagine what they would do to you if you attacked them or their interests.

I was watching a documentary about the Schwarzman Scholars, and Schwarzman said something like out of the 4 or 5 times where a superpower was challenged by a new rising superpower, it led to armed conflict 3 times. I haven't been able to find an article to back up this claim so would appreciate if anyone could point out if it's blatantly wrong or if there's a source. Either way, there might be a new superpower.


> I believe the only reason American civilians like us have been able to enjoy peace is because we dropped a ton of bombs to blast tiny countries out of the sky

This is completely ludicrous. There is no military threat to the US beyond nukes. Nuclear weapons have obsoleted wars between great powers. Tiny countries are zero threat to the USA. They cannot invade; they can do almost nothing. At best, they can kill a few hundred individuals at a time by sponsoring terrorism, far fewer people than are killed by vehicles in an average June.

The reason the USA starts wars in far away countries is to maintain a sphere of influence and protect a philosophy of government and running economies that keeps the USA affluent via trade. It is in no way about defense of the USA; if you really think that, you've been deeply hoodwinked by some military warrior code bullshit. The USA's military program is foreign policy for, ultimately, economic reasons.


Thank you for putting this into words. It's sad to see how effective the US propaganda efforts have been in convincing otherwise intelligent people that the military is "defending our freedom".

I'd love to see how many millions (billions?) the federal government has pumped into this myth.

Hell, the entire America's Army initiative was literally a recruitment tool for kids who played video games. It's depressing to think about the number of kids who eventually paid a blood tax because a state-sponsored video game convinced them to join.


It's beyond naive to think that conflict doesn't exist or that tiny fluctuations cant kick-start global warfare. It is very easy to take it for granted when you have never experienced conflict yourself. Perhaps try talking to the families of those who paid in blood to give you the freedom to post these thoughts, and see exactly how many places like the US actually exist in the world.


I'd love to learn more about the history of American military and foreign policy if you have any recommendations.


You think war between major powers is obsolete? That's naive at best and completely ignores how volatile the world is. Tiny countries do not exist in a vacuum; there are layers of resources and allies and other political ties that can move entire regions into conflict. Also, yes, small countries are developing nukes themselves, and I fail to see why you just brush over that threat. Are nukes not real, or somehow guaranteed to never be used?

AI, which is the topic here, changes that dynamic completely, and with drones and cyber warfare there is no need for world-ending firepower when you can just shutdown leadership and modern society instead.

And yes, I want the country that I'm a citizen of to stay free, powerful, and wealthy. The point is that we are on a path to building wartime capabilities that will put one country far ahead of everyone else, and I absolutely want that to be my country. I would think you would want that for yours as well.


Almost completely true; bioweapons would actually be something certain tiny countries are really good at.

But many have actually signed treaties not to use them (I believe even not to develop certain kinds).

Actually, the more advanced our technologies get, the bigger the threats.


Any country found to have used bioweapons against the US would be in a similar position to one that used nuclear weapons.


> You either stay ahead or lose the war before it's ever begun.

We must be able to do better than that.

http://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Yes and no. It's true, we ought to find ways to work together that rely on concepts like mutually assured stability instead of MAD.

And there's evidence of altruism's superiority even on a biological level. There are some species of bacteria that when they run out of food will form a protective spore to wait for better times or form a stalk to move to a new location to find food. Those whose bodies form the base of these structures are effectively sacrificed, even if they aren't intelligent enough to be aware of the act.

That said, it's simply undeniable that, at the end of the day, force is the intrinsic and most rudimentary base of everything society has become.

Public parks, schools, social welfare programs, etc... All founded on a platform of violence and murder against anyone who challenges their authority.

To deny that is folly.


State monopoly for violence is itself a solution to a coordination problem.


I'm not aware of any other solution.


It's been a long time since I've read this, but I believe it is stating what I said in much more eloquent terms. I'm unsure what you mean by "do better than that"?


I mean that what you posed is a false dilemma: we must be able to solve our coordination problems or face extinction.


What is the false about that dilemma? That's exactly the situation. We may overcome it, but it's certainly possible and likely even probable that we'll just wipe ourselves out.


Bad phrasing on my part. I meant that the alternatives you presented—escalate an endless arms race or perish—cannot be the only ones. The point Meditations on Moloch ultimately makes is that we must be able to do better than the eternal race to the bottom that Moloch represents.


Maybe I'm wrong, but based on my experience, if you are a company that depends on end customers, then you need a good reputation and a very strong, positive brand.

Unless, that is, your clients are totally dependent on your product, with absolutely zero alternative choice.

Most military suppliers don't depend on private end customers so they can be successful without worrying about public opinion.

I think Google is becoming more and more like companies such as Exxon Mobil or former giants like Marlboro. People absolutely need their products, so they can do whatever they want.


This is actually very interesting, being the first detailed info on this work that I have seen. First, the scale (dollars/FTEs) of the work (and proposed work) seems larger than in some earlier reporting. Bizdev people will have incentive to boost projections, but even the current numbers seem higher.

Second, the little fact that “...On October 27, 2017, a team from Google Cloud visited Beale Air Force Base...”. This indicates they were not just doing basic research, but they were trying to (eventually) field something operationally — not just develop basic technology and hand it off elsewhere for maturation. Also, the reciprocal visit of the uniformed general [2]. The presence of a general and entourage, probably also in uniform, makes a strong impression on civilian computer technologists.

In the DoD world, there is a big difference between basic research (TRL 1-3, say [1]) versus TRL 4-5 (component demos) versus TRL 6-8 (operational tests). Basic research might be funded by DARPA and is pretty abstract, typically publishable in the open literature.

As you work up toward operations, you start to deal directly with uniformed “warfighters” and realistic test environments. It’s a very different set of expectations. I think I understand better the reactions of many Googlers to the work described.

(Note: not passing personal judgement - clearly I’ve spent some time in this milieu — just observing that this work and its trappings are not what many Googlers I have known would go for.)

[1] https://en.m.wikipedia.org/wiki/Technology_readiness_level

[2] http://www.af.mil/About-Us/Biographies/Display/Article/10883...


That's exactly what I thought when the point was brought up that the contract between Google and the DoD "was only for $9,000,000."

That's called getting your foot in the door.


The military is going to deploy deep learning on their drones, with or without Google's help.

I'd rather have more people working on the problem than fewer. More scrutiny should, at least, improve the work and reduce errors in the models (and, hopefully, that leads to fewer casualties)


Now this world is starting to feel more and more like a Gibson novel - while we had been promised an Asimov one. I feel cheated somehow.


Why do US citizens hate their army so much? All I know is that if Google announced it was working with our army on some AI projects, the response would be overwhelmingly positive, and people would praise Google for it.

(I'm from Israel if you're wondering)

And I think the US is in the minority in that its citizens despise their own government and army this much. Which really makes you wonder what the point of democracy is, if you trust your own elected officials less than Russians trust their corrupt monarch.


As a fellow Israeli, I believe Americans don't feel their military is the only barrier between them and an impending doom as Israelis do. For some Americans, their military is just an oppression tool fighting colored people halfway around the world.

Other than that, there's a distrust towards authorities which US was founded upon, which I consider healthy and beneficial. As much as it lends itself to conspiracy theories and a drive to destroy the system, it also provides checks and balances against the world's strongest superpower becoming a tyranny.


Do US citizens hate their army so much? The typical international stereotype of Americans is the opposite, especially with high respect for soldiers and veterans. Although I guess that's slightly different: respect for the individuals, but less respect for the overall organization that has them fighting "useless" wars. Little of its fighting addresses direct threats to the US mainland.

The US still has a culture around respecting the military that's quite foreign to many other countries.


I think your perception of US as being anti-military is mostly wrong. The vast majority of Americans love their military. The US wouldn't have doubled its military budget in the past decade if most people didn't support it enthusiastically. They wouldn't stand and cheer members of the military at most sporting events, or invite them to board civilian airplanes before anyone else, including the rich or disabled.

Americans often offer multiple dissenting opinions because our Constitution invites us to think independently and speak critically. This is a product of our being a breakaway republic separated by revolution which then defined itself and its government just as monarchies became obsolete.

Dissent and debate is in our blood.


Well, why do Israelis love their army so much? I think I know the answer, but be aware that Israel is a very special case for so many reasons.

Edit: and the contrary is true: Americans love their army, and they are all about that "thank you for your service" rhetoric, so perhaps Israel just tops that.


Most Israelis serve in the army, which gives them an inside perspective on what the army is, how it functions, and its goals. Americans see it only from the mass media perspective, which creates the same feelings in people as when non-Israelis see the news on the Gaza fighting: very superficial and inducing very strong emotions. IMHO.


Indeed, that could be an explanation.

Also, Switzerland has conscription / a compulsory draft, and among a lot of Swiss, the army is seen with sympathy.

That said, I have met a couple of Israelis who left the country for good because their experience with the army was so bad. I guess there is some "voting with their feet" going on.


>very superficial and inducing very strong emotions. IMHO.

I think spending part of your young adult life participating in the actions and socialisation of an institution, and then living in a society where everyone around you shares that experience, creates far stronger emotional influence on people's judgements than seeing some people you don't know being shot in a far away country.


Because most people here are rather left-wing, and I feel that the majority of lefties hold the US military establishment (rightfully, IMHO) in very low regard, due to recent activities and engagements — e.g. the wars in Iraq and Afghanistan, the bombing of Libya and Syria, and so on.

Not to imply this is only a position of the political left; it's my experience, however, that left-wing folk, including myself, are not exactly overflowing with praise for the US military...


Some US citizens absolutely hate the US military.

Other citizens respect and value it.

Unfortunately, many of the former group are also dedicated liberals, and many of the latter group are also dedicated conservatives. Given the state of the culture war in the US today, it's difficult to determine how much of any individual's opinion on Project Maven is driven by their opinions as an individual, and how much is driven by their allegiance to their side in the culture war.

It doesn't help that the political right has made veneration of military service one of their core principles. The left's claim is that veneration of military service is used as a psychological wedge to lever support for military action as well, and they are probably right (for some fraction of the population).


Gee, if only Google was part of a larger umbrella organization they could have just transferred the project to a separate company and been done with it.


If none of us buy that nonsense, why would Google employees? Alphabet = Google. We all know that.


There must be some hard data looking back over the last 40+ years. Were research institutions that divested from defense funding during the Vietnam era adversely impacted? Perhaps in the short term, but over longer timescales there has been more than adequate consumer and private-sector demand to compensate.

Politically, the company could face a backlash in the form of antitrust scrutiny over its search monopoly, which Alphabet seems to be anticipating as a historical inevitability, perhaps even with the foresight that breaking up large trusts is great for startup innovation ;)

But it raises the general question: if given the option of taking money to do basic science, regardless of the funding entity's motives, do you do it anyway? An argument can be made to always take the money and do the science. You don't know what the world will look like in 10 years. Drone AI used for surgical strikes today will probably be transferred to civilian AgTech and planting seeds tomorrow. And there is always the possibility of a breakthrough.

Among the many great examples are the Human Genome Project, the Manhattan Project, and the DARPA Grand Challenge. Terrestrial plutonium enrichment may turn out to be the key to deep space exploration, or to space-based defense. Early DOE-funded genetics research was specifically interested in answering the question: can we survive radiation-induced mutations from the fallout of a nuclear war? And less than two decades after a Humvee from CMU completed 8 miles on its own in the California desert, we will see fleets of semi-autonomous vehicles rolling out in major cities.


Apparently Google has decided NOT to pursue this line of work after all:

https://gizmodo.com/google-plans-not-to-renew-its-contract-f...


Probably this is the right time to introduce an oath, something like this one:

https://github.com/Widdershin/programmers-oath


For people who still don't see any problem with this, I have to remind you that ultimately the guy at the top of the military hierarchy, in charge of selecting the AI drones' targets, is... Donald Trump. If you're not scared now, I don't know what to do anymore.

At some point, I thought the main goal of these big startups, with their tons of claims and wishful thinking, was to make the world a better place and help humanity become a better version of itself. Instead, they now create tools and weapons to "reduce collateral damage" in government-induced wars (which have solved nothing in the Middle East over the last 20 years, only increased the cash flow into Western pockets and caused tons of deaths, both there and in Western countries), and they have access to information about most citizens at the same time. OK, cool.

In 10 years we will talk about whistleblowers killed by drones using AI as well.


> I have to remind you that ultimately the guy at the top of the military hierarchy, in charge of selecting the AI drones' targets, is... Donald Trump

How can we be sure that this is so?

- Can Trump make military decisions by himself (i.e. not choose among the ones that are offered to him by the military)?

- Hasn't it been established recently that the military have the right to refuse his orders if they are particularly crazy (see e.g. https://www.independent.co.uk/news/world/americas/donald-tru...)?


I wouldn't blame Google for this. Its sole purpose is to make money, so they make money wherever they can.

People voted for a government that bombs other people, that's a consequence.


People love to talk about Huawei's links to the People's Liberation Army, but when Google starts working with the US military, it's somehow different. Hmm...


China doesn’t have the same (recent) history of ‘military adventurism’.


Google developed a far left employee culture and then expected their employees to help the military.

They don't seem to understand that those two things are mostly incompatible.


If Google refused to work with our military, Microsoft's subpar AI and ethics would power the drones and the project would move ahead.


Having to deal with UI developed by Microsoft is the only thing that could ever make me feel sorry for drone operators.


"I Have No Mouth, and I Must Scream" by Harlan Ellison

> The story takes place 109 years after the complete destruction of human civilization. The Cold War had escalated into a world war, fought mainly between China, Russia, and the United States. As the war progressed, the three warring nations each created a super-computer (with AI) capable of running the war more efficiently than humans.

> The machines are each referred to as "AM", which originally stood for "Allied Mastercomputer", and then was later called "Adaptive Manipulator". Finally, "AM" stands for "Aggressive Menace". One day, one of the three computers becomes self-aware, and promptly absorbs the other two, thus taking control of the entire war. It carries out campaigns of mass genocide, killing off all but four men and one woman.

> The survivors live together underground in an endless complex, the only habitable place left. The master computer harbors an immeasurable hatred for the group and spends every available moment torturing them. AM has not only managed to keep the humans from taking their own lives, but has made them virtually immortal.

https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...


This is what T800-101 really looks like https://en.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper



It sucks that companies have this tendency to become nasty businesses.


Norbert Wiener


Do no evil*

*conditions apply


To any Google Employees reading this. Don't worry! There are SEVERAL ways you can reduce any cognitive dissonance you might be experiencing as you read more about the evils your company is committing :

+ rationalization of your own behavior : "I'm not doing this work...I'm working on projects that help people. And ads. But ads are pretty neutral compared to bombing people."

+ rationalization of others' behavior : "Bombing people isn't evil, since it's helping the military kill MORE _bad guys_ and LESS good guys. And gals. and children."

+ change your thoughts about yourself : "Maybe I'm not as good of a person as I thought I was..." [note : this one might be hardest to do!]

+ change your own behavior : Quit Google, or even better organize your fellow workers into some form of collective action (strike!) to remake the company into something that better fits your values and ethics.


Or accept the reality that we live in a world that feels the need to kill people. I think it would be swell to not need a military but the reality is that we do. If you have qualms about working for Google as a defense contractor, consider the implications of living in the US (or in all likelihood wherever you happen to reside). Can you justify paying your taxes? Can you justify voting for one of a handful of candidates for federal government, all of whom are likely to support some form of military action or other?

If a hostile foreign power was to invade your country, what would you want the response to be? To what ends and what efforts should we support the military?

There are many ethically questionable choices we take for granted, especially those of us that live in the West. Everything we consume has a taint of human exploitation, environmental damage, contributes to oppressive regimes that murder people, etc.

Where does it stop?

On the other hand, it sounds like the work Google was doing here was to improve the relative safety of autonomous drone strikes. Oh, but if these strikes remained risky and unreliable, we wouldn’t do them! Believe me, there are a dozen other large defense contractors out there that will implement this if Google doesn’t, so it’s not like it’s a binary proposition.

My point is, working for Google is not the right target on which to pin arguments for or against the ethics of autonomous drone strikes.


You need a military in order to prevent other people from bombing your country. You absolutely do not need to bomb other people, and in fact, if your goal is to not get bombed, not bombing other people might be your best option for a bunch of reasons[1]. If your goal is to systematically rob your population of wealth by dumping insane amounts of money into an otherwise mostly useless industry (you don't really need 12 carriers to defend your country, for example; they are mostly useless for defense), bombing other people is a good option.

[1] not making additional enemies, not setting the precedent that it's ok to bomb people, not driving away allies, not undermining your narrative that you're a good guy and therefore attacking you is unjustifiable, etc. These are just off the top of my head.


> You need a military in order to prevent other people from bombing your country. You absolutely do not need to bomb other people, and in fact if your goal is to not get bombed, not bombing other people might be your best option for a bunch of reasons.

And sometimes the best way to save the lives of your people is to bomb other people.

That's very naive. Not all antagonists are rational state actors. Terrorist orgs like Al Qaeda or ISIS aren't going to stop trying to kill you if you ignore them. The only solution is direct action, and the only question becomes who's going to do it, i.e. the USA or a local proxy.


ISIS is a direct consequence of US interventions.

The Post-WWII history of the middle east mostly follows the pattern “US declares $X evil. In its fight against $X, US supports $lesserEvil with money and weapons. $Y years later $lesserEvil has used funding and weapons to grow and become $greaterEvil. US declares $W as $lesserEvil and funds them. Repeat.“

Short term that might seem smart, but do it long enough and you have a region full of well-armed splinter groups, at least half of which hate you.

The USSR/Russia isn't blameless either, but the most notable positive long-term effect of US actions in the Middle East seems to be more jobs in the military-industrial complex.


The pattern you're describing is unfortunately a pretty accurate reflection of the post WWII history of the Middle East. However, consider how these mistakes happened.

First there were the Nazis. They invaded half of Europe while the US remained neutral until it was attacked by Japan in 1941. After the US and its allies won the war, the overwhelming majority of Germans celebrated their liberation and went on to become a prosperous ally of the US. Great success.

Then there was the cold war, which was essentially an attempt to prevent the Soviets from achieving their avowed goal of world domination. Again, the US and its allies won that conflict and the overwhelming majority of Eastern Europeans are grateful for it. (Let's not get into a debate about the merits of Socialism here. Soviet-style totalitarianism and Stalinism isn't what most on the left want nowadays. At least my Marxist friends don't.)

So there were two very important historical episodes during which a refusal of the US to do anything more than defend the homeland would have been catastrophic. Unfortunately, this has led to a doctrine of uncritical and imprudent interventionism and to a refusal to learn from the many mistakes that were made during the cold war, most importantly in the Middle East (but also in Latin America).

My point is, even though your criticism of failed interventionist policies is correct, it means little without also discussing the crucially important successes of global US military power.


Not just the Middle East. You could say the USSR itself was in fact partially an example of this pattern. The US donated enormous quantities of resources to the Soviet Union during WW2 to beat the Nazis, including significant railway infrastructure; the Cold War began virtually the moment both US and Soviet troops were in Berlin. While it is true that most Soviet weaponry was home-grown, arguably they would not have won without US aid.

In all the major conflicts after that - Vietnam, Korea, Middle East - the US was supporting opponents of the Soviet Union.


>And sometimes the best way to save lives of your people is to bomb other people.

Sometimes, yes.

But those "preventative" bombings in recent decades are the very reason those people want to kill those in other countries. They directly say that they're attacking the west as retaliation, but the vast majority of people plug their ears, dismiss them as absolutely crazy and irrational, and say we'll never understand their motives. Maybe if a few weddings weren't bombed there'd be a few less terrorist attacks around the world.

Terrorist orgs like Al Qaeda and ISIS are relatively recent organizations. They didn't evolve in a bubble. They evolved in a place struggling with the damage of foreign intervention. To many people there, it's irrational not to support the group that opposes what seems to be (and, looked at objectively, is) nonsensical intervention between two outside forces. The shit-stirring there started as an attempt to push out Soviet influence. The mess in Syria continues because Russia wants a sphere of influence and the West can't have that.

The tendency of dismissing enemies as irrational is the problem here. Groups don't grow so large with a complete lack of rationality. They have just enough that they persuade a considerable number of people, but they eventually go off the rails.


Mostly, I think a 'terrorist' can be a rational person. If you know for a fact that the USA killed one or more people from your family; when you live in fear of clear skies as a male of military age; and when, to add insult to injury (if you're rich enough to know about it all), those responsible for your fear and loss say that _you_ are the bad guy. Well, I can very clearly understand the hatred some people feel towards the West/USA/NATO.


You should perhaps read up on what led to the creation of these terrorist organizations. The BBC's 'HyperNormalisation' would be a good start.


The single easiest way to create terrorists is to invade another country. Terrorism is a law enforcement problem, it is not a military problem.


> And sometimes the best way to save lives of your people is to bomb other people.

Hello, it's 2030 calling. Yeah, that recent nasty terror attack? Carried out by people born between 2005 and 2012. They grew up in the wartorn cities of Baghdad, Kabul, in the mountains of Afghanistan, in the little towns hit with little "pinpricks" of drone strikes for the past 20 years because the big brains in the USA thought that "BOMBING THEM FIRST" was the best plan. Those young kids who carried out that attack have known nothing but war, while you have known nothing but peace and have the gall to suggest that they and their neighbors should be subject to constant, round-the-clock, death-from-the-sky TERROR. They've known people who got hit by drones. They've known people who were collateral. They've seen the craters. Arms and legs blown off. Brains. Dead bodies. Charred remains. From machines in the sky, operated by people sitting in airconditioned offices half a world away. You don't think they know that? That drone operators sit in airconditioned offices and blow people up? War is fucking hell. And that's been their reality. And our reality has been playing video games. They fucking hate us. God damn, how could they not?

Welcome to the endless cycle of violence your myopia brings.

If you think blowing up random people who seem important is the best way to defeat ISIS and Al Qaeda, then it's going to fail. Al Qaeda and ISIS need to be defeated and expunged by and from CIVILIZATION--their networks dismantled and defunded by detective work, cooperation with local governments, the rule of law, and lots of arrests. They are going to be defeated when the local population is sick of their shit and doesn't believe what they believe. They are going to be defeated when being part of Al Qaeda is a poor alternative in life, because life offers you so much better alternatives, and there ain't shit to be that damn mad about.

There won't be Al Qaeda when people chill the fuck out, on all sides. Instead, everyone has absolutely lost their minds; they are incapable of seeing threats at their true scale. But no, some kind of insanity has taken over the thinking in the military and intelligence agencies where every threat is clear and present, and every single terrorist anywhere is just five days from hitting NYC with a nuclear bomb....


> Those young kids who carried out that attack have known nothing but war, while you have known nothing but peace and have the gall to suggest that they and their neighbors should be subject to constant, round-the-clock, death-from-the-sky TERROR. They've known people who got hit by drones. They've known people who were collateral. They've seen the craters. Arms and legs blown off. Brains. Dead bodies. Charred remains. From machines in the sky, operated by people sitting in airconditioned offices half a world away. You don't think they know that? That drone operators sit in airconditioned offices and blow people up? War is fucking hell. And that's been their reality. And our reality has been playing video games. They fucking hate us. God damn, how could they not?

There's a great quote by Machiavelli that explains this: "There can be no proper relation between one who is armed and one who is not. Nor it is reasonable to expect that one who is armed will voluntarily obey one who is not."

> Welcome to the endless cycle of violence your myopia brings.

Myopia is thinking the world is nothing but sunshine and lollipops. There are people in this world whose entire existence is based on the destruction of everyone else with a different way of life. ISIS falls into that camp, and anything short of utter destruction of them and anyone that gives aid or comfort to them will allow them to continue. A cancer like that needs to be fully excised to prevent it from returning.

> If you think the best way to defeat ISIS and Al Qaeda is by blowing up random people who seem important now is a good plan, then it's going to be a fail.

I never suggested that "blowing up random people" is a good idea. I was making the point that in the interests of saving the lives of your own people, it can make sense to bomb (and presumably kill) other people. It's not that there won't be any loss of life. It's that doing so will save lives on your side.

> Al Qaeda and ISIS need to be defeated and expunged by and from CIVILIZATION--their networks dismantled and defunded by detective work, cooperation with local governments, the rule of law, and lots of arrests. They are going to be defeated when the local population is sick of their shit and doesn't believe what they believe. They are going to be defeated when being part of Al Qaeda is a poor alternative in life, because life offers you so much better alternatives, and their ain't shit to be that damn mad about.

Sure, it's not bombs alone that will solve the problem of Al Qaeda and ISIS. Nor did I suggest that they would. But putting pressure on the locals by splitting the world into "with us or with them" camps isn't that crazy either.

> There won't be Al Qaeda when people chill the fuck out, on all sides.

I can assure you that nobody in either Al Qaeda or ISIS will ever "chill the fuck out". Given the opportunity to maim, hurt, or kill anyone in Western society, or more generally anyone that's not their specific sub-sect of Sunni Islam, they will seize on it.


I think you're the naive one. How do you think ISIS got started?


You do realize that the actions of the US ended up creating both Al Qaeda and ISIS, right ?


> you don't really need 12 carriers to defend your country for example, they are mostly useless for defense

There is also the deterrence theory.


Yes, the arguments you outline would also be ways to reduce cognitive dissonance as well ("if we didn't do it, others would").

I don't think Google would stop all wars by not contributing to this project. But Google, and the employees who work at Google, has power - a kind of power that's very, very different from my power as an individual when paying taxes or when voting for a federal employee (behaviors that create dissonance for me because they contribute to the war machine). I think that power can, and should, be used to prevent wars - not to make them "cleaner". The small attempt at organizing protests made by some employees at the company was a good step, but the only one I have heard of (maybe there have been others?) and I want the people who have the most access to power to start thinking about what they are doing and trying to make more positive changes.

Consume less. Be more mindful of environmental damage. Speak out against the oppressive regimes that help murder people.


I basically agree with the last points. As to the drone strikes, I believe there is potential there for decent things. Who is to say these would not grant the power to stop a war by executing key individuals in a scenario where a manned strike would be prohibitive? And yet there is a clear moral hazard of facilitating murder. I’m not wise enough to see where the balance lies. I choose not to work on projects like that but I have trouble condemning some of these things out of hand.


> I’m not wise enough to see where the balance lies.

Oh, there is actually a very clear-cut line for this; it's concisely described by the following headline:

"If a Drone Strike Hit an American Wedding, We'd Ground Our Fleet"[⁰]

[⁰] https://www.theatlantic.com/politics/archive/2013/12/if-a-dr...


I don't think it's that simple. Weddings may constitute a collection of individuals key to a war effort, and in striking a wedding one might prevent the devastation of a prolonged conflict. Also, Middle Eastern weddings can occasionally be misinterpreted as militant, given the propensity for celebratory arms fire.

If a drone strike hit an American wedding they would ground their _home_ fleet. It's a key difference. The information for an electorate of a democratic state to make more empathetic choices is available; it's just a question of whether:

* The electorate are wilfully ignorant of foreign suffering (as it's torturous to read)

* The electorate are kept ignorant, intentionally or unintentionally, as a product of the conflict for eyeballs between media institutions.

* The electorate are not able to make impactful choices due to the type of democracy they have.

I'd argue it's a bit of every one of those things.

I prefer to measure things in outcomes, and I would posit that the major difference between supplying or not supplying arms to a conflict is merely the level of guilt at the origin of supply. People are still going to die, be it by drone, missile, mortar, AK-47 or spear. We could argue that arms sales can enable conflict by giving one side an apparent advantage that they choose to capitalise on, but that's a messy problem to investigate. Surely the wish to capitalise means that someone has already drawn the lines and is considering hostilities. What makes a war? The lines being drawn, or the weapons, or a combination of many factors multiplied over time and key events or decisions made by actors on either side? One may decide to war without advantage; one may choose not to war with advantage. So I don't think it's as clear-cut as weapon sales == war.


>> I don't think its that simple. Weddings may constitute a collection of individuals key to a war effort and in striking a wedding one might prevent the devastation of a prolonged conflict.

Same for a hospital (patching up wounded enemy combatants) or a school (educating future insurgents).

Or a nursery, football stadium, shopping mall, etc etc...


Don't be so flippant. That's not at all what I was saying.


I'm not being flippant. You commented that there might be legitimate reasons to bomb a wedding. Well, there are similar reasons to bomb hospitals etc, yet we don't accept that this should be done.

If I misunderstood your original comment, please clarify.


You suggested bombing schools to "prevent future generations of insurgents". You're advocating genocide.

You were being flippant, but now you're acting innocent about it, which betrays a fabricated ignorance that is further insulting.

Conflict and arms is a complex topic and simplifying it to "just don't kill people lol" is crass at best. Either engage with the arguments or go to a school about to be bombed in this perceived reality you've fabricated where I represent your perfect demon to squabble with.

If that wasn't clear enough: I am not fucking advocating genocide.


You said there may be merit to bombing weddings. Why is it any different from bombing hospitals, nurseries, schoolyards etc.? Those are all civilian targets.

Edit: In the spirit of reconciliation, I would like to explain that I didn't accuse you of anything, much less advocating genocide (I don't even think that bombing a school is genocide; infanticide, at scale, maybe- but genocide? I don't see that).

I felt you were trying to do something extremely risky: to justify killing Y civilians by the number X of combatants that would also be killed. I don't think Y > 0 can ever be morally justified, and I don't think that a targeted strike against any number of enemy combatants in a civilian area is, either, no matter the tactical outcome. Dropping bombs on the heads of people going about peaceful activities is just wrong, any way you cut it. Hence my comment- we don't accept the bombing of hospitals, regardless of their tactical value as targets, so why would we accept bombing weddings to kill some terrorists?

It's clear you find the idea of bombing hospitals or schools revolting, yet you're OK with bombing a wedding? I don't understand that.


If it's a wedding where a large number of key figures will be present, then it's arguable.

Somehow I see that as a far more rational outcome than a meeting of key figures in a nursery or a schoolyard, unless you're going to war against babies or schoolchildren, which suggests you have bigger problems.

Introducing the school/nursery begs the question though:

> If someone surrounds themselves with children 24/7 but commands an army or commits otherwise potentially savage deeds, are they immune in your eyes from a direct attack?

The blurring of combatants and non-combatants happens often in practice, especially as the scales tilt strongly toward one actor. It's easy to argue your point when the scales are even and the fight is between completely professional, non-civilian armies, but when those lines are blurred your position makes one completely incapable of action.

For example, if an army brought its soldiers' families onto the battlefield as civilians, how would you respond? Would you consider the non-combatants civilian? Would you authorise the use of air power against enemy positions? If a standing army has no barracks and operates from their respective family homes, are their families still effectively civilians, or just non-combatants and as murderable as an army technician or medic?


>> It's easy to argue your point when the scales are even and the fight is between completely professional, non-civilian armies, but when those lines are blurred your position makes one completely incapable of action.

I don't disagree. The current situation is that we're trying to find a balance between the perceived necessity of warfare and the undeniable horror of its consequences for civilian populations. It's absurd to think that we can nicely and cleanly separate war against a nation's army from war against that nation's people.

The obvious solution would be to not accept the necessity of warfare anymore. But I guess, we're very far from that.


> The obvious solution would be to not accept the necessity of warfare anymore. But I guess, we're very far from that.

One could argue that conflict is a consequence of human behaviour so I don't see that ever being a likely outcome. Nice to see we can somewhat agree. I just feel that coming down hard on either side is missing out on discussing the detail.

I don't see the conflict in Yemen as just but at the same time I feel like countries should have the freedom to declare war on neighbouring countries, especially given an uprising.

Sure, between the lines the Saudi Kingdom is trying to retain its puppet government but this is what we get from having allies that we don't see eye-to-eye with.


Historically, the primary tactic of organised armies was to attack the enemy's infrastructure, including their civilian base. For instance, in ancient times, military campaigns typically ended with the winning side pillaging and razing the defeated enemy's city and taking its civilian population as slaves (the ones who were docile enough- the rest were put to the knife). That was the case for the more merciful campaigners. Others simply slaughtered everyone and burned everything (e.g. the Mongols).

In recent years, we have this notion of battlefield morality, where armies are supposed to fight each other and an invading force is expected to occupy the enemy's territory _without_ proactively harming its civilians. Whether this is actually achieved in practice or not is a different matter (it is not- civilian populations are still the ones that bear the brunt of any invasion).

The thing is, modern morality leans very strongly away from attacking civilians with militaries. Again, whether this is at all possible in practice is a completely different issue, but what is certainly not equivocal, or "arguable" is that civilians must not be attacked deliberately.

You will notice for instance that even when the US bombs weddings or other civilian targets (it's usually the US) they never, ever, under any circumstances try to justify the killings. Instead, they explain the civilian deaths as a mistake and try to explain the mistake (the wedding participants were firing their AK-47s into the air as per custom, the bride's gown looked like an Islamic oriflamme, the groom had a suspiciously long beard, or something equally absurd).

In other words, you might consider that bombing a wedding to kill an enemy lieutenant visiting it is "arguable"- but you are very certainly in a minority.

And like I say, that's some very dangerous arithmetic. We don't accept that innocent people should suffer in order to harm a legitimate target. You seem to be arguing that it's just a matter of how many innocents and how many legitimate targets we're talking about. There can clearly be no definitive answer, which opens the gates for a lot of what we currently recognise as war crimes.

Edit: removed a bit about genocide, to avoid further misunderstandings. Apologies.


You were suggesting killing civilians in order to maybe get a 'bad guy', which amounts to a war crime.


And when you bomb a military base you are killing non-combatants: cooks, mechanics, administrators. A completely unassociated postie or truck driver might be delivering a parcel or supplies to the base.

These do not constitute war crimes but occupy this nebulous grey area named "collateral damage" at best and "war crimes" at worst.

In this case we are assuming that several if not many enemy commanders are attending a wedding, and the question is whether the loss of lives at the wedding is "worth" the loss of lives through the protracted war that would arguably be the outcome if the enemy commanders continued operations.

I do not claim to have the answers but to merely state that this is not the solved mathematical and ethical conversation that some people assume it to be.


Well, consider where these drones would most likely be used. A lot of recent bombing campaigns are taking place in Yemen, a country that has not attacked the U.S., and yet the U.S. is assisting Saudi Arabia in conducting strikes there, even ones where they know the majority killed would be civilians, including through the use of double-taps.

I'd argue that when you're the aggressor and are assisting a country that is one of the biggest human rights abusers on the planet, you do not get to claim moral superiority or any sort of gray area, you're just wrong and a war criminal, that's why I used the terminology.

I don't think the U.S. is in any position to judge how much someone's life is "worth", especially when not acting in self-defense, as in the case of Yemen, (or any recent U.S. intervention for that matter).

I can sort of see your point in the more general case, but in this case, where this drone would be used by the U.S. for illegal invasions, I can't.


Yeah, I think we have a fundamental difference in our beliefs about the effectiveness of violence (i.e. assassination via drones) to promote world peace. It's hard to know what wars have been prevented via assassination, but you might look into all the evidence of violence continuing or escalating after assassinations (how many "heads" of ISIS have been killed in drone strikes? and check out all the US activity in Latin America...). But that discussion goes beyond the graveyard of a Hacker News comments section. In any case, thanks for the thoughtful responses.



> Or accept the reality that we live in a world that feels the need to kill people.

We dream day and night about inventing self-driving cars and about reaching Mars (which are both nice things but without which I can easily live), but we've given up on trying to stop killing each other, and worst of all, we regard it as futile. It hasn't always been like that; for example, just look at the Catholic Church's push to bring back peace in the war-torn world of the early Middle Ages (https://en.wikipedia.org/wiki/Catholic_peace_traditions#The_...). For example, just look at one of Thomas Aquinas's arguments regarding this:

> Peace must be a central motive even in the midst of violence

Now, look at Google, an almost $1 trillion company which is one of the most successful business entities on this planet and tell me if you can say that "peace is a central motive" for them in the midst of the quest for profit. It isn't.


Thank you. This reminds me of Maria Montessori's words: If one has grown up with a veneration for humanity, one will not consent to become an unconscious, destructive force to destroy humanity. Men will not lend themselves to those erroneous ways which foolishly destroy the creators and maintainers of everything that provides for their existence. They will be unwilling to use the supernatural and universal powers which they possess for a cosmic cataclysm to destroy the fruits of civilisation. Having developed a conscience and sentiment towards human life, they will be incapable of cruelty; for cruelty belongs to a dead soul.


> Where does it stop?

Wherever you want it to. This all-or-nothing approach to morality is stupid. Just because not every decision in your life may be 100% morally justifiable, that does not mean “throw all ethics over board”.


Thank you.


I think as an individual we can decline working on things we consider immoral or evil even if they are legal.

As an individual you may not want your name to be associated with evil (subjective), and you may want peace of mind. If my boss asked me to do something I consider evil but legal, I think I am strong enough not to do it, and I would even advocate for my position. I also think that I have the right to talk with my colleagues, ask what they think, and discuss what we can do.

The fact that it should be illegal to kill people remotely without a trial or a declaration of war is a different problem from employees opposing working on "evil" things.


You should go after targets over which you have some leverage.

As a Google engineer, you have the ability to impact this particular program in a way that you can’t impact other military programs.

Just as an American citizen you have more ability to impact American policy than you do Russian policy.

Make an impact where you have the power to make an impact.

I just quit my job because I was working on a DHS contract and it was getting harder and harder to ignore what they were doing to immigrants.

It’s small, but I don’t want to have to answer for what I was doing to my children.


>> Believe me, there are a dozen other large defense contractors out there that will implement this if Google doesn’t, so it’s not like it’s a binary proposition.

Or for example, if I don't mug this old lady and take her handbag someone else will.

Then again, if I don't do it and you don't do it and nobody else does it, then nobody will do it at all.

Same for AI weapons- if nobody makes them, there won't be any.


Nobody? The Chinese military industrial complex is certainly developing AI weapons and will sell them to almost any nation state around the world. They're only a few years behind on technology and catching up fast.


Google was the one that told everyone "Don't be evil". They are a perfectly ripe target for attacks on "ethics".


>> If a hostile foreign power was to invade your country, what would you want the response to be?

This has nothing to do with the use of AI in weapons systems- like it has nothing to do with cluster munitions, landmines, biological or radiological weapons etc.

The US (or any country really) can "defend" itself and its overseas interests without having to resort to such tactics.


Sure. But then, if you live in the "west", please realize the cost of turning on your light bulb or driving your car or wearing your shoes, or turning on the AC, or buying frozen meat. Realize the cost our lifestyle imposes on people elsewhere. The regimes that are protected and the lives we ruin in exchange for us being able to live the lifestyle that we have.

Please don't make me say what I didn't. I'm just asking, as a non-Googler, am I much better than someone working at Google on some of their darker projects? Objectively, every time I do one of the things I listed above, I kill more people.

My point being, your 4 points don't only apply to google employees. Indeed, at the very least, I feel they also apply to me, a non google, chillin at my house with AC and cold water.

I pick your third bullet point. Because 1 would be lying to myself, I don't believe in 2, and I'm too weak to pick 4: I won't stop using my AC. I conclude I must be a much worse person than I thought I was.

On to watching netflix, I need to know what happens next in this one very important show.


I’ll just copy-paste a comment (not mine) from a different subthread: “This all-or-nothing approach to morality is stupid. Just because not every decision in your life may be 100% morally justifiable, that does not mean “throw all ethics over board”.”


[flagged]


I can somewhat see where you're going with this, but I cannot possibly see how it is a reaction to my comment.


Very insightful comment, I suppose you represent the modern, alternative right movement then?

This is a very big problem: you can't have labels like "the left" or "the right", because they're too general to represent any meaningful philosophy.

As an example, many people would argue Clinton is left. I'd argue she's on the center right of the spectrum, if not further and would certainly not refer to her as left-wing in general. I'm not making a judgment here as to which is correct, but it demonstrates pretty clearly that broad labels are not at all helpful.


I seek to represent those interested in a moderated and careful approach to moral dilemmas, where the restraints of ideological extremism and tribalism are left at the door.


I don't think this represents all or even most of the modern left. Just like on the right, the vocal minority seems to have taken the spotlight.


> Sure. But then, if you live in the "west", please realize the cost of turning on your light bulb or driving your car or wearing your shoes, or turning on the AC, or buying frozen meat. Realize the cost our lifestyle imposes on people elsewhere. The regimes that are protected and the lives we ruin in exchange for us being able to live the lifestyle that we have.

Ah, the old veiled blame-capitalism, blame-the-West argument. Except for the fact that capitalism has been nothing but a boon for the entire globe, even the non-Westerners, and that Western traditions and values (which are not necessarily coupled to capitalism) have been instrumental in introducing ideas and institutions that have freed a lot of people globally for the first time in all human history.

Capitalism overall has been nothing but a force of good for the world:

- The number of people living in extreme poverty worldwide declined by 80 percent from 1970 to 2006.

- Poverty worldwide included 94 percent of the world's population in 1820. In 2011, it was only 17 percent.

- Globally, those in the lower and middle income brackets saw increases in pay of 40 percent from 1988 to 2008.

- The world is 120 times better off today than in 1800 as a result of capitalism.

- Mortality rates for children under the age of five declined by 49 percent from 1990 to 2013.

Doesn't sound like it's ruining too many lives to me. Should all those people feel terrible, though? Do you think the non-Western Chinese sit around and pine about how terrible their people/culture is for turning on lights, wearing shoes, and enjoying climate-controlled spaces? No they don't, and they shouldn't, because it's a ridiculously naive point of view on the world. Instead they are proud of what they've accomplished. Meanwhile some in the West are sitting at Starbucks on their MacBooks or smartphones, whining about how terrible the "West" is while checking to see if their steady paycheck was direct-deposited before they Uber back to their furnished home or apartment, all without being arrested by secret police for their negative post upon arrival. Fancy that.


As opposed to that parallel universe without capitalism where no improvements happened?

You can't just point to improvements and say "look the system is good". If feudalism had remained the dominant system then we'd still have had some improvements in quality of life, and I'm sure the aristocracy of this hypothetical world would be pointing to that and saying "look feudalism has been nothing but a boon!" just as confidently as you.

The question is, could another system be better? And given that by pretty much any measure, we've had the resources to lift everyone out of poverty for years, and yet haven't used them to do that, I think it's not unreasonable to suggest that another way of allocating resources might be an improvement.

>Meanwhile some in the west are sitting at Starbucks on their mac books or smart phones whining about how terrible the "west" is while checking to see if their steady paycheck was direct deposited before they uber back to their furnished home or apartment, all without being arrested by secret police for their negative post upon arrival. Fancy that.

Meanwhile many people in the west can't afford most of these things, don't have a steady paycheck because they do shift work, live in a trailer, and are often harassed by the non-secret police for no reason. And they're the ones lucky enough to be born in developed economies.


If Google employees strike over this, we must find ways to show solidarity with them. An end to the war machine is within our power!


Alternatively: Military decisions (in the US) are already overseen and approved by political leaders and they can make judgements on a case-by-case basis, rather than blanket war=good/bad.

The US has not had a great track record here, even in my lifetime, but in a world where our adversaries are not going to have these same qualms, I think it is necessary for the US to build this technology, and if you accept that it is necessary for someone to build it, I think holding your nose and saying you want to be as far away from it as possible seems hypocritical.


The convenient part is that many Google employees are already experts at 1-3, since they've been working on mass public surveillance for years.


I choose ignorance. "Flying armed drones being trained to think. I can't see anything going wrong here."


Sounds a lot like Google is going to start killing people.


If a company is going to build drone AI for use by the military, I feel MUCH better knowing that company is Google and not some poor-quality contracting company.



