
Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:

>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence and careful reasoning, and personal values.

What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.





> Donating with a focus on helping the most people in the most effective way

It's not just about donating. Modern-day EA focuses on impactful jobs - working in research, policy, etc. - more than on donating money.

See for example: https://80000hours.org/2015/07/80000-hours-thinks-that-only-...

Instead, the definition of EA given on their own site is

> Effective altruism is the project of trying to find the best ways of helping others, and putting them into practice.

> Effective altruism breaks down into a philosophy that aims to identify the most effective ways of helping others, and a practical community of people who aim to use the results of that research to make the world better.


> I think the initial assumption that that definition is good and harmless is just wrong.

Why? The alternative is to donate to sexy causes that make you feel good:

- disaster relief, then forgotten once it's not in the news anymore

- school uniforms for children when they can't even do their homework because they can't afford lighting at home

- a literal team of full-time bodyguards for the last member of some species


>Why?

I already specified why.

>It's basically just spending money to change the world into what you want.

Change isn't necessarily good. I think we can all rattle off a ton of missions to change the world throughout human history that were very bad, and that didn't even have good intentions. On top of that, even in less extreme cases, people have competing conceptions of the good. Resolving that is always going to involve some messiness.


That's a strawman alternative.

The problem with "helping the most people in the most effective way" is that these two goals are often at odds with each other.

If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.

And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.
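
To make that concrete, here's a toy sketch of the single-scalar math I mean. Every number and name below is invented for illustration, not taken from any real charity evaluation:

    # Toy cost-effectiveness comparison. All numbers are made up.
    local_shelter = {
        "cost": 10_000,           # donation size in dollars
        "people_helped": 40,      # beds kept open through the winter
        "impact_per_person": 0.8, # big, visible effect on each person
    }
    bednet_program = {
        "cost": 10_000,
        "people_helped": 2_000,    # nets distributed
        "impact_per_person": 0.02, # tiny expected effect per recipient
    }

    def total_impact(option):
        # The EA-style move: collapse everything into one number.
        return option["people_helped"] * option["impact_per_person"]

    print(total_impact(local_shelter))   # 32.0
    print(total_impact(bednet_program))  # 40.0 -> "wins" on paper, even
                                         # though no single recipient could
                                         # ever notice your donation

On paper the global option wins, which is exactly the move I'm objecting to: the per-person effect has been flattened into a scalar that hides how unmeasurable it is.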


>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking, I don't think the real-world dynamics you mentioned map onto it. What's abstract in your examples is our grasp of what the impact means to the people being helped; it doesn't follow that the causes themselves amount to fractional changes spread across large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look it up to recall what UNICEF even does), but its work is still critically important to those who receive it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who get them.

So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that EA's stock recommendations are not that different from Carter Center or UN style initiatives.

The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map onto your criticism.


There are EA initiatives that focus on helping locally, such as Open Philanthropy Project's US initiatives and GiveDirectly's cash aid in the US. Overall they're not nearly as good in terms of raw impact as giving overseas, but they're still a lot more effective than your average run-of-the-mill charity.

> It's basically just spending money to change the world into what you want.

Oh, god forbid people try to change the world, especially when the change they want to see is fewer drowned children. Or eliminating malaria.


On one hand, it is an example of the total-order mentality that permeates society, and businesses in general: "there exists a single optimum". That is wrong on so many levels, especially with regard to charities. ETA: the real world has many optima, not a single optimum.

Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.

ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.

Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".
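
In order-theoretic terms: once you admit more than one value dimension, outcomes form a partial order, not a total one, so some options are simply incomparable and there are many maximal elements rather than a single maximum. A minimal sketch (dimensions and scores invented):

    # Pareto comparison over made-up outcome dimensions.
    options = {
        "malaria nets": {"lives_saved": 9, "suffering_reduced": 2, "community": 1},
        "hospice care": {"lives_saved": 0, "suffering_reduced": 9, "community": 7},
    }

    def dominates(a, b):
        # a dominates b if it's at least as good everywhere
        # and strictly better somewhere.
        return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

    nets, hospice = options["malaria nets"], options["hospice care"]
    print(dominates(nets, hospice), dominates(hospice, nets))
    # False False -> incomparable: both are maximal, neither is "the" optimum

Any claim that one of these is "the best" requires first collapsing the dimensions onto a single scale, and that collapse is a value judgment, not a measurement.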


I'm not sure where you found this idea - I don't know any EAs claiming there is a single best optimum for the world. In fact, even with regards to charities, there are a lot of different areas prioritized by EA and choosing which one to prefer is a matter of individual preference.

The real world has optima, and there's not a single best thing to do, but some charities are just obviously closer to being one of those optima than others. Donating to an art museum is probably not one of the optimal things for the world, for example.


It's a layer above even that: a way for people to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means, because "the entire world will somehow be a better place if I'm allowed to become Very Rich".

Anyone who has to call themselves altruistic simply isn't lol



