Thank you for the response and I'm sorry for the tone of my earlier comment.
> As your linked article notes, there is wide disagreement on a range of thorny philosophical issues across a range of cause areas, but the scare quotes/selective quoting makes it seem like these views are unquestioningly accepted by a plurality of people in the community.
There is disagreement within the EA community on ethics. However, almost all the disagreements are between different 'denominations' within the church of consequentialism -- questions like population ethics (total vs. person-affecting vs. negative), theories of well-being (hedonistic vs. preference), and distributive justice (utilitarian vs. prioritarian). The fundamental theory of consequentialism itself is taken for granted by most EAs. As my linked article says, "ultimately, only people who have a good majority of utilitarians in their moral parliaments are going to be able to get on-board with EA." I think this is true to some extent. While there are non-consequentialist EAs, it's hard to deny that the culture of EA is extremely consequentialist.
Even though I like the abstract idea of effective altruism, my value disagreements make me hesitant to trust certain EA organizations. I'm personally a deontological vegan and very concerned about animals, but if I donate to ACE (see section 7 of https://medium.com/@harrisonnathan/the-actual-number-is-almo...) or the CEA Animal Welfare Fund, how do I know the money won't go to something I ethically oppose (like pro-habitat-destruction advocacy)?
Yeah, I agree that it's much easier to get on board with the fundamental proposition of EA if you're of a consequentialist disposition, because the 'most' in 'do the most good' implies a maximising view. I don't think it's inherently antithetical to other value systems, but I agree that because consequentialists are the majority, the culture skews consequentialist.
The reason we publish fund managers' grant histories and writeups is so that you can get a sense of their values and judge whether those accord with your own. Without presuming to speak for him or pre-empt any decisions, I strongly suspect that Lewis is unlikely to grant to anything on the more speculative/controversial side of animal welfare (in general I think the Fund is more likely to focus on corporate cage-free programs and meat replacement tech). We think there are a lot of good reasons not to use the Funds[1], and if you're worried that you're going to end up funding something harmful, you shouldn't donate to that Fund.
Just to add my $0.02: my impression is that while many EAs enjoy discussing thorny philosophical issues like whether we should be concerned about insect suffering or wild-animal suffering, very few would advocate that we actually support habitat destruction or massive interventions in nature. Even groups like FRI, which are heavily focused on suffering, promote the idea of moral uncertainty: since we can't be sure which ethical views are more valid, we should avoid drastic actions based on any single narrow view.
Like with everything, more controversial issues are more likely to be picked up by the media and blown out of proportion, relative to the actual level of support they receive. I would be extremely surprised if any money from the CEA Animal Welfare Fund went to support habitat destruction to reduce wild-animal suffering. I would be less surprised if money from the fund went to support research into animal consciousness, to help us better compare different types of animal welfare interventions.
I'm not from the media, and I am campaigning against "effective altruism" because it promotes eco-terrorism, habitat destruction -- call it what you want. I am compelled to do this to protect public safety.
There's no point in denying that a large portion of self-identified EAs support eco-terrorism; it's all over the internet. There is no gray area here: you are either with the terrorists (strong negative utilitarians) or against them.