Hacker News

> you have to have made money from a capitalistic venture, and those are inherently exploitive.

You say this like it's fact beyond dispute, but I for one strongly disagree.

Not a fan of EA at all though!





Fair to disagree on that point, but I think the people who would find the EA supporters “morally questionable” feel that way for reasons that would apply to all rich people. I would be curious to hear what attributes EA supporters have that other rich people don’t.

I think the idea that future lives have value, and that the value of those lives can outweigh the value of people actually living today, is extremely immoral.

To quote[1]:

> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.

[1] https://blog.givewell.org/2014/07/03/the-moral-value-of-the-...


Isn't this just the Thanos argument, though? Given the huge number of possible future lives under space colonization, all of them ending inevitably in death and suffering, no amount of trying to improve those lives can ever have as much of a positive impact as just avoiding them by pushing for, say, nuclear self-annihilation now, because the somewhat larger suffering for a much, much smaller number of people is a higher "expected value". I'm not really keen on moral arguments that end up arguing for nuclear war…

> I think the idea the future lives have value, and the value of those lives can outweigh the value of actual living people today is extremely immoral.

This is an interesting take. So if we found out for certain that an action we are taking today is going to kill 100% of humans in 200 years, it would be immoral to consider that as a factor in making decisions? None of those people are living today, obviously, so that means we should not worry about their lives at all?


The extreme form of the argument ("don't worry about the future at all") isn't what I'm saying. It is also immoral to not consider the future.

But to put future lives on the same scale as current lives (as in allowing one to be measured against the other) is immoral.

Future lives are important, but balancing them against current lives is immoral.


For very large amounts of money, say more than 1000x the wealth of the median person in the distribution, I'd say it's obviously true.

You cannot make 1000x the average person's wealth by acting morally. Except possibly winning the lottery.

A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.

If you are a billionaire, you own at least 5000x the median (roughly $200k in the US). If you're a big tech CEO, you own somewhere around 50,000-100,000x the median. These are the biggest proponents of EA.

The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds, the top 1% own a full third, and it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it, but they don't; in fact, they benefit from their workers being poorer and more desperate for a job. I hope that explains the exploitation.
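To make the Lorenz-curve point concrete, here is a small sketch of the cumulative distribution implied by the shares quoted above. The figures are the comment's approximate claims (bottom 50% hold ~2%, top 10% hold ~2/3, top 1% hold ~1/3), not official statistics:

```python
# Lorenz-curve points implied by the approximate shares quoted above.
# Each pair is (cumulative population fraction, cumulative wealth fraction).
points = [
    (0.00, 0.00),
    (0.50, 0.02),   # bottom half: ~2% of wealth
    (0.90, 0.33),   # bottom 90%: 100% minus the top 10%'s ~67%
    (0.99, 0.67),   # bottom 99%: 100% minus the top 1%'s ~33%
    (1.00, 1.00),
]

# Perfect equality would be the diagonal y = x; the gap below the
# diagonal is what "the right edge of the Lorenz curve" refers to.
for pop, wealth in points:
    print(f"bottom {pop:>4.0%} of people hold {wealth:.0%} of wealth")
```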


> You cannot make 1000x the average person's wealth by acting morally. Except possibly winning the lottery.

The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort and lost opportunities elsewhere as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups. Other contributors generally have a choice of sharing into the risk vs. playing it safe, and they usually pick the safe option because they know what the odds are. Nothing has been taken away from them.


The risk profile being the same does not mean that the actions are the same. The unicorns that make it rich invariably have some way of screwing over someone else: either workers, users, or smaller competitors.

For Google and Facebook, users' data was sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and the marketplace that Amazon governs is manipulated to its own benefit. If you make multiple hundreds of millions, you are either exploiting someone in one of the above ways, or you are extracting rent from them.

Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is drawn from the less wealthy part of society.

That unicorns propagate this inequality is harmful in itself. The entire startup scene is also a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.

What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.


The problem with your argument is that most organizations by far that engage in these detrimental, anti-social behaviors are not unicorns at all! So what makes unicorns special and exceptional is the fact that they nonetheless manage to create outsized value, not just that they sometimes screw people over. Perhaps unicorns do technically raise inequality, but by and large, they do so while making people richer, not poorer.

Could you please back that up with some evidence? Right now you're just claiming that there are a lot of anti-social businesses, but that unicorns are separate from them.

That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.

That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for almost any kind of bad behavior B, P(not-unicorn | B) > P(unicorn | B) holds just from base rates; the meaningful comparison is the *rate* of bad behavior within each group, P(B | unicorn) vs. P(B | not-unicorn).
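The base-rate point can be sanity-checked with made-up numbers (every rate below is an assumption chosen for illustration, not data): even if unicorns misbehaved at ten times the rate of everyone else, nearly all misbehaving companies would still be non-unicorns, simply because unicorns are so rare.

```python
# Base-rate illustration with hypothetical numbers.
total = 1_000_000          # hypothetical population of companies
unicorn_share = 0.001      # unicorns are 0.1% of companies
p_bad_unicorn = 0.50       # assumed misbehavior rate among unicorns
p_bad_other = 0.05         # assumed (10x lower) rate among everyone else

unicorns = total * unicorn_share
others = total - unicorns

bad_unicorns = unicorns * p_bad_unicorn   # 500 misbehaving unicorns
bad_others = others * p_bad_other         # 49,950 misbehaving non-unicorns

# P(unicorn | bad): what share of all bad actors are unicorns?
p_unicorn_given_bad = bad_unicorns / (bad_unicorns + bad_others)
print(f"{p_unicorn_given_bad:.1%}")  # ~1%, despite the 10x higher rate
```

So observing that most bad actors aren't unicorns is consistent with unicorns being far *more* likely to misbehave; only the per-group rates settle the question.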


I think Yvon Chouinard has acted morally throughout his career. His net reported wealth was $3B before he gave his company to the trust he created.

He's far from the only example.

I understand the distribution of wealth. I agree that in the US in particular it is set up to exploit poor people.

I don't think being rich is immoral.


You think the wealth distribution is set up to exploit poor people, but you don't think contributing to wealth inequality is immoral.

That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.

I'm interested to hear how you argue for this position. It's one I rarely see.



