What do longtermists want? (backreaction.blogspot.com)
27 points by paulmooreparks on Oct 31, 2022 | 29 comments


> " Not just for the next 10 or 20 years. But also for the next 10 thousand or 10 billion years"

Let's run a thought experiment, shall we? Ten thousand years ago we were cavemen. Imagine a caveman trying to plan for what should happen today. How good would his plans be?

"To ensure human survival, you should ensure Wooley Mamoth population doesn't collapse?"

Okay, you might say cavemen were uneducated, too primitive to plan. Fine.

Take Socrates, an educated man and one of the smartest people in history: could he have planned, two thousand years ago, for what we should be doing today? Could he have predicted the threats of climate change, nuclear war, AI, what have you?

No, he could not, and that's only two thousand years.

How far back in time can we find people who could make any meaningful plan for today? Probably less than 100 years.

So there are three possible conclusions:

1. Elon Musk is 100 times smarter than Socrates, enabling him to think 10,000 years ahead and predict threats we have not yet conceived. Because he is so used to planning in thousands of years, he struggles to plan when his Tesla Autopilot will finally be ready or when his next product will be released.

2. Society and technology have reached a steady state, and within the next 10,000 years there will be less change than we experienced in the last 100.

3. This is just a pseudo-intellectual exercise by folks who also happen to be arrogant and callous.

What I find very pretentious is arguing you can predict anything on the timescales of the universe when the laws of physics behind dark matter, dark energy, and gravity on quantum scales are unknown. For all we know, the universe could reverse its expansion and end in a Big Crunch in a billion years.


And yet, we still look to Socrates for wisdom today. We reference his ideas; we guide our actions based on his wisdom. Did he know about AI? Nope. Does that make him irrelevant? Nope. His ideas of virtue and abstraction are still totally relevant and taught in colleges and universities throughout the world. It is precisely by thinking beyond ourselves that we touch the infinite.

And it would be perfectly reasonable for Socrates to say, “Wouldn’t you say that it is better to be wise than not wise? And of all the creatures on this earth, isn’t there but one capable of wisdom, or at least capable of presenting an intelligible account of wisdom? And if those two things are true, then in order for the light of wisdom to persist and spread, is it not best to preserve the only creature capable of wisdom? Should we not then make every effort to spread that creature, and wisdom itself, as widely through time and space as is possible?”

I suspect Socrates would have identified as a longtermist.


Still learning about the tenets of this philosophy here, and I don't have a position on it yet, so I might get this wrong... However, isn't there a more charitable argument you could make that looks something like this:

4. Unlike in the cavemen's or Socrates' time, we now have decisions to make that could lock us out of the continued proliferation of human life in the future. As such, the longtermists are bringing attention to those decisions and trying to help us make the right ones, instead of focusing resources on short-term returns.

A good example they might support would be spending more money on space tech and less on food for Africa. Resources aren't limitless, and some aren't renewable at all, so we have to use those wisely to position our heirs such that they could take the next step.

They might also make the slightly more tenuous claim that since we know more about the nature of the universe, there could be some invariants we will certainly need in order to see this vision through (e.g., establishing self-sustaining life off the planet). Those will still hold in the future, just as the caveman could have assumed we would still need food and shelter of some kind in the present day.


> A good example they might support would be spending more money on space tech and less on food for Africa. Resources aren't limitless, and some aren't renewable at all, so we have to use those wisely to position our heirs such that they could take the next step.

1. I am not sure whether I can get behind the idea of "long term species survival" if that means that those who survive will be absolutely awful creatures.

2. What makes you think that prioritizing space travel over feeding hungry people will not lead to some sort of societal upheaval that destroys life on the planet just weeks before the ships take off?


Since they have formulae to quantify this, they would probably weigh the marginal utility of, say, a billion fewer hungry people now against a 1% reduction in the chance of annihilation of the human race, and say the latter is a clear winner.
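
To make that style of calculation concrete, here is a minimal sketch of the kind of expected-value comparison being described. Every number in it (the assumed future population, the 1% risk reduction, counting each life as one unit of value) is a hypothetical placeholder for illustration, not a figure any longtermist has actually published:

    # Hypothetical expected-value comparison in the longtermist style.
    # All numbers below are made-up placeholders for illustration only.
    future_people = 1e16        # assumed potential future population
    lives_helped_now = 1e9      # a billion fewer hungry people today
    risk_reduction = 0.01       # a 1% drop in extinction probability

    value_of_helping_now = lives_helped_now                   # 1 unit per present life helped
    value_of_risk_reduction = risk_reduction * future_people  # expected future lives preserved

    print(f"helping now:    {value_of_helping_now:.2e}")   # 1.00e+09
    print(f"risk reduction: {value_of_risk_reduction:.2e}")  # 1.00e+14

The whole result hinges on the assumed size of the future population: make it large enough and the risk-reduction term dominates any amount of present-day relief by orders of magnitude.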

Maybe you can count a cultural shift as just another parameter. They seem to weigh survivability super high, above anything else even. The argument there is likely that nothing matters if we aren't alive to ask these questions. I doubt they're considering "societal upheaval" as a species threat, but if it got bad enough to be on their radar, perhaps they would support (normally unpalatable) solutions to that too.

It's all about reweighting our parameters so that our heuristic functions favor branches that ensure long-term species survival, and the argument is that we're currently doing the opposite in many areas due to short-term, emotional reasoning.

Caveat: Not a longtermist here, just projecting based on what I've read of them so far.


To be fair to the longtermists, any non-zero chance of survival beyond Earth is vastly preferable to zero.

Especially since we know there are scenarios that will eventually occur and that would guarantee an end to a human civilization limited to a single planet, such as comet impacts, gamma-ray bursts, etc.


To be fair to the rest of us, this has to be balanced against any detrimental effect longtermist plans would have on long-term survival on Earth. A .002% chance of making it to the stars probably isn't worth a 4% lower chance of keeping Earth habitable.


The nearest M-class planet is how many lifetimes away? How many Earths' worth of resources must be consumed just to safely transport bacteria there?


> A good example they might support would be spending more money on space tech and less on food for Africa. Resources aren't limitless

In addition to being callous, it's very shortsighted, which is ironic: a developed and politically stable Africa would make a big contribution to a space program, and the continent has made huge economic strides in the past 20 years.

I am almost getting the feeling that the lack of long-term thinking among longtermists indicates it is just being used as a factitious justification for political positions its leaders held a priori.


I like the idea of planning to colonize the galaxy, etc., but as a humanist I think that should come second to ensuring the most upward mobility for Earth's inhabitants. If we can't maximize happiness and instead get some bleak dystopian world order, what's the use of existing and continuing to exist?

Longtermism (the concept of maximizing humanity's existence) is okay with humanism and effective altruism leading the charge; without those it's just veiled fascism. "Kill a billion to save 10 billion in the future" sounds like a war cry from Nazi Germany and is dangerous to allow to spread.


One of the things that worries me about this philosophy is its callousness toward the people currently living. A longtermist may decide (and try to enact a plan around the idea) that relieving the world of a few billion people gives humanity the greatest chance of survival.

The last time rich people said it was okay to kill people in the name of the greater good, a World War had to shut that shit down. This is just disguised fascism and it's dangerous.

I care that we exist a billion years from now, and I can adopt some longtermism, but I'm also a humanist. What's the point of humanity continuing if we can't solve poverty and make it so everybody has a decent quality of life, all things a spacefaring civilization (or one that hopes to be one someday) should be able to aspire to?

Before we plan for a billion years from now, we should first figure out how to comfortably support the billions we have now and plan for maximum comfort for the poorest person: not full socialism, but surely better social safety nets in all countries and better democracies.

Our society is so flawed; hell, making it through to the end of this century seems like a 50/50 proposition, so let's make the here and now better.


A counterpoint would be that significant chunks of the world profess to guide their lives by a book that men originally put together over a thousand years ago.

So is it really far-fetched, to that chunk of humanity, that one can plan a thousand years ahead?


And yet we aren't stoning adulterers or eagerly awaiting the majority being cast into a lake of fire.

Memes can live a long time, though adherence to the crazier ones will vary widely.


10 billion years is more than the age of the Earth. It's beyond arrogant to assume that anything we do now will still be relevant by then. After all, for more than half of the last 10 billion years the Earth didn't exist yet.


> We find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives [in the present]

Good luck parsing that claim on the first try. Reducing the probability of extinction by (1/1,000,000,000)^2 of a percentage point, i.e. by 10^-20, is worth 10^11 x 10^9 = 10^20 human lives? So I guess the implied exchange rate is that one human life is worth about a 10^-40 reduction in the chance of extinction.
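
For anyone who wants to check that exchange rate, here is a quick back-of-the-envelope sketch; the only inputs are the numbers stated in the quote itself, everything else is unit conversion:

    # Numbers taken directly from the quoted claim:
    # "one billionth of one billionth of one percentage point" of risk reduction
    risk_reduction = 1e-9 * 1e-9 * 0.01   # = 1e-20, as a raw probability
    # "a hundred billion times as much as a billion human lives"
    lives_equivalent = 1e11 * 1e9          # = 1e20 present-day lives

    # Implied exchange rate: extinction-risk reduction per single human life
    risk_per_life = risk_reduction / lives_equivalent
    print(risk_per_life)                   # 1e-40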

The author here isn’t exactly praising this perspective but has the audacity to write

> Hey, maths doesn’t lie

Which is utterly idiotic and such a misleading thing to say. This isn’t math; it’s a thick layer of hand-wavy assumptions and a tiny sprinkling of math to hold it together.

It’s not like we have any way of actually forming probability distributions over outcomes for the unforeseeable future, so the whole thing is moot. In general, however, short-term prosperity is likely to compound into better outcomes more than anything else.


As I understood it, 'Hey, math doesn't lie' was a sarcastic jab to point out how ridiculous it would be to believe the argument just because it contains a calculation.


Maybe. The way he counters with

> Hey, maths doesn’t lie, so I guess that means okay to sacrifice a billion people or so. Unless possibly you’re one of them.

implies to me that he thinks the math is sound but has some other problem with it.

> But I don’t want to be unfair, Bostrom’s magnificent paper also has a figure to support his argument that I don’t want to withhold from you, here we go, I hope that explains it all.

That comment seems very demeaning toward the math, though, so I can’t really tell.


Yeah, it's probably 'she' not 'he' for this particular person, but whatever.

No, the point being made is more that the fact that you can compute something doesn't mean you're right, even if your math is correct. That goes for a lot of physics and also for a lot of statistics. Second, the conclusion drawn by Bostrom et al. is hilarious in a very bad way, and also completely beyond science and mathematics. Third, the argumentation is ridiculous because in history, as in weather and a lot of other fields, we're dealing with a highly dynamic, basically chaotic system. Fourth, we can't predict what the future will be like in a year or in ten years, but here a bunch of rich technocrats want me to swallow that it's OK to sacrifice hundreds and hundreds of millions of people for the greater good of mankind in a billion years? Bro, do you even count? You don't even know how big a number a billion is, and I can't tell you either. Lastly, these people are throwing around their assumption-laden probabilistic formulas like there's no tomorrow [pun intended]. The fact is we can't be sure of any of the assumptions or their probabilities, nor of what they mean. People already get stuck on everyday tasks of risk assessment.

> seems very demeaning towards the math

Get over it. It's OK to ridicule the math of someone who makes an absurd argument and abuses the math for dangerous, delusional, and nefarious purposes, as the longtermists do. This is fascism in every way but name.


>”Personally I think it’s good to have longterm strategies. Not just for the next 10 or 20 years. But also for the next 10 thousand or 10 billion years. So I really appreciate the longtermists’ focus on the prevention of existential risks. However, I also think they underestimate just how much technological progress depends on the reliability and sustainability of our current political, economic, and ecological systems.”

Agreed.

I think if we did a good job of improving the current systems, we would also greatly mitigate future risk. The benefit would accrue not only to those future generations but also to those experiencing life currently.


> Not everyone is a fan of longtermism. I can’t think of a reason why. I mean, the last time a self-declared intellectual elite said it’s okay to sacrifice some million people for the greater good, only thing that happened was a world war, just a "small misstep for mankind."

Our culture, in general and implicitly, rejects the idea of objective truth in favor of hedonism.

If we're living for pleasures of the moment, then fretting about the unborn seems silly. Look at government borrowing: we're reducing future generations to penury because of whatever folly sounded good for the election.

Neither "Effective Altruism" or "Longtermism" seem more than fads, and for the same reason: without an internal renewal based upon some immutable, external truth, these movements shall not outlive their founders.


> it’s okay to sacrifice some million people for the greater good

So now we will lose millions of people to climate change for no reason at all. What a relief.


Seems like for a 'longtermist' something like climate change is such an insignificant blip (and one that will 'inevitably be solved by technology once it becomes a problem') that it doesn't even bear thinking about.

I think that's sort of the point, though: it makes you sound super duper intellectual, and you can do weird stuff on behalf of the 10-million-year view while hand-waving away responsibility for the near term with futurist fiction.


If your predictions are correct, fine. How exactly we test them on the typical human scale is unclear.


> for no reason at all

Never underestimate the value of making those in charge feel better about themselves.


Well, my life, the lifetimes of my children and theirs, and also those of my friends and other family. Preserving living conditions over that span would be long term for me.

This time span is much more important than quarterly reports or political elections.

Of course, if we do not fuck up as a species (climate, war, etc.) and fail in this century, more humans will be born over the next few centuries than have ever existed. These unborn have no voice. They inherit what we do with this planet and our society. Ethically, I would say that preserving their living conditions is the far long term and should also be considered.


The Sun will become a red giant in 5 billion years. We are not going to get off this rock, which is a good thing for the universe, I reckon.


I don't think a universe full of lifeless rocks and deadly radiation has a concept of good and evil.


Indeed - just a pale blue dot.


Longtermists want to be praised and revered for how smart they are.



