Hacker News
The Transhumanist Cult Test (ewanmorrison.substack.com)
39 points by thinkingemote 83 days ago | 117 comments



I guess it was inevitable. Any philosophy oriented toward a better tomorrow eventually accrues passionate evangelists, and with enough groundswell verges on becoming a movement, often manifesting at least one cult or political party in the process.


^ If your movement primarily attracts zealots it's either not ready yet or it's done.

At the beginning of a movement when nothing can really be done yet you mostly only attract zealots. At the end of a movement when you've achieved many of your goals, you only have zealots remaining.


> At the beginning of a movement when nothing can really be done yet you mostly only attract zealots.

I would say they are not zealots but power-hungry people who enjoy clout and power. They adopt the movement's ideology like a uniform - something to wrap oneself in to signal others - and become the zealot like an actor becomes a character. I am sure many don't even care about the ideology, as long as it gives them more than the other guy. Many of them feel that they are better than others, so they should be in charge. I knew people like that: control freaks who want everything in their life to work the way they want it to by controlling the actions of those around them - bending others to build their world. No thanks.


I'd argue that many zealots aren't true believers of what they show outwardly, and that doesn't make them not zealots.


In case, like me, you were also wondering what Transhumanists are supposed to believe in, this is what the article says:

> transhumanists have deepened their belief in a fated future in which the human species will achieve “augmented” evolution through fusing with machines, leading to the emergence of an artificial superintelligence that will far outstrip all human knowledge and achieve God-like powers (The Singularity). This digital deity will lead us to a new era, in which all human biological limitations will be transcended; bringing about end to sickness, suffering and even death, and leading us to colonise the cosmos


Well, when you put it like that, it doesn't not sound like a cult at all.


Phrasing does a lot of work.

If I rephrased the quoted passage as:

> transhumanists have deepened their hope for a future in which the human species will augment evolution by fusing with machines (as we already do with brain-computer interfaces, but better), and the potential that current AI — which is already helping us decode proteins and perform and write up scientific research — will continue to improve, helping us to expand our medical knowledge even faster and develop cures for things currently beyond us. As the AI we already have has allowed us to read the complete connectome — neural structure — of small animals, there's even a possibility we may expand this to human brains (though we don't know how long this will take us to do!), and if we manage that, then it would allow us to back up and restore our minds as we already do with digital documents.

Doesn't seem nuts, unless you believe humans have souls that can't be digitised.

That said, I'm skeptical about any specific future: when the only flying machine you've heard of is an airship, it's easy to imagine a future filled with them — what's our airship? What's the thing everyone expects but which won't happen?


> Doesn't seem nuts, unless you believe humans have souls that can't be digitised.

Let’s ignore souls: humans have bodies which cannot be digitised. Our bodies are real matter, not some dross to be jettisoned.

I wonder if Black Mirror ever did an episode about someone unwillingly personality-cloned into a computer, then killed, with everyone around him acting as though this were normal and he — still conscious — was a non-person?


Right now, nobody knows whence personhood arises. There's not even any way to prove that all natural humans have the same thing whenever we speak of consciousness — indeed, we know that our conscious experiences vary wildly, not only between individuals but for any individual over time, and there are 40 or so competing definitions of the word.

Some of the stuff that happens to the body clearly impacts our states of consciousness. Blood-alcohol levels, for example. So if I take your statement at face value: OK, so you simulate at least some of the rest of the body, perhaps all of it. So what? Still a simulation.

You can also think of this from the opposite direction, where someone might try to argue against a purely material origin of consciousness by saying "the brain doesn't work if you take it out of the body" — that this is in fact true doesn't mean the brain isn't the bit doing the thinking.

You can take Searle's position (why yes, I do have a philosophy qualification, albeit a mediocre one): "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[0], but that only works if a mind is a substance of its own kind rather than the emergent result of the processes that gave rise to it. The components of a simulated rainstorm will still interact with each other as those of a real one, if given sufficient fidelity.

Perhaps minds are emergent from some quantum shenanigans, as Penrose says[1]. Then it would take a quantum computer rather than a classical one[2].

> I wonder if Black Mirror ever did an episode about someone unwillingly personality-cloned into a computer, then killed, with everyone around him acting as though this were normal and he — still conscious — was a non-person?

What you describe is a P-zombie, and as https://en.wikipedia.org/wiki/Philosophical_zombie will show, there's no consensus over this even amongst philosophers.

[0] https://en.wikipedia.org/wiki/Synthetic_intelligence#:~:text...

[1] but it would be horrifying if he's right specifically about microtubules, because those are in basically every living cell, and hence if they do mediate consciousness then potatoes would be conscious: https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...

[2] While it's technically possible to simulate quantum processes classically with an exponential slowdown, those kinds of technicality also make the entire universe and everything in it into a Markov chain; I prefer to only point out such things to show the limitations of making such arguments.


How do you get from “Right now, nobody knows from whence personhood rises” to (paraphrasing) “once we can recreate the connectome, we can back people up”?

If we don’t know what it even is, why is it so inevitable that we’ll be able to recreate it in another medium?


My original comment said:

> connectome — neural structure — in small animals, there's even a possibility we may expand this to human brains (though we don't know how long this will take us to do!)

I don't know how to phrase it as any less "inevitable" without saying it's "impossible".

That said, a restore from backup may be easier than understanding/accurately simulating at exactly the correct depth to do whatever the important thing turns out to be and not waste effort on unnecessary things*: we've known for a long time that it's possible to encourage neurons to grow in certain ways with electrical stimulation. At some point in the research process, I expect someone to try to use this to replicate an existing pattern (probably in something simple like a fruit fly) — if that organism seems to remember things that only the original had learned, that would be interesting.

I expect that people will do the same test with the digital version of the connectome and eventually get it to also remember something from the original living brain, and that people will even then continue to argue that the digital version is still missing something. And that this argument will continue even if the simulation is scaled up to a human.

And even if we don't simply store the copy but allow it to develop in the simulation (human brains do change over time), I expect these arguments to continue even if that digital human connectome simulation learns something, is then downloaded into an organic brain in the fashion I have described, and the new living human brain discusses things that only the simulation had experienced.

Right now, on the topic of personhood, we don't even know the right questions.

* in one extreme, a computer can simulate any physical process. Right now, it looks like quantum mechanics is sufficient for all chemistry, and hence all biology, and hence brains. But it may be wildly excessive to simulate all the way down to that level — we don't know, we have much to learn. On the other, neural nets in AI are widely recognised to be toy models, and lots of people assume they're too simple to explain biological cognition — again, we don't know, we have much to learn.


It doesn't seem nuts, it just seems to miss the point, even ignoring souls. No matter how perfect the copy, it'll still be a new instance, and the original meat-brain instance of myself will one day fail and that consciousness will cease. A copy may fire up, and it may be a perfect brand-new "me", but the consciousness that is the running instance of me will be no more. There's no escape from that. I don't care about copies of myself running around; I'm not so egotistical as to think the world needs more of me. But the eternal dark of death scares me, and duplication does nothing to assuage those fears.


This already happens to you every time you go to sleep and wake up. You have absolutely no way of discerning whether the consciousness you are now experiencing is a new consciousness created by your body when you woke up, or whether it is the same consciousness as before you went to sleep.

A body produces consciousness, consciousness does not 'inhabit' a body.


This is not factually true, it’s a philosophical statement.


Some care about that, others don't.

The eternal dark that I can't experience doesn't scare me, but I'd still like to have a backup — or at least, I did, then I started seriously considering what else can be done with those backups besides restoring from them…

I probably still would, but I'm much less optimistic about people than I used to be.


A backup of what? And why do you want a copy of yourself? To what purpose?

And yeah, wait for someone to create 1,000 instances of you to do whatever they want with.


Backup of my brain. Lost two relatives to Alzheimer's, seems plausible that I might benefit from a freshly printed brain in 30 years, even if the scan, reprinting, and transplant are all done on the same day under general anaesthetic.

(Today such a thing is totally implausible; perhaps it still will be in 30 years, but I'll only find out by living until then).

I'm sure from your previous comment that you don't see that as a continuation of self, but I do see it as such.

Similar logic works for any potentially dangerous activity.


Which is a gross mischaracterization. Many transhumanists do not think the singularity is real. It is just a prediction of a simplified model, where in reality technological progress would hit limitations not accounted for in the model. And even if the singularity happened, what it would bring would be unpredictable.


This is what I believe your pal Scott calls a "motte-and-bailey argument."


Who is doing which?

Wikipedia says: """ […] one modest and easy to defend (the "motte") and one much more controversial and harder to defend (the "bailey") […] """

The claim "the article is mischaracterising transhumanism" is saying the article's definition of transhumanism is a bailey, surrounded by the motte of "huh, these ideas sound a bit like Christianity and Revelations…"

I don't know the actual distribution of transhumanists who believe in any specific outcome of the singularity, but I would at least point at the simultaneous existence of e/acc (who think it will be good) and LessWrong (who mostly think it's going to be misaligned, meaning it goes off and does its own thing even when we don't want it to).

But even the most extreme ideas of what might be possible are still couched in terms of atoms, not magic.


People get hung up on the Roko's Basilisk stuff, but "hard-takeoff singularity" is also semiotically indistinguishable from the Christian eschaton. Someone who shapes their life around the belief that this is possible definitionally participates in what we need not recognize as a Christian heresy - though a well-catechized Christian would - in order to recognize it for the new religious movement (itself a term of art) which it is.

That's the bailey. The motte - where one such as my prior interlocutor seeks resort when challenged on the obvious and barely disguised roots of a faith no longer stabilized by the tradition from which it was severed - is "atoms, not magic" and "transhumanists don't actually believe the singularity is real." No actual defense of the concept, on which enough enthusiastically favorable not to say Marinettiesque ink has been spent over the last quarter century to exhaust an entire universe of squid. Just a Gish gallop of excuses, in the hope this will serve the role of the squid's own obscurantist jet - not yet, I'm sorry to say.


Back when I was at school, I actually read the Book of Revelation[0]. I don't see anything that matches it to hard-takeoff — when AI folks talk of gates, it's the logical kind, not what existed when Revelation was written; nothing about seals, final battles, falling stars, imprisoning of seven-headed dragons in bottomless pits, etc. — while I wouldn't be surprised to see some connections (given how little imagination most people show), I just don't see any meaningful connection with any singularity stuff to the eschaton I was raised with.

So, I assume you mean something of a non-Catholic eschaton? And specifically, you have in mind something from a denomination that, regardless of what it says about biblical literalism, doesn't actually pay much attention to what's written in Revelation?

Hard-takeoff only differs from soft-takeoff in the speed of events; given how slow Revelation is ("thousand years" of peace, which I assume was written to mean "a really long time" and not even intended literally), I think that if anything, slow-takeoff is a closer fit than hard-takeoff.

Also, and this is kinda important, the Christian eschaton seems to assume the "good guys" will win[1]. AI-driven singularities just say that some AI — possibly plural, possibly singular — will win, but don't tell you anything about the morality of the AI, or whether it's a replicated single mind or a pantheon of different minds. The default assumption from many is more like the Cthulhu Mythos than like anything recognised as a real religion. There are no ten commandments for humans to follow to get into the AI's good books. Even if there were, digital copies of minds and a complete control over matter are much closer to the Hindu and Buddhist ideas of reincarnation than to the various Christian heaven/hell/purgatory combinations.

Even just upvotes and downvotes get me "karma", after all.

[0] I was raised Catholic to get me into a Catholic school, but that's not directly why I read it. My mum thought I wasn't reading enough, so paid me more pocket money for reading more books, and it just happened to be available.

[1] Christ/Yahweh and 12,000 from each of the 12 tribes of Israel, the latter of which makes a lot more sense when considering that the thing was at that point still an offshoot of Judaism, and had not yet developed a long-running antisemitic streak


The Revelation of St. John the Divine is taken as a true eschatology by a smallish and very novel heretical splinter sect, arising in a land known for its tendency to produce such wild-eyed, footless new religious movements, and which has had a lately outsized profile. Don't mistake that for the doctrine of any of the Eastern or Western orthodoxies, or indeed even any of the mainline Protestant denominations. And "divine," in this usage, is a countable synonym for "mystic."

You said yourself you weren't catechized in that stuff. Neither was I, and I was also raised by Catholics. Your catechists should have warned you off this crap when they noticed you getting into it, the way mine did, and told you not to presume to know the mind of God. Not that that helped me a few years later, but at least I knew enough then to take the happy-clappy foot-washing Southern Baptist stuff as cynically as it deserved.

It's hard to argue much else in direct response here, since that confusion seems so pervasive - not precisely a criticism, only I would rather we start from a true axiom. I guess I would say I also see the cosmic horror aspect of it - something Land knows he's doing and Bostrom doesn't, though for the latter this is only one case of his pervasive confusion of genres - but I don't really see much cause to credit with similar insight most people who actually cherish the belief.

(Note specially also the phrase 'semiotically indistinguishable,' which is unusual. One example might be that stories based intentionally closely on Campbell's 'monomyth' structure tend rarely to be very semiotically distinct from one another. Here of course we discuss not a thousand-faced hero but a Tausendjährige kingdom of God, but in both cases we see the same superficial variation in expression of a consistent animating myth.)


Frankly, this is wrong. First of all, the singularity is mischaracterized: it is the point in time when AI has become smarter (however you may define it) than humans. The point is, we don't know what will happen beyond that point. It does not necessarily imply "God-like powers". It is the "not knowing", I think, that is interesting.

Transhumanists simply believe in augmenting their bodies beyond their natural capabilities. The rest is just misrepresentation.

Me personally, I believe we should throw a wrench in the whole thing because the people who are in control are some of the worst people. Unless that changes, it would be a mistake to develop this technology further.


> Transhumanists simply believe in augmenting their bodies beyond their natural capabilities. The rest is just misrepresentation.

I have trouble with this characterization, because if you use it, then 99% of the human race is transhumanist, and the term loses any meaning. I’d argue the landscape isn't fenced to body modification, either, but that’s not really important. We’re clearly talking about something else here. Everybody knows it—don’t play coy.

As far as I’ve been able to discern over the years, a transhumanist advocates for the use of technology to improve the human experience irrespective of the implications for humanity. And the way they rationalize this is with a specific set of beliefs that conveniently skirt issues of consciousness and soul. I am not saying they are wrong; they’re operating off arguable axioms, all of which may end up being true. I’m saying they have to hold a set of beliefs in those axioms that are not shared by 99% of humans to rationalize their worldview.


I don't think this accurately defines transhumanism, at all.


Some related material that digs into the Zizian branch of these people: https://maxread.substack.com/p/the-zizians-and-the-rationali...



Were they even transhumanists? There is a big crossover between the rationalists and transhumanists, but I don't think that is a given.


Transhumanism has an extremely rich literary history. Just grab any hard sci-fi book from Greg Bear: his entire 21st-century future history has transhumanists as part of the unspoken environment of normal, where transhumanists are just normal everyday people, despite being half or more composed of machinery.


What the transhumanist cultists don't realize is that Bear's novels, along with most of the transhumanist subgenre, are cautionary tales, not blueprints for a bright, shining future.


Same with so many things people fixate on, in our short-sighted culture.


This is only talking about a very specific and extreme type of transhumanist. Not acknowledging that makes the author sound to me like a crank with an axe to grind. (I'm choosing to ignore here the substantial an-LLM-wrote-this vibe I get.)

If you wear glasses or had braces or use an insulin pump or take Ritalin, then, congratulations, you're a transhumanist. The rest is just a matter of degrees.

As wizzwizz4 says in a sibling comment, the post is conflating TESCREAL and transhumanism.


Why does use of technology to improve one's life need an identity marker?


Because there are many people who (explicitly or implicitly) understand the human body and mind as 'sacred' and would oppose intrusive technological improvements to them. So it makes sense to have a term for people who, on the other side, welcome such changes.

Note that I disagree with OP about glasses or insulin pumps, as most people support the use of technology to fix deficiencies of a human body or mind; but if one has a healthy, fit body/mind and still wants technological improvements, that is the red line.


> there are many people that (explicitly or implicitly) understand human body and mind as 'sacred' and would oppose intrusive technological improvements to that. So it makes sense to have a term for people who, on the other side, welcome such changes.

That still defines it in relation to the non-improvers, which isn't the most compelling position. Why not, "we want to augment human potential to maximize quality of life through technology?"

(I worry it isn't that argument because it is still too human-centric.)


> but if one has a healthy, fit body/mind and still want technological improvements, that is a red line.

That's a very blurry line that changes with culture, age, profession, hobbies, etc. In 50 years that line will be elsewhere even within that boundary. Especially since this has nothing to do with religions, which were invented long before any of this became a topic (so much for universal truths, but that's another topic).


It's identifying with the belief that we can AND should improve our lives.


Yes, but this sounds too (deliberately) naïve to me—like trying to convince a kid that hard drugs are all harmless because caffeine is a drug. The dichotomy is not that humanists reject technology while transhumanists embrace it.

The appropriate question to ask is, “what unique beliefs do transhumanists hold”. And the differentiating factor is generally a sentiment that the human condition must be overcome even at the cost of humanity. A transhumanist fundamentally rejects what it is to be human in line with a belief that humanity is inevitably deprecated. Humanist: technology augments humans. Transhumanist: technology replaces humans.

I’ve tried really hard and can’t seem to find a way to rationalize a transhumanist worldview—at least until we understand life and consciousness intimately and thoroughly. But even then I’d have to reevaluate, not take as a given.


The belief that using technology to improve one's life is a good idea is an ideology regardless of whether you like giving it a label.


I suppose watering down a definition until it's completely generic is one way to protect it from criticism.


Sure. And it is a valid response to the opposite tactic of looking at only the most concentrated and fringe examples.

It seems like asking the question "what does the typical transhumanist actually think?" is relevant.


It’s one thing to say “most of us don’t believe the extreme worldview those few crackpot members of our community are spewing, transhumanism is actually… <proceed to refute the whackos>”

It’s another thing to tacitly agree with the crackpots but deceptively present a more moderate stance to keep people naïve and initiate new members using an attenuated worldview.

Let’s hear it, “what does a typical transhumanist think?”


Do you think people are doing the second option? I don't think that transhumanists are a cohesive community in any meaningful sense, not any more than religious people, technologists, or environmentalists are a community.

There are tons of comments describing what typical transhumanists believe, but I think it boils down to the idea that humans can and should seek to modify themselves, and that such a process would be an improvement. In practical terms, this means they favor cybernetics and genetic modification.

Compare that with the loaded statement from the article's author, talking about destiny and salvation:

> transhumanists have deepened their belief in a fated future in which the human species will achieve “augmented” evolution through fusing with machines, leading to the emergence of an artificial superintelligence that will far outstrip all human knowledge and achieve God-like powers (The Singularity). This digital deity will lead us to a new era, in which all human biological limitations will be transcended; bringing about end to sickness, suffering and even death, and leading us to colonise the cosmos.


I think you’re giving transhumanism an elementary treatment. Everybody believes to some degree that it’s okay to “modify themselves” or, more generally, to apply technology to humans to achieve better outcomes. If you apply aloe vera to a sunburn, or install a hearing aid, you’re augmenting human faculties for a better outcome. And most relevant here: you don’t need a religious movement to adopt that worldview. Corollary: supporting hearing aid use doesn't make you a transhumanist in any actual way (even if there are a few THs out there who would argue that’s all it takes, for then everyone is a TH so nobody is).

Again, an average transhumanist does not “simply believe” that they need a movement and religion to achieve human augmentation. Because that is our societal resting state. Or if they do, they are blissfully naïve.

So now that we’re past that, if you actually discuss the subject matter at any depth, you’ll quickly get to the fundamental question and learn what makes a transhumanist different from everyday humans that have been augmenting themselves with technology for millennia: what is a human? What is a species?

A transhumanist embraces the singularity, for example, because they are not concerned with losing our humanity. (Because if one were concerned then it would be inconsistent for one to embrace an event where humans are obsoleted.) For a transhumanist it’s okay for the human race to become extinct provided we witness the coming of a more powerful consciousness.

More practically, what makes the discussion even possible is that we don’t really know what makes us human. If we did people could either take clear sides and/or you could clearly protect the essence of human through arbitrary technological augmentation.

I believe that until we identify what makes our conscious experience uniquely possible, we must protect the system that gives rise to it: our species. Until humans have the knowledge of god, humans shall not play god. A transhumanist would use my own argument against me and say that humans _are and always have been_ the product of augmentation, so come what may.

Make sense?

From my reading of TFA the author’s statement is not radical and indeed rather accurate. I am pretty sure they have a fairly complete understanding of the landscape.


The author may have a complete understanding of the landscape, but they did not strike me as honest. I found them deceptive, misleading, and manipulative instead. That is to say, I don't think they presented an accurate picture of the world, and they did so intentionally, conflating religious metaphors and playing up emotional tropes throughout.

I think a more accurate phrasing is that a transhumanist would not consider change to be the same as extinction or death.

They aren't worrying about losing some unique but undefinable essence, but assume any valuable essence would be carried along. I don't think they view change in an us-vs-them paradigm. Instead, it is us, and us with differences.

They would say god gave us the tools to make children and alter ourselves, so doing so is not playing god.


I am really receptive of transhumanist goals. I think it’d be cool as shit to upload my consciousness into the internet and download it into a robot. I’m not worried about an AI super event killing us all. Mechanical wombs seem like they’d solve a lot of the tension around biologically assigned sex. Etc.

But you have to admit, and you did as much in your comment, it’s a pretty huge assumption right now that anything we do would innately carry our souls forward. I’m not willing to just assume. I want certainty and proof. I am sure there are some more extreme humanists out there who are more conservative than me, but all I’m asking is that as we apply technology we remain reverent of our souls. This seems like such a simple almost pedantic thing to be hung up on, but indeed it is exactly what distinguishes a humanist and transhumanist. The burden of proof is on you (royally, rhetorically) to prove that we’ll still have souls after uploading our consciousness to a machine.

And, logically, if you prove human continuity then it’s not trans anymore. Transhumanists don’t have a fundamental reverence for humanity, definitionally. They advocate for the application of technology without the assurance that our humanity will remain intact. They hold an axiomatic belief that we humans are capable of transferring our species into a digital medium. A belief that transferring a consciousness to a robot also retains the soul. And that replication of transhuman entities creates new souls. Etc.

I support research and exploration of these topics. But I’m not going to stake humanity on some belief that this is the future or even that it’s assumed to be possible.

PS: most all cultures with the notion of god reserve for god the right to bestow the essence of life into the biological human. Your final statement does not align with my understanding of any Christian religion out there in its assertion that, since god gave us the tools to create humans biologically, it’s appropriate for humans to create life spiritually. I don’t necessarily hold that to be ultimately true, but I do hold a reverence for life until we actually understand it. We currently have no idea what constitutes human life, whether our soul is the manifestation of our synapses quantum-entangled to a substrate we have yet to identify, or entirely emergent from more simple mechanical biology. We need to answer that first before we can presume to upload our species into some digital medium.


I'm actually one of the humanists who are more conservative on the topic than you. I just don't think the article treated the reader and subject with honesty and respect.

I don't think transhumanists believe in a supernatural spirit or soul, but that the human spirit, or at least the good part, isn't tied to biology.

For myself, I'm skeptical of the entire utilitarian hedonist project, which transhumanism rests on. I think there are human values that are higher than pleasure and happiness. Values like love, honor, duty, compassion, and sacrifice. I think the biological human condition could be favorable for these in a way that some virtual bliss-state is not.

I don't need cherry picked scare quotes to disagree with aspects of transhumanism. I would rather debate the strongest case for transhumanism, not a weak strawman.


I think the term posthuman captures things concisely: most transhumanists seek a posthuman future. Maybe the author's word soup wasn't fully charitable, but I understood the sentiment. Given that the author is exploring whether the transhumanist movement constitutes a cult, it's maybe fair to pull on some of the more cult-like endgames that do exist in its sub-genres. Agree/disagree.

Anyway, I do agree that the human condition gives rise to favorable qualities in individuals and the larger population. I don't support utilitarian hedonism either. I will say my experience discussing transhumanism has usually been in terms of becoming posthuman, and the notion of a virtual bliss-state, if mentioned, is more of an intriguing side-quest - but we might just have different exposures there.

I mean, I still think there are problems with becoming posthuman even if you remove the utilitarian hedonism from the mix. In the author's defense, this is a difficult topic to articulate.


Transhumanism as an ideology is about much more than "use of technology to improve one's life". It basically amounts to the belief that any technology that improves one's life should be accepted and even sought after, and that any opposition to this is a bad idea. For example, a transhumanist would believe that human cloning for organ harvesting would be a good idea, and opposes the current laws against research into human cloning. This is in no way a foregone conclusion for anyone who wears glasses.


Saying that anyone who wears glasses is associated with these beliefs is indicative of drinking the kool-aid.


I'm saying that the expression of beliefs presented here is more than transhumanism.


I don’t think this is accurate at all. Transhumanism is a pretty clearly defined group with specific moral beliefs that not everyone agrees with. It’s like saying if you’ve ever forgiven someone when they did something bad to you, you’re a Christian.


You've completely missed the argument and tried to turn it into a No True Scotsman.

Transhumanism is not about human augmentation, it's about a misty-eyed view of human augmentation experienced through a haze of hand-me-down religiosity.

I spent some time following these people in the early 00s and they were talking about replacing nerves with Cat 5 - and other obvious nonsense. Complete lunacy.

It's a fundamentalist millenarian movement which is inherently faith-based, contemptuous of physical and psychological reality, and irrational. As the article says - it's just another in a long, long cycle of similar movements with similar dynamics.

The only thing that changes is the set dressing. The play is always the same, and always ends in horror and tragedy.

No, this time will not be different. Affirming that it will be is part of the sales pitch. But somehow - for reasons that are unsurprising to anyone who knows a little psychology - that never quite seems to happen.


And yet, your comment looks to me like a great temple, whose bricks are made of false equivalences and bound with ad hominem in place of mortar; a temple so large it casts shadow over the entire land.

Yes, for some, transhumanism is basically millenarianism with different set dressing. And yes, some of the things transhumanists say pattern-match to popular religious themes, especially if you're willing to bend or simplify some things a bit to force the analogy. But that doesn't mean that those transhumanist ideas and predictions actually are religious in nature, nor does it mean that most people agreeing with them are just operating on faith. And it especially doesn't mean that "the play is always the same, and always ends in horror and tragedy" (like what does that even mean, where does it ever apply?).

(Also, some critics have no sensitivity to jokes, or purposefully ignore humor. Replacing nerves with Cat 5, really?)

I don't deny there is a thick shell of fraudsters and pseudoscientific entertainment around this. I've seen that, and was disgusted by it, but unfortunately it happens with everything. This is a real pattern - the specific ideology is the set dressing; the underlying phenomenon is just a bunch of scoundrels selling outlandish ideas to naive people. But that's not actual transhumanism, any more than New Age bullshit about quantum this and magnetic that makes physics itself a faith-based movement.

I wish people would engage with specific ideas coming from transhumanism (and adjacent fields/communities), instead of trying to dismiss them with "your other ideas look like repainted Christianity, therefore they're wrong, you're wrong, and whatever you're saying here must be wrong too".


No, wearing glasses doesn't make you the adept of any ideological current, such as transhumanism. At best you could say it precludes you from being an adept of certain ideologies that would oppose wearing glasses, such as very strict "naturalism" (i.e. people who believe it's important to only eat or wear all-natural products), if you want to be self-consistent.

You could also say that wearing glasses or having an insulin pump installed makes you a transhuman, but it can't make you a transhumanist. And plenty of people are perfectly happy to be slightly ideologically inconsistent, or follow more nuanced ideologies, where, say, certain artificial enhancements are allowed and others are not. After all, even the biggest anti-drug crusaders accept coffee or at least tea as acceptable substances.


Transhumanism is the belief that we can and should do things to make our bodies work better. Full stop. End of ideology. That includes believing that wearing glasses is a good idea worth pursuing.


Do you believe that humans improving themselves or their society through technological advancement is always a good thing? Or do you think that glasses are a good technological advancement, but maybe nuclear weapons or a theoretical totalitarian AI-ran government are not good advancements in technology?


> Do you believe that ... improving ... is always a good thing?

"Improving" is good by definition. Otherwise it would be called something else. The questions become what you personally consider to be improvement and what scales and scopes you personally choose to consider when determining whether a change improves things.


I don’t think you fundamentally understand what makes a transhumanist. The differentiating line is not improve vs harm. Everyone wants to improve life with tech save maybe the Amish. It’s human vs not. A transhumanist is willing to entertain technology that enhances (carefully chosen term) some conscious experience even if it replaces our humanity. Others generally don’t take that stance axiomatically. A transhumanist would support replacing our dna with nanobots programmed to do things that keep a body alive if it means we can reduce the replication error rate and avoid cancer. Good outcome, dubious means. A transhumanist wouldn’t debate this and accept the nanobot outcome. Most others would at least debate this tech, even if we ultimately collectively come to the shared conclusion we can humanely replace parts of our DNA to cure cancer. The point is the transhumanist wouldn’t care about retaining our humanity, as evolving into “machines” is an acceptable outcome.


Then we need a moral belief system to determine what improvement is and isn’t. This will eventually lead to a codified set of beliefs that look a lot like religion. And I don’t really like most of the moral statements I’ve heard from self proclaimed transhumanists. That’s why I’m not a one of them, even though I wear glasses.


It could look a lot like religion, but also a lot like philosophy, science, or ethics.

Making value judgments and projecting them into the future is fundamental to conscious being operating in a causal world.

Beliefs like "dont murder people" show up in religion, but that is far from the only belief set that supports the recommendation.


Strange how a number of Transhumanists are claiming significantly more than you list here as their ideology.


Why do you think it's strange that people don't describe themselves with precision? People can be transhumanist and also other things. That doesn't make the other things part of transhumanism.


That is not what I said, though. I said that a lot of people who claim they are transhumanist claim that transhumanism is those things. Which is not the same as what you are describing.


> Transhumanism is the belief that we can and should do things to make our bodies work better. Full stop.

Instead of a benign example like eyeglasses, let's say I murder someone so that I can harvest his organs and replace my failing ones with his. According to you that makes me a transhumanist.


Desiring to replace your failing organs, yes, but how you get the organs is a completely orthogonal moral issue. To wit, approximately 100% of all transhumanists are not murderers.


No, you've just got the causality backwards. It's like saying "socialism is the ideology that workers should own the means of production. That literally includes being a worker at a company, not an owner.", and then concluding that anyone who works for a company rather than owning one is a socialist.

That is, wearing glasses is consistent with transhumanism, absolutely true. But one can wear glasses for other reasons, and even oppose transhumanism and still wear glasses, while still being perfectly self-consistent.

For example, believers of the Jehovah's Witnesses cult (or sect if you prefer) don't accept blood transfusions, but still wear glasses. They believe that certain modifications of your body and its abilities are acceptable, and others are not, and that this decision is not what makes our bodies work better, but based on what some god allows his followers to do. So, they are not transhumanists, even when happily wearing glasses.


> If you wear glasses or had braces or use an insulin pump, then, congratulations, you're a transhumanist. The rest is just a matter of degrees.

Glasses restore a function that was supposed to be there in the first place. They are called "corrective lenses" for a reason; to correct is to presuppose a defect which must be corrected to restore what is normative. Insulin pumps also restore a function that was supposed to be there in the first place. That it is implanted into the body is totally irrelevant. As with glasses, the relationship between the person and the pump is instrumental. My heart or brain, however, are not my instruments. They are part of me.

However, the essence of transhumanism, and the ultimate source of issues, is that it turns the human being into an engineering project that seeks to transcend the human. Metaphysically, this is, as polite academic philosophers might say, problematic. To transcend presupposes an existing limit inherent to the thing by virtue of its nature. You wouldn't say that adding a new video card to your computer or building an additional floor on a building transcends the original computer or the original building. This was always possible, as these things are effectively aggregates. The mechanistic metaphysics here precludes the possibility of transcendence, because all you can ever really do is rearrange the furniture. You also wouldn't say that adulthood is the transcendence of childhood, or that adaptation is some kind of transcendence of the species. You're still operating within the scope of possibilities of the species. The nature of a thing is what circumscribes its limits. Technology is not able to transcend human nature, because technology is already within the scope of human power. You can no more cause your own transcendence than you can pull yourself up by your own bootstraps.

Transcendence is not merely the actualization of the potentials already present within a thing. It entails the intrinsic "expansion" of the powers of a thing beyond what is possible by virtue of the nature of a thing. However, any modification of a human being can only be either instrumental, the introduction of defect, or restorative. In the best case, we're talking medicine and health. In the worst case, dehumanization, objectification, and commodification of human beings.


> Glasses restore a function that was supposed to be there in the first place.

> to correct is to presuppose a defect which must be corrected to restore what is normative

This prescribed bondage to normativity is a manifestation of ableism. Existence has no concept of supposed to be. Some people can see, and some people cannot. Some people can walk, and some people cannot. That survival in nature is easier for those who can isn't a value assessment. That society is structured to cater to some of those people more than others isn't an inherent moral good.


I agree with the mistakenly conflated definitions, and that this is talking about something more specific than transhumanism.

The tools you mention don't alter the species itself. Once we start heavily GMing our offspring (does anyone think this won't take hold culturally in the next 4 or 5 generations?) those folks will start to become transhuman.


I'd say the idea of humans genetically modifying ourselves in any significant fashion is like the Victorians thinking the skies would be full of airships in the year 2000:

Before we get to the point that we can do that, a disaster and/or a better alternative will mean very few people even want to do it.


> This is only talking about a very specific and extreme type of transhumanist.

The one with the tendencies to treat it as a cult? Isn't that the point of the article?

I am not really sure what point you're trying to make. It's like every time I critique capitalism, someone appears and corrects me that what we have is actually "crony capitalism", and the real, good capitalism is somewhere in a mystical faraway place. Why insist on correcting the name then?


If you used a wheel to move something heavy from one place to another, you're a transhuman.

If you used fire to disinfect your recently caught prey, you're a transhumanist.

If you read a book ..

The only conclusion to be made from this article is that humans have been on a transformation treadmill for a long, long time - long enough that we can no longer recognize transformation when we see it, or we are so tired of acknowledging it that we just don't anymore, or we need to see it before we believe it is happening - i.e. next year's new iPhone purchase, etc.


> The Tranhumanist Cult Test

Wait ... isn't it "transhumanist"? This typo appears once more in the article, just enough to thwart efficient computer content searches.


I can't take seriously any written document with a typo in its first words, and in the title no less. Literally the first thing that came to my mind after I read the title was that "tranhumanism" must be a new concept unrelated to transhumanism.


I remember meeting a few transhumanists when I lived in Silicon Valley. Whenever they would go into their spiel, I'd respond with: "Well, if you go to church every Sunday and behave, you'll go to heaven."

It would catch them off guard, and then I would point out that they have faith in transhumanism like Christians have faith in Jesus. There would be a quick "get it" facial expression, then we'd both laugh. The transhumanists I encountered were open-minded enough to realize the difference between faith and reason.

I'd then point out that the Egyptians had similar beliefs, and say that if you freeze your body, or head, at death; most likely someone would use it for some strange science experiment in the future.

That being said, if I was wealthy, I'd love to freeze my body. I'm sure it'll come in handy for someone at some point, just like Egyptian mummies did.


I think we will live longer, and even achieve suspended animation. But a brain deprived of oxygen is damaged far too fast to be useful. But who knows - humanity in 10,000 years will be very different from now, if we survive that long as a species.


Except Christianity would find transhumanism exceedingly bland, for being an unoriginal repeat of the Original Sin; the latest scheme of humanity to be a God.

“Now the serpent was more crafty than any of the wild animals the Lord God had made. He said to the woman, “Did God really say, ‘You must not eat from any tree in the garden’?

The woman said to the serpent, “We may eat fruit from the trees in the garden, but God did say, ‘You must not eat fruit from the tree that is in the middle of the garden, and you must not touch it, or you will die.’”

“You will not certainly die,” the serpent said to the woman. “For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil.”


> Except Christianity would find transhumanism exceedingly bland, for being an unoriginal repeat of the Original Sin; the latest scheme of humanity to be a God.

Well, God is an engineer, and he gave us technology & curiosity as his greatest gifts. Being aware of our own mortality, both as individuals & as a species, is what drives us to try and beat it.

To be fair, blame the serpent (or Prometheus).


For true Christian believers, of any sect, mortal life is just a temporary step on your way to an eternal life, hopefully one spent next to their god in heaven. Prolonging mortal life is not and cannot be a goal in itself for anyone who deeply and honestly believes that they have eternity in heaven to look forward to. Sure, technology can be seen as a gift from the Christian god, but using technology to prolong life is fundamentally unnecessary.


> I would point out that they have faith in transhumanism like Christians have faith in Jesus

That's just wrong. Pacemakers are a form of transhumanism: improving the human condition through technology. When you comment on the difference between faith and reason and say transhumanism is based in faith, you're implying it's not reasonable.

Even if you're talking specifically of those that support cryonics like Alcor, or hope for mind uploading your statement is still flawed. Most don't have an absolute belief these technologies will come about, but hope they do and work towards bringing about that future.

Maybe the word you're looking for is hope. Transhumanists hope for a better future in the same way Christians hope they'll go to heaven.


I now know what it is; that alone is a reason to read the article. But it has other interesting info too.

>Transhumanism first emerged in Silicon Valley in the 1990s

That alone gives me pause over this movement :)


Silicon Valley: boldly re-creating religion while claiming it is something entirely new and innovative.

Yep, we've seen this before.


The article basically builds up a straw man definition, which makes the whole thing rather worthless.

Wikipedia (https://en.wikipedia.org/wiki/Transhumanism) gives a much better overview of what the movement is and its history; the "came about in the 1990s in silicon valley" is just wrong, to start with.


Cult: any organization formed that leads to the leader having more sexual partners than societal norms allow. E.g., Ayn Rand, David Koresh

It's not difficult...


This is describing TESCREAL, not transhumanism.


Nitpicking like this sounds like something a cult member would say to make a Motte-and-bailey argument.

“Don’t worry, I’m not one of those silly TESCREALs, I’m just a transhumanist!”

What does this distinction add to or take away from the thrust of the analysis?


I don't think it's a nitpick. Wikipedia has

>Transhumanism is a philosophical and intellectual movement that advocates the enhancement of the human condition by developing and making widely available new and future technologies that can greatly enhance longevity, cognition, and well-being

Which isn't a cult, just tech to live better etc. Making out it's something else makes the argument a bit meaningless.


You can be a transhumanist (want to improve humans with technology) without all the rest of what the article is referring to. It's just conflating way too much.

I think that transhumanism (the kind you'd find on Wikipedia) is a noble pursuit, and what the article describes is a bunch of hogwash that I'd never have heard of outside the anglosphere.


ctrl f musk "16 times" ctrl w


Um. The Singularity isn't religious in its origin. It's literally a reference to a mathematical singularity.

I'm not entirely opposed to the article's characterization, but this is a big one to get wrong. What the term has become in its pseudo-cult modern context is entirely divorced from what it came out of.


The Singularity is a belief that you can extrapolate from history and current events to a specific point in the future that will lead to some very religious sounding results:

* Eternal life

* A powerful entity or entities who will fix whatever is wrong

- An end to poverty

- An end to disease

* Personal Freedom

This is entirely a faith-based belief that cannot be proven or disproven. It is by no means certain that you can extrapolate to a singularity in our future, nor is it certain that you can assume it will have the claimed effects. While the term singularity in math and physics has a well-defined, clearly non-religious meaning, "The Singularity" is entirely a faith-based religious belief.


I'm not sure where you get that version of The Singularity? Wikipedia has the most popular version of the technological singularity is AI making better AI leading to an intelligence explosion which may or may not happen but is not especially faith based or religious. (I think we'll have AI making better AI but more of a gradual ramp up than an explosion).


> The Singularity is a belief that you can extrapolate from history and current events

That's exactly backwards; a singularity is a point past which you can't extrapolate, because trying to do so leads to absurdities (e.g. infinite densities, time as a spatial dimension, one egg costing more than the GDP of Europe, etc.) "The Singularity" was called specifically that because it was the point at which historical projections would become absurd, and thus we could not meaningfully forecast past it (like an event horizon).

Some people ran with that and decided that it was a forecast that specific absurdities would certainly happen, but that's mostly just a reading comprehension issue.


I think the idea is that you can extrapolate that a singularity will occur and when it’s likely to happen. It’s what happens next that you can’t predict.


The singularity in this context refers to the point beyond which predictions will fail because we cannot possibly foresee the consequences of certain technological changes.

There's nothing historical about it; it came about as a result of a few different science writers looking into the future and wondering how we keep up in an accelerating technological context.

I actually agree that it's become something else, but the origin of the term was what I was correcting, and its origin isn't something woo-woo, it's firmly based in scientific speculation.


Unfortunately, as someone of the Christian faith, I have first-hand experience that you cannot control someone else's use of language. Whether or not that is the meaning it was originally intended to convey, that is the meaning it conveys now. The best you can do here is to say: "That does not represent my own personal definition, despite the zeitgeist co-opting it to mean something else."


Even in its original context, it's still a bunch of woo-woo. The core idea, that creating a technology that can improve technology will lead to exponential technological advancement that can be modeled as a mathematical singularity, is very hand-wavy and silly on its face.
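For what it's worth, the "mathematical singularity" the term borrows from can be sketched concretely. A minimal illustration (the constants are arbitrary and the model is a toy, not anyone's actual forecast): if capability grows in proportion to its square - each gain accelerating the next - the solution diverges at a finite time, unlike plain exponential growth, which stays finite forever.

```python
import math

def hyperbolic(x0: float, k: float, t: float) -> float:
    """Closed-form solution of dx/dt = k*x**2.

    Diverges ("singularity") at t = 1/(k*x0); undefined there.
    """
    return x0 / (1 - k * x0 * t)

def exponential(x0: float, k: float, t: float) -> float:
    """Closed-form solution of dx/dt = k*x: finite for every finite t."""
    return x0 * math.exp(k * t)

x0, k = 1.0, 0.1           # arbitrary illustrative constants
t_star = 1 / (k * x0)      # blow-up time for the hyperbolic model: 10.0

# Just before t_star the two models look nothing alike:
print(hyperbolic(x0, k, 9.99))   # enormous (order of 1000)
print(exponential(x0, k, 9.99))  # modest (under 3)
```

Whether real technological progress is remotely like dx/dt = k*x^2 is exactly the hand-wavy part - the math itself is just where the word came from.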


It's been very easy to observe acceleration in progress over time, and there's a natural question that emerges: Will we reach a point where people can't keep up?

Nothing hand-wavy or silly there. And the discussion of the topic as it was formed in the second half of the 20th century was pretty carefully couched in terms of what-ifs and conservative projection.


I don't know; there was definitely an apocalyptic quality to the Singularity in Marooned In Real Time, and superintelligences in A Fire Upon the Deep were literally referred to as Gods (there's a brief reference to Applied Theology in there, IIRC.) So you could argue that there was at least an unconscious parallel to religion in the way Vernor Vinge saw the Singularity.


Here's the Vinge paper most folks cite when talking about The Singularity.

https://edoras.sdsu.edu/~vinge/misc/singularity.html

There's a lot of speculation in there, but it's pretty carefully couched in scientific terms, and particularly in what-ifs that satisfy even the most stringent demands for conservative thinking.


That's hardly an unconscious parallel. That's a direct, atheist take.

The whole attempt at casting transhumanism as religion is silly. Transhumanist side says, "there's no supernatural God, but sufficiently advanced technology could approximate it" - and then people turn around and say, "you see, you say enough technology is like a god, therefore you're religious and believe in God (specifically in Christian God per mainstream religion, with all its mores)".

<facepalm>

Like X != X.


Religion and theism are orthogonal concepts. You can be an atheist and religious at the same time - many Buddhist sects are entirely atheistic, for example. Many cults are also atheistic.

A good example of atheistic, even materialistic, religion was the official doctrine of the USSR, "marxist dialectical materialism". This had all of the hallmarks of a religion, complete with religious texts, priests, condemnation of non-believers, and so on - but caked in fully materialistic and dogmatically atheist concepts.


In this sense, free market maximalism is a materialistic religion of its own.

Still, this is not the framing that people calling transhumanism a religion use. The arguments are usually trying to show parallels between transhumanist beliefs/theories and theistic religious beliefs (almost always Christianity) - specifically to say, "look, it sounds like Christianity if you substitute AI for God and Singularity for Rapture, therefore this is just rebranded Christianity, not Science", therefore implying the transhumanists are wrong. The argument is, literally, "this pattern-matches to religion therefore must be just as wrong", and I'm saying it's a dumb argument.


More than that, "singularity" isn't even an exciting term. It is a boundary beyond which the model being used obviously cannot predict behavior.

I personally think the singularity happened back in the 90s, and it's just been very disappointing instead of the rapture that was imagined.


You could put it anywhere you like; our ability to forecast the future has been rubbish for thousands of years. Some of my favorites: 2019, 2000, 1992, 1969, 1928, 1900, mid 1700s, ~400 BCE, ~5000 BP, ~11000 BPE.


This is why I tend to phrase it as an "event horizon" rather than a "singularity" (well, this and to avoid the baggage).

There's always been some stuff we can't forecast, and some other stuff we can forecast kinda OK-ish. If you'd asked people from 11000 BPE if the sun would still be rising and crops would still be grown today, I suspect the biggest difficulty would be finding someone who really understood the meaning of the number used to express the number of years between then and now — even a "big city" back then would have been order of 1k-2k people.

But I say "sun" and "crops" because of what I expect to be the limits of their imagination. We see such limits even today — even on Hacker News, look at threads about AI taking over the world, and you will see people discussing AI as if it can only be in the form of existing current LLMs.

So, the question to ask is: how far into the future would you have to take someone from each of those years, before they're shocked by what they encounter? That's my "event horizon", because it's the horizon beyond which you cannot see, even if that horizon is one of inability to predict what specific thing will be invented rather than the general trend-lines going infinite.

The time-gap to a shocking new invention decreases as the rate of progress increases.

Obviously how much some person will be able to forecast and hence not be surprised by, will depend on how much attention they pay to technological developments — and I've seen software developers surprised by Google Translate having an AR mode a decade after the tech was first demonstrated — but there is an upper limit even for people who obsessively consume all public information about a topic: if you're reading about it when you go to sleep, will you be surprised when you wake up by all the overnight developments?

Personally, I've tried making my own forecasts about the future, I'm fairly consistent for the last decade that my models stop making sense around 2032. But even with that consistency, I could only see ChatGPT coming when InstructGPT was shown off in a Two Minute Papers video (https://www.youtube.com/watch?app=desktop&v=PmxhCyhevMY&t=4m...) and the rate of change of Transformer and Diffusion models has continued to surprise me.

(I also didn't predict how severely YouTube would be stuffed with adverts, but that's more of a cultural shift — everyone gets all "kids these days" as they get older, why would I be any different?)


I agree. I believe in all those things, but in a sense that they seem inevitable, not as something I’m rooting for.

The singularity, distilled, is that if/when we get AGI or other world-changing technology, accurately predicting what happens next is impossible. As the rate of technological advance increases, the interval over which you can make reasonable predictions shortens, until eventually what happens tomorrow is a roll of the dice.

I believe in aliens in the same way: the universe is huge so I think they must exist because (small likelihood) x (huge number of chances) seems inevitable. I don’t think they’re flying over Idaho to terrorize farmers, though.
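The "(small likelihood) x (huge number of chances)" reasoning is just an expected-value calculation; a back-of-envelope sketch with entirely made-up numbers (both figures below are assumptions for illustration, not estimates from the thread):

```python
# Toy Drake-style expected value: even a vanishingly small per-trial
# probability yields a large expectation when multiplied by enough trials.
p_life_per_star = 1e-10   # assumed (tiny) chance of life per star system
stars = 1e22              # rough order of stars in the observable universe

expected_occurrences = p_life_per_star * stars
print(expected_occurrences)   # 1e12 - "inevitable" despite the tiny odds
```

The punchline is that the conclusion is insensitive to the exact tiny probability: shrink it by six orders of magnitude and the expectation is still a million.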


The more I hear about Transhumanism, the more I think that people in NorCal need some real problems to deal with.

EDIT: The tech industry went from resolving everyday problems ("how do I make spreadsheets more quickly", "how do I send this information to a person on the other side of the country") to trying to play God with AI.


For reasons I’ve never dug into, NorCal has a rich history of people banding together to do goofy things. Cults? Where better to start one?

(It’s one of the reasons I love it here. That same energy manifests amazing art, hackerspaces, meetups, and other fun things.)


> goofy things

When you have members of the movement referring to those who will be left behind as "median humans" (Altman) and making major changes to government that are to the detriment of society (Musk, to a lesser extent, Thiel and Andreessen), we're no longer "goofy". "Scary" is a better word.

The Flying Spaghetti Monster? That's goofy.


"Goofy" isn't benign to me. It's a statement of quality of the idea, not of the end result of it. For example, Jim Jones's cult was from here and I would describe it as goofy.


The history from wikipedia:

>The biologist Julian Huxley popularised the term "transhumanism" in a 1957 essay. The contemporary meaning of the term was foreshadowed by one of the first professors of futurology, a man who changed his name to FM-2030. In the 1960s, he taught "new concepts of the human" at The New School when he began to identify people who adopt technologies, lifestyles, and worldviews "transitional" to posthumanity as "transhuman".[7] The assertion laid the intellectual groundwork for the British philosopher Max More

Huxley was British, The New School was in NYC, More also British. I'm not sure you can pin it all on NorCal.


It's not the 1950s and 60s anymore. They're geographically clumped in one area. It's not NYC or the UK.


many of the weird singularity/transhumanist people do have real problems. lot of childhood trauma. lot of OCD. i guess if you call autism a mental illness now someone will jump out from behind a bush and correct you, but lots of mental health problems downstream of living with autism. if you read about the most extreme groups, people are in and out of mental hospitals with psychotic symptoms.

NorCal has been a feeding ground for cult leaders for 100+ years, but I actually think fewer real problems would be better. the people out there without real problems, just have GPT wrapper startups and go lie to people on podcasts, which is annoying, but better for them and everyone else.


People with trauma, neurodivergence, and the like are everywhere. The only place they can get involved in a culture of "let's build computer god" backed with the kind of money that Marc Andreessen, Peter Thiel, and Elon Musk can bring to the table is Northern California.

Perhaps it's the investors who need real problems. Usually that can be caused by having far, far less money.



