
> You're contradicting yourself here. Is it just 'intellectual amusement' if this technology is as disruptive as you claim?

Let me be more clear: individual programmers are improving AI for their own intellectual amusement, while organizations use it for its disruptive power. Two different groups of people, with a bit of an intersection.

Moreover:

> Just to play devil's advocate, if there really were clear signs of this impending destruction, there could be some sort of international agreement to halt progress. Realistically, this will never happen.

Of course not, are you joking? There are clear signs of climate destruction as well, with CO2 levels rising. Did international agreements work there? Nope, no flattening of the CO2 curve yet.

We are a fundamentally destructive species, one that cannot see long-term problems when there is short-term gain. The only mechanism we have on a global scale to decide what to do is capitalistic motivation.



> Let me be more clear: individual programmers are improving AI for their own intellectual amusement, while organizations use it for its disruptive power. Two different groups of people, with a bit of an intersection.

That's clearer, but I still take issue with it. You could say the same about any software project, or maybe even most work in general. As software engineers (I assume that's your profession too), we automate things that could have kept hundreds of people employed. AI isn't that different: as long as there is money spent on the problem, there will be people willing to work on it, especially at the forefront of technology.

---

I agree with your second point. That said, climate change is a much clearer case of destructive behavior, while also being more of a nuisance, a side effect of growth. AI has enormous potential and could, in the most optimistic outcomes, lead us to a utopia. Obviously, we all know that will not happen.

Also, the reason AI progress will not be halted: we cannot allow our adversaries to take the lead on this. It's really that simple.


> we automate things that could have kept hundreds of people employed

Well, I do agree with that. But I think that should also be re-examined, and we should automate less...


We can't unless the whole world agrees to. It's simply impractical to limit your own progress when you have foreign adversaries not doing the same.


Note that the adversaries don't have to be foreign. The same dynamics happen within domestic markets unless enforcement is ramped up to constrain the worst actors.


> We are a fundamentally destructive species, one that cannot see long-term problems when there is short-term gain. The only mechanism we have on a global scale to decide what to do is capitalistic motivation.

So if AI destroys us as a species and...

> There are clear signs of climate destruction as well, with CO2 levels rising. Did international agreements work there? Nope, no flattening of the CO2 curve yet.

Seems like we're in good shape?

If the end game is destroying ourselves, looks like our long term "problems" are solved.


I hate this framing of humanity as a “fundamentally destructive species.” It’s a meaningless sound that people make with zero serious thought given to it. Exactly which species are not destructive? All animal life must consume other life in order to survive, and exactly zero of them have any inhibition that would prevent them from maximizing their consumption and reproduction at the expense of all other life if they could. Humans are the only species that cares at all what happens to other forms of life and makes efforts, at our own expense, to limit or temper our impact. The only reason the world around you seems even remotely safe and comfortable is thousands of years of sustained human effort to make it so.


Humans are the only species to have created a highly rigid capitalistic system in which the only reward for the differential survival of ideas is short-term profit. Other species do have the consumptive tendency, but we are the only ones that found a ruthlessly efficient system for actualizing our consumption without bound.

That is why I used the word "destructive" instead of "having the tendency to destroy". All animals have that tendency, but only we have actualized it. Hence we are destructive to a degree unseen in other species.


The evolutionary record is piled high with the bones of extinct species, extinctions caused by changes in the environment or by other species. We are not unique; we are just one of many millions of species to find a way to rapidly outcompete others. The difference is that we often choose not to. So far we can’t even hold a candle to the humble cyanobacteria in terms of wanton destruction of their environment and all life on the planet when they evolved the ability to photosynthesize. Similar, though less dramatic, events have likely occurred with each major evolutionary adaptation that allowed a species to exploit something not available to others. For us it’s intelligence, but for others it was eyes, fins, teeth, legs, claws, etc., all leaving a path of destruction and allowing the possessors of such traits to multiply and differentiate until their once-unique attributes became the common necessities for survival.


That is true, and it is a good point. But the rate at which we are causing extinctions is much faster, and we do it consciously, causing harm to millions of species including ourselves. You do have a point, though, and I'll acknowledge it: we are not much different from a plague or a massive infection. Nevertheless, the fact that we do have intelligence means we have a moral obligation not to destroy other life, and NOT to destroy at the rate we are.


I mean, for CO2, the U.S. and the EU (which were once the largest emitters) have not only flattened the curve, but have in fact reduced emissions over the past 20 years:

https://ourworldindata.org/grapher/annual-co-emissions-by-re...

China has blown up emissions astronomically, though. To a lesser extent other Asian countries have as well.

I generally agree that international regulations controlling AI are unlikely to work, though, since it seems like it might be such a powerful and disruptive technology: if progress doesn't stall, it's effectively a single-shot Prisoner's Dilemma, and when you have 193 players, someone is going to defect.
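
To make that game-theoretic point concrete, here's a minimal Python sketch with an illustrative (assumed) payoff matrix; it just shows that in a one-shot Prisoner's Dilemma, defecting is the best reply no matter what the other side does:

    # One-shot Prisoner's Dilemma sketch. Payoff numbers are illustrative
    # assumptions, not from the thread. Higher is better for player A.
    PAYOFF_A = {
        ("cooperate", "cooperate"): 3,  # mutual restraint on AI development
        ("cooperate", "defect"):    0,  # A restrains, the rival races ahead
        ("defect",    "cooperate"): 5,  # A races ahead, the rival restrains
        ("defect",    "defect"):    1,  # everyone races; everyone worse off
    }

    for rival_move in ("cooperate", "defect"):
        best = max(("cooperate", "defect"),
                   key=lambda a_move: PAYOFF_A[(a_move, rival_move)])
        print(f"If the rival plays {rival_move!r}, A's best reply is {best!r}")

    # Prints "defect" both times: defection is a dominant strategy, so with
    # 193 independent players, expecting unanimous cooperation in a single
    # round is fragile; one defector is enough to unravel it.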

Personally though I think there are two possible outcomes:

1. Progress stalls, and it turns out getting from GPT-4-Turbo to better-than-human intelligence just doesn't pan out. LLMs are stuck as junior engineers for decades. If so, this is largely good for software engineers (and somewhat good for everyone, since it means we're more productive), but society doesn't change too much.

2. Progress doesn't stall, and we hit at least slightly-superhuman intelligence within the next decade. While this would obviously be a tough shift for most knowledge workers, especially depending on how quickly it happens, I also think it would likely bring incredible medical advances, along with advances in robotics that reduce the cost of physical labor: the price of goods drops enormously, and thanks to the medical advances we significantly increase either our lifespans, or at least the quality of our lives in old age, which seems quite positive. We'd need to figure out some sort of UBI system once labor costs drop enough, but I think most people will be in favor of that, and most stuff will just be really cheap at that point: ultimately just the cost of electricity (even "raw materials" are priced based on the cost of the labor to get them, and that labor would be... the cost of electricity to run the robotics).

There are probably some in-between scenarios, but TBH it's hard to see anything other than "stall" vs. "takeoff" as likely: either you never get past human intelligence (stall), or you do break through the wall, and then intelligence self-improves faster than before, up to some information-theoretic limit that I think sits far above where the average human operates (consider just the variation in intelligence between individual human beings!).

Takeoff could also result in some sort of doomsday scenario, but the current LLMs haven't shown the problems that the early doomers predicted, so I think the, like, humanity-enslaving or -destroying outcomes are probably just not gonna happen.





