
> As an AI doomer, it would actually be pretty great if we could get this stuff locked away behind costly APIs and censorship.

That is literally the doom scenario for me: rich people get unlimited access to spam and misinformation tools while the lower class gets fucked.



Agreed. A single company controlling AGI could become overwhelmingly dominant, and it might start cutting humans out of the loop (think automating everything everywhere). The thing we should watch is whether our civilization as a whole is maximizing for the meaning and wellbeing of (sentient) beings, or just concentrating power and generating profit. We need to stay wary and vigilant of megacorporations (and of corporations in general).

See also: https://www.lesswrong.com/posts/zdKrgxwhE5pTiDpDm/practical-...


A single company running AGI would suggest that something built by humans could control an AGI. That would actually be a great victory compared to the status quo. Then we'd just need to convince the CEO of that company or nationalize it. Right now, nothing built by humans can reliably control even the weak AI that we have.


All of this doomering feels to me like it's missing a key piece of reflection: it operates under the assumption that we're not already on track to destroy ourselves, with or without AGI.

We have proliferated a nuclear arsenal capable of wiping out all life on Earth.

One of the countries with such an arsenal is currently at war, and the last time powers of this nature were in such a territorial conflict, things went very poorly.

Our institutions have become pathological in their pursuit of power and profit, to the point where the environment, other people, and the truth itself can all go get fucked so long as some gajillionaire can buy a new yacht.

The planet's on a lot more fire than it used to be.

Police (the protect-and-serve kind) now, as a matter of course, own mine-resistant armored personnel carriers. This is not likely to cause the apocalypse, but it's not a great indicator that we're okay.

Maybe it's time for us to hand off the reins.


That we're on track to maybe destroy ourselves is not a good reason to destroy ourselves harder.


Not exactly what I meant; there is a nonzero chance that an AGI given authority over humanity would run it better. Granted, a flipped coin would run it better, but that's kinda the size of it.


Right, and if we explicitly aimed for building a good AGI we could maybe get that chance higher than small.


For smaller values of doom. The one he's talking about is an unaligned AGI doing to humans what humans did to the Xerces blue butterfly.


LLMs will never be AGI


I see only two outcomes at this point: LLMs evolve into AGI, or they evolve into something perceptually indistinguishable from AGI. Either way the result is the same, and we're just arguing semantics.


Explain how a language model can “evolve” into AGI.


It's like saying an 8086 will never be able to render photorealistic graphics in real time. LLMs fuel the investment in technology and research that will likely lead there.



