
"Virtue signalling"? Please. There are a lot of very smart experts on that signatory list who definitely don't need to, or care about, virtue signalling. Fine, ignore Musk's signature, but I don't think luminaries like Stuart Russell, Steve Wozniak, Jaan Tallinn or John Hopfield are doing this for "virtue signalling".

You can fairly argue that this will be ineffective, but a lot of experts in this field have real, "humanity ending" concerns about AI, and I think it's a bit of a cop out to say "Well, genie's out of the bottle, nothing we can do as we barrel towards an unknown and scary future." Even Sam Altman has been yelling about the need for AI regulation for a long time now.



> Even Sam Altman has been yelling about the need for AI regulation for a long time now.

That's the kind of regulation that makes it harder for any competition to show up.


So tired of seeing this line parroted everywhere without much thought given to what it actually means. Yes, regulation can add a burdensome layer, and regulatory capture can be a real thing.

But regulations for things like nuclear power plants, banks, insurance companies, elevator manufacturers, etc. are real because society recognizes the grave harm that happens when there are no additional checks on the system. Nobody says "Oh, all those big nuclear power plants just want regulations as a guard against competition." Certainly lots of crypto companies have said that about the banking system, and we all saw how that ended...


You can simultaneously believe in the need for regulation while being skeptical of those calling for it to entrench their own positions; look what happened with SBF.


There's a difference between holding both of those ideas at once and dismissing one idea just because the other is also true.

So fucking what if what's-his-face wants regulations for moats? It doesn't detract from the real need for regulation.

It's like letting a baby fall to its death because if the main villain gets his hands on it he'll get unlimited power.


Sorry, can you rephrase that? I’m not sure I understand the point you’re trying to make.


I agree that regulation can be good (and many times probably is), but the kind of regulation pushed by OpenAI will probably not be the good kind. There is just a conflict of interest here.

When the incumbents _oppose_ regulation, that's usually a much better sign.


Big nuclear power plants are not the ones behind regulations. Big oil and carbon-based power plants and others are the ones that lobby for nuclear power plant regulations.


Yeah you have little proof of this really, it’s just speculation…


"Even Sam Altman"? "Especially Sam Altman", you mean?

While regulations might slightly impact OpenAI's bottom line, they can ultimately prove advantageous for large corporations like them by addressing their primary concern: the threat of competition. By raising barriers to entry, regulatory measures would help solidify OpenAI's market position and maintain its dominance.


There are plenty of bigger "humanity ending" concerns on the table right now than AI, and we certainly aren't pausing anything for those.


Like what? Climate change? The EU just voted for a petrol and diesel car ban. Are we really single-threaded?


- Lack of representation in government means big companies fuck up the planet if it's profitable

- People are mostly incentivized to compete, not to cooperate

- Antibiotic resistance

- Clean water supply

- etc.


"Lack of representation in government means big companies run the world" - is precisely what we're trying to figure out here, no ?


Sorry, who? The Future of Life Institute?


We are not, but this AI drama is also the ultimate "whataboutism."

- What about if AI becomes AGI (whatever that actually means, it's not even clear)?

- Well, if that DID happen soon, which we can't actually know, well, what about if it tried to kill us all? (why? who the fuck knows, maybe it will chat us to death).

Meanwhile there is a very real certainty of catastrophic environmental damage that will decimate future generations, if it doesn't actually cause us to go extinct. And what do we get? People hand-wringing over this ultimate what-if, rather than signing every public statement document they can find to try to get an actual intervention on climate destruction.

I'm not talking about the "oh, in 10 years maybe we'll have more EVs" kind of intervention; more like, let's get every country in the world off oil and gas in 5 years, not just for EVs but for almost everything possible, and where that's not possible let's use carbon-neutral biofuel.


That ban takes effect in 2035. Maybe we can pause AI development in 2035?


We're so poorly multi-threaded that even addressing climate change has been horribly slow...


No, AI drives all the others in the long run. Others are speed bumps.


Plain, old-fashioned historicism. It was wrong 100 years ago, and it is still wrong today.


Climate change won't affect AI; it could just make things shit for a couple hundred years. AI could solve that. Nuclear war might impact AI, but probably only temporarily (assuming we survive), and a war isn't guaranteed. But AI affects everything humans read/watch/touch/influence. Forever. Including climate change and our odds of nuclear war. There's no way it doesn't, and once it starts there's no way we can stop it forever. Any narrower view is a failure of imagination. The outcome of AI is the outcome of humanity for the rest of our time in the universe.


There is no need for "whataboutism". There are plenty of very similar missives and warnings against, for example, the dangers of climate inaction, and I rarely see people claiming that the signatories of the latest IPCC report are "virtue signaling".


Climate change is not even close to humanity ending. At most it would wipe out a few coastal cities. And even that is unlikely, because those who scream 'climate change' the loudest have the most assets in prime coastal real estate. Humans will still be the apex predator of the planet even if there's a human-caused climate change catastrophe.

AI literally can end humanity, every single individual potentially, and could definitely replace humans as the apex predator of the planet. It is also consistently ranked as the most likely cause if humanity ends in the next 100 years. https://riskfrontiers.com/insights/ranking-of-potential-caus...

We should stop the climate change fear mongering. Yeah, we shouldn't burn fossil fuels as if it's consequence-free. But New York and Santa Monica beach should've been under water 20 years ago if the climate alarmists were correct. That's a far cry from pretending it's some number 1 priority. It shouldn't even be close. Letting climate distract us from things that will actually end us is the dumbest own goal possible for our species.


It's not just about sea level or temperature increase, it's about humanity screwing over all other life forms. For instance, we've lost about 50% of insects since 1970; how is this "fear mongering"? It's the no. 1 tragedy, by far, and it's currently happening, unlike hypothetical AI threats: https://www.businessinsider.com/insect-apocalypse-ecosystem-...


The sorts of studies that proclaim a loss of 50% of insects don't check out when looked at closely. As you might guess, counting insects is quite hard, doing so reliably over time is much harder still, and then assigning causality is harder yet again.


Could you please provide details/source? I'd be very happy to learn that this 50% figure is wrong :)


It's not about insects specifically, but this paper points out statistical problems in very similar claims about vertebrates:

https://www.sfu.ca/biology2/rEEding/pdfs/Leung_et_al_Cluster...

But it's a common theme. These claims get attention and journalists don't check, so they proliferate. Dig in to any given claim and you'll find they're all built on statistical quicksand.


Based on our current trajectory the apex predator will be an antibiotic-resistant bacterial strain. Probably Acinetobacter baumannii.


We have long entered the realm of theology here with people really wanting to believe in the omnipotence of a certain tool (possibly even while some other, simpler things destroy them).

What for example is Tallinn's medium- to long-term predictive track record on social issues? On technological development? Anyone can be concerned and have genuine reasons for concern, but that doesn't mean the outcomes materialize.


Where's the pause for self-driving cars? How many people have died from that relentless push versus ChatGPT? Very convenient and at the same time silly.


419 accidents involving self-driving (level 2 and 3), 18 deaths, 19 accidents with injury level unknown [0]. All deaths from level 2 vehicles. So being pessimistic, maybe 50 deaths from self-driving.

The people signing this are worried about AI that doesn't exist yet. No one died from nuclear weapons before they were invented.

[0]: https://www.slashgear.com/1202594/how-many-people-have-actua...


In other words, considering the annual overall traffic fatalities, they are very safe.


Do you have a story about how self-driving cars could lead to an x-risk?


I'm waiting for a convincing argument as to how LLMs and similar are an existential risk.

I'm all for pausing research on anything that seems to have any real chance of becoming an AGI or functioning in a way similar to one, but I don't see how even more advanced LLMs are going to get there. GPT-4 and beyond might put the teens writing propaganda posts in Moldova out of jobs, but the talk from some of the signatories about LLMs developing their own goals and planning how to achieve them seems nonsensical when you look at how they actually function under the hood.


I think I generally understand the transformer architecture. Now, "developing their own goals" maybe wouldn't make sense for LLMs alone, but "planning how to achieve [some goal]" seems somewhere between "it could be done by adding on a small harness" and "don't they, in a sense, already do that?".

Like, if you ask ChatGPT to come up with a plan for you for how to accomplish some task, I'm not saying it is like, great at doing this in general, but it can do this to some degree at least, and I don't see any clear limiting principle for "a transformer based model that produces text cannot do [X]" as far as planning-in-text goes.
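For what it's worth, here's a minimal sketch of what I mean by a "small harness": a loop that asks the model for a plan and then feeds each step back in. The call_llm helper is hypothetical (stand in whatever model API you like); the point is only that the "planning" is still just text generation conditioned on earlier text.

    # Minimal sketch of a plan-then-execute harness around an LLM.
    # call_llm is a hypothetical stand-in for whatever model API you use.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to your model of choice")

    def plan_and_execute(goal: str, max_steps: int = 5) -> list[str]:
        # Ask the model for a numbered plan, one step per line.
        plan_text = call_llm(
            f"List up to {max_steps} numbered steps to accomplish: {goal}"
        )
        steps = [line.strip() for line in plan_text.splitlines() if line.strip()]

        results: list[str] = []
        for step in steps[:max_steps]:
            # Feed each step, plus what has happened so far, back to the model.
            results.append(
                call_llm(f"Goal: {goal}\nSo far: {results}\nNow do: {step}")
            )
        return results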


[flagged]


Seriously, why do people do this? It's so useless and unhelpful.

Wozniak is just one of the people I mentioned, and as a tech luminary who is responsible for a lot of visionary tech that impacts our day-to-day, I think it makes sense to highlight his opinion, never mind that his name was sandwiched between some of the "founding fathers" of AI like Stuart Russell and John Hopfield.


Your post said very explicitly, "There are a lot of very smart experts on that signatory list" and then named Wozniak as an example of one of them. But Woz isn't an AI expert. It's entirely appropriate to point that out!





