Hacker News

Yeah, this is like claiming tetraethyl lead in gasoline “hasn’t caused mass harm” in 1954.

There’s absolutely zero way to know that yet.



Except that lead's toxicity has been known for thousands of years.


Yes and the dangers of AI overlords have been known at least since the first Terminator movie /s.

Seriously though, people tend to dismiss things they know are bad if there is no immediate harm. Certainly they had no idea how incredibly pervasive the lead from gasoline would be, that it would be detectable at harmful levels in basically everyone. It is also hard to measure these harms. How much of the crime of the '70s and '80s could be attributed to it, for example?


Oh, we instantly did know how bad it could get. General Motors, Standard Oil, and DuPont knew for sure; they had industrial incidents and lied about them! The US Surgeon General was bought, and people with opposing opinions were sued.

It was instantly obvious to anyone that using leaded gas, especially in agriculture, would be deadly and toxic. The risk was ignored for profit, PR-handled, underplayed. Even the chronic effects were known!

We even had an early alternative of just adding more ethanol to the petrol. But no...

It took 50 years of scientists trying to convince the public how deadly this stuff was, with varied success. And 30 more years for the bulk of the phaseout.


Thanks, longer piece on that: https://www.bbc.com/news/business-40593353


> Oh we instantly did know how bad it could get.

Who are “we”?

The same could be said about AI: content farms, generating spam that bypasses spam filters, etc.

Some people knew; the majority of the public didn’t.


The public, which is not a single entity, also knew enough about lead. But they were not told the gas contained lead for the longest time. Ethyl was marketed as an antiknock additive, explicitly dropping “lead” from the name for this very reason.

Then there was a big chunk of time taken out by WW2, during which such considerations were pushed aside. The issue returned later, but then it was argued for years that there was no alternative.


Lead toxicity was limited because it wasn't being sprayed about in the fumes from every vehicle's exhaust, and because the number of those vehicles was low. It quickly got somewhat regulated away. (But not 100%.)

AI toxicity... We don't even know the risks. There's potentially no limit to the damage. We have some examples (engagement metrics creating echo chambers, automated classifiers blacklisting people for no reason, amplified harmful biases in conversational AI, fake news and deepfake generation) and some guesses for now. We do not even know how to begin to regulate it.

It's not even the same ballpark. The internet would be a closer comparison, and it's still the Wild West.


Same risk as hiring a troll farm now.


A troll farm is a limited number of people with limited capabilities. They can be tracked down and shut down, and their employers as well. The effects of a troll farm are relatively predictable, which is why they're used. It's somewhat rare that anything really unpredictable results from small-scale social engineering of this kind.


I must strongly disagree here about unpredictable results from social engineering. Vladimir Lenin was deliberately let through Germany because they figured he would just weaken the tsar. It is hard to get smaller in scale than one person, or more unpredictable in impact.


I would argue social media has at least 10 years of known toxicity. It is so addictive and harmful to young people that I don't get why it's not prohibited like gambling or smoking.


The problem with measuring the "harm" of something like AI vs e.g. lead is that with lead there are direct first order chemical harmful effects that are indisputable. With AI, any harm will get expressed via complex social dynamics, which involve so many other variables as well.


Which can be written off as laziness or just-world fallaciousness! Even consequences like "first order chemical harmful effects" are mostly ignored by the great-great-grandparent's "economists, lawyers etc"; the way AI will help concentrate money and power in the already overstuffed upper classes will barely be acknowledged.


Okay, so for leaded petrol substitute mortgage-backed securities. The person who invented those was not going “buahahah, fly my pretties, cause a global financial crisis”; they probably thought they were doing a _good thing, ultimately of benefit to society_. And yet…


And therefore we also fail as a society to effectively regulate various complex financial instruments before they actually cause an obvious problem.

My point is not that we should or should not regulate AI. It's just that I believe the causal link from certain types of AI usage to concrete harm to society is too complex for enough humans to band together and achieve a priori consensus on regulation.

