
Your argument is meaningless if you don't specify what threshold there should be for harm

Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols

Those are also all dual-use technologies that are objectively morally neutral



Laws should be about the outcome, not about processes that may lead to an outcome. It is already illegal in California to produce your own nuclear weapon. Instead of outlawing books because they enable research into building giant Gundam robots, just outlaw giant Gundam robots.


> Laws should be about the outcome, not about processes that may lead to an outcome

They have to be about both, because outcomes aren't predictable, and whether something is an intermediate or ultimate outcome isn't always clear. For example, we have a law requiring indicator use when changing lanes, not just one penalizing hitting someone while changing lanes.


But even this example is a ban on a specific action: changing lanes without using a legally defined indicator with a specific amount of display time.

The equivalent would be if the law simply said "don't change lanes unsafely" but didn't define it much beyond that, and left it to law enforcement and judges to decide, so that anytime someone changed lanes "unsafely" there would be significant, unpredictable legal risk.


> even this example is a ban on a specific action: changing lanes without using a legally defined indicator

This is directly analogous to requiring disclosures and certifications be filed with the state. Those are actions as much as hitting an indicator.

I haven’t read the proposed bill closely. But it seems to be a standard rulemaking bill.


Laws also should be possible (preferably easy) to implement. Why does the DMCA ban circumvention tools? Circumvention is already illegal, and surely it is piracy that should be outlawed, not the tools that enable it? The reason is that piracy tools are considerably easier to regulate than piracy itself.


The DMCA ban on circumvention has been both stunningly useless at discouraging piracy and effective at hurting normal users, including such glorious stupidity as being used to block third-party ink cartridges.

Also, absent the DMCA, circumvention isn't illegal.


> Laws should be about the outcome, not about processes that may lead to an outcome.

Some outcomes are pretty terrible. I think there are valid instances where we might also want to prevent precursor technology from being widely disseminated in order to prevent those outcomes.


There are certainly types of data that are already prohibited for export and dissemination. In this case, I would argue no new law is needed: the existing laws cover the export and dissemination of dual-use technologies. If an LLM becomes dual-use/export-restricted/etc. because it was trained on export-restricted/sensitive/etc. data, it is already illegal to disseminate it. Enforce the existing law, rather than use taxpayer money to ban and police private LLM training because this might happen.


> Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols

No, actually you don’t.

This is just a slippery-slope argument that suggests any of these examples are even remotely comparable to AI. There is room for nuance, and it's easy to spot the outlier among bioagent research, nuclear warheads, human cloning, and generative artificial intelligence.


Unfortunately, I think you will see this differently in a few years: that AI is not the outlier. (In the fortunate case where there were enough "close calls" that we're still around to reflect on this question.)

I hope I'm wrong


> Unfortunately, I think

Maybe wait until you're sure before holding guns to people's heads.


We're talking about the mildest reporting requirements in bill SB-1047.

(Admittedly, I'm getting a bit motte-and-bailey here, but still.)


Agree that artificial intelligence is an outlier. I think it is the technology with the greatest associated risk of all technologies humans have worked on.


That's because you don't understand it.


Please don’t.

It's unhelpful to the argument when you do this, and it makes our side look like a bunch of smug, self-entitled assholes.

The reality is that AI is disruptive but we don’t know how disruptive.

The parent post is clearly hyperbole; but let's push back on what is clearly nonsense (i.e. AI being more dangerous than nuclear weapons) in a logical manner, hm?

Understanding AI is not the issue here; the issue is that no one knows how disruptive it will eventually be; not me, not you, not them.

People are playing the risk mitigation game; but the point is that if you play it too hard you end up as a Luddite in a cave with no lights, because something might be dangerous about "electricity".


I disagree. Debating confers legitimacy, especially when one begins to debate a throwaway comment that doesn't even put an argument forward. The right answer is outright dismissal.


nicely put


Clearly human cloning is the outlier. After all, innumerable natural clones, i.e. identical twins, already exist.


> Those are also all dual-use technologies that are objectively morally neutral

nuclear warheads?


See the history of "peaceful nuclear explosions". The USA and USSR used a few nuclear warheads for civil engineering purposes. It seems crazy now.

https://en.wikipedia.org/wiki/Peaceful_nuclear_explosion


Most explosives are used for construction. That's where the Nobel Prize came from.


Part of my open source Mars terraforming plan


Back to the good old plowshare, yes? https://en.wikipedia.org/wiki/Project_Plowshare



