Your argument is meaningless if you don't specify what threshold there should be for harm
Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols
Those are also all dual-use technologies that are objectively morally neutral
Laws should be about the outcome, not about processes that may lead to an outcome. It is already illegal in California to produce your own nuclear weapon. Instead of outlawing books because they allow research into building giant gundam robots, just outlaw giant gundam robots.
> Laws should be about the outcome, not about processes that may lead to an outcome
They have to be about both because outcomes aren’t predictable, and whether something is an intermediate or ultimate outcome isn’t always clear. We have a law requiring indicator use on lane change, not just hitting someone while lane changing, for example.
But even this example is a ban on a specific action: changing lanes without using a legally defined indicator with a specific amount of display time.
The equivalent would be if the law simply said "don't change lanes unsafely" without defining it much further, leaving it to law enforcement and judges to decide, so anytime someone changed lanes "unsafely" they would face highly unpredictable legal risk.
Laws also should be possible (preferably easy) to enforce. Why does the DMCA ban circumvention tools? Circumvention is already illegal, and it is piracy that should be outlawed, not the tools that enable it. The reason is that circumvention tools are considerably easier to regulate than piracy itself.
The DMCA ban on circumvention has been both stunningly useless at discouraging piracy and effective at hurting ordinary users, including such glorious stupidity as being used to block third-party ink cartridges.
> Laws should be about the outcome, not about processes that may lead to an outcome.
Some outcomes are terrible enough that I think there are valid cases where we might also want to prevent the precursor technology from being widely disseminated.
There are certainly types of data whose export and dissemination are already prohibited. In this case, I would argue no new law is needed: the existing laws already cover the export and dissemination of dual-use technologies. If an LLM becomes dual-use/export-restricted/etc. because it was trained on export-restricted or otherwise sensitive data, it is already illegal to disseminate it. Enforce the existing law rather than spend taxpayer money banning and policing private LLM training because this might happen.
> Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols
No, actually you don’t.
This is just a slippery-slope argument that pretends these examples are even remotely comparable to AI. There is room for nuance, and it's easy to spot the outlier among bioagent research, nuclear warheads, human cloning, and generative artificial intelligence.
Unfortunately, I think you will see this differently in a few years: that AI is not an outlier (in the fortunate case where there were enough "close calls" that we're still around to reflect on this question).
Agree that artificial intelligence is an outlier. I think it is the technology with the greatest associated risk of all technologies humans have worked on.
It’s unhelpful to the argument when you do this, and it makes our side look like a bunch of smug, self-entitled assholes.
The reality is that AI is disruptive but we don’t know how disruptive.
The parent post is clearly hyperbole; but let’s push back on what is clearly nonsense (i.e., AI being more dangerous than nuclear weapons) in a logical manner, hm?
Understanding AI is not the issue here; the issue is that no one knows how disruptive it will eventually be: not me, not you, not them.
People are playing the risk-mitigation game; but the point is that if you play it too hard, you end up as a Luddite in a cave with no lights because something might be dangerous about “electricity”.
I disagree. Debating gives legitimacy, especially when one begins to debate a throwaway comment that doesn't even put an argument forward. The right answer is outright dismissal.