The OP is a bit too long-winded for my taste, but this passage struck a chord with me:
> If AI companies are allowed to market AI systems that are essentially black boxes, they could become the ultimate ends-justify-the-means devices. Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything.
Corporations routinely get caught doing things that are illegal, but employees rarely suffer any legal consequences because it proves really hard or impossible to pin responsibility on any one person. The responsibility is diffuse; it falls on the system, not on the individuals that work for it. AI will make it more so.
The entire point of corporations is diffuse legal liability. We already see this in the law, no AI needed: it is why it is rare for an individual police officer, judge, or prosecutor to face any accountability. The system is designed to ensure no personal accountability via overlapping, nearly impenetrable liability shields (aka immunity).
The mantra then becomes "we have a systemic problem", and no one can agree on either the actual problem or the solution, when in reality we have a liability issue.
This will only grow with AI. But as long as people do not see that we had already obliterated the rule of law long before AI touched it, we have no hope of stopping the abuse by AI.
That lack of personal accountability would be no problem, though, if the corporate sanctions for lawbreaking had sufficient teeth. Corporations will always treat fines etc. as a cost of doing business; the cost should be high enough to deter lawbreaking.
While that is true, I have seen executives' attitudes change when a regulation held them personally liable, either economically or under threat of jail time. Those regulations are taken far, far more seriously than when the punishment is just a fine for the company, which then becomes a math problem: is the expected fine more than the cost of compliance?
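That "math problem" is just an expected-value comparison. A minimal sketch in Python, with every number invented purely for illustration:

```python
# Hypothetical numbers: the calculation an executive faces when the
# only penalty for breaking a rule is a corporate fine.
cost_of_compliance = 50_000_000  # annual cost of actually following the rule
fine_if_caught     = 20_000_000  # fine if the violation is prosecuted
p_caught_per_year  = 0.10        # chance of being caught in a given year

expected_annual_fine = p_caught_per_year * fine_if_caught  # 2,000,000

# If the expected fine is cheaper than compliance, lawbreaking "pays".
print("break the law" if expected_annual_fine < cost_of_compliance else "comply")
```

Personal liability breaks this calculation, because jail time has no price an executive can trade off against compliance costs.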
The police should absolutely not operate as anything like a corporation; as a fundamental pillar of democracy, they should be fully accountable for any wrongdoing. What individual officers can get away with in the US is just completely insane.
I don't think I claimed they should; the juxtaposition was with the parent comment about corporations, tying that into the wider concept of law.
Corporations are legal constructions that limit the liability not only of investors but also of the corporation's employees; that is their function in law.
Similarly, the law has other legal constructions that exempt or limit liability for other positions inside that system.
So it's not that police are operating as corporations; rather, the same legal construction that created corporations also created police forces.
The root of the problem for both is the same (more on that in a second).
> it is a fundamental pillar of democracy
I kinda disagree here. A fundamental pillar of democracy is equality under the law, and the root of the problem for police, corporations, politicians, etc. is the ability of the law to make some people "more equal" than others... i.e. not equal at all.
This is done by making police officers exempt from some laws, some regulations, some liability even when the city is not... and by making individuals exempt from regulations even when the corporation is not...
So you are correct that police should not be operated like a corporation, but at the same time we can look at the law, see where the two are treated similarly, and see why that is a problem.
I will refrain from my general negativity about democracy and just say that for what you call democracy to hold, everyone must be treated equally under the law... and I do mean everyone, and I do mean equal. No exemptions, no special privileges. Most political ideologies have a problem with true equality.
A lot of crimes require intent. Perhaps using AI could actually circumvent the intent requirement, thus making a lot of otherwise illegal things totally legal.
That is to say, it would not only make it hard to pin responsibility, but would actually make the act no longer a crime at all.
I think this is actually a good thing in the long run because “requiring intent” creates a clearly perverse incentive where the organization may be able to do illegal things so long as it can delude itself about them, for instance by keeping inaccurate books or allowing broken processes to remain broken because fixing them would shed light on something illegal.
Instead I think it would be better for organizations to be approximately as liable for their mistakes as for their crimes. In that case it doesn’t matter whether an employee does something illegal or an AI does something illegal on behalf of the company; the company will remain liable.
> I think this is actually a good thing in the long run because “requiring intent” creates a clearly perverse incentive where the organization may be able to do illegal things so long as it can delude itself about them
This doesn't really match how intent is handled in (at least American) law. There are reasonable-person tests. It is subpar, and so I agree with your second paragraph, but it isn't as cut and dried as the first paragraph suggests.
Intent/mens rea doesn't work like that unless the law explicitly specifies so. By default intent simply means intent to perform an act, as opposed to intent to cause harm. In some specific cases, like murder, intent to cause harm is an element, but that's the exception not the rule.
For example, if you intentionally take someone's property (say, you took their phone away because you genuinely thought it was causing them harm and you wanted to help them), you have the mens rea for theft.
However, if you unintentionally took someone's phone, say you mistook someone else's phone for your own, then you don't have the mens rea for theft.
We’ve had software making business decisions for decades now, so this really isn’t a new problem. When it comes to finance laws, intent doesn’t work like that; the onus is on the company and its employees to prevent the criminal activity. Lack of intent, or even knowledge, is not sufficient. You have a responsibility to implement processes that give you the knowledge you need.
I wrote the above before reading the article, wondering how much the author knows about corporate criminal and civil responsibility. I’ve worked for financial institutions, so I’ve been through the training on this. It turns out the author is a graphic designer. Right. I completely understand and appreciate the problems with generative AI; those are points well taken.
I mean, I’ve got nothing against graphic designers, and I’m not saying there are no risks with AI. There are many. But the risk assessment, particularly in finance and likely in other business areas, is based on a fundamental misunderstanding of the way regulations on this already work today.
When a company does something illegal, the employees are also punished in a "diffuse" manner, as a result of the economic sanctions. When a big company has to pay an X million fine, that is X million less for paying employees, so fewer raises, maybe layoffs, etc... People who invested in the unlawful company will also lose money.
When it is not just a money matter (like when people may die), we usually designate one person and say something like "if people die, you go to jail; make sure it doesn't happen". It is then up to that person to do what is necessary, including punishing employees who do not follow the rules.
I know some people who never pay for parking, public transport, or tolls, because in their case, the fine for not paying times the chance of getting caught is less than the ticket price.
On the contrary, the point is to correct for the fact that a flat-fee fine is a complete non-issue if your income is high enough. There have been times in my life when an accidental or unexpected $50 fine would have impacted my ability to pay rent or other essential bills that month. But now I'm not even sure I'd bother going to court to contest something that small, because I just wouldn't care enough to spend the time or emotional investment.
If a person's income is low enough that (probability of being fined) × (fine as a percentage of income) × (income) is less than the price of parking, then it's economically viable for them not to pay for parking and risk the fine.
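A minimal sketch of that break-even point, with all rates and incomes invented purely for illustration:

```python
# Hypothetical income-proportional fine: above some income level,
# skipping the parking fee stops being the cheaper option.
parking_price   = 5.00    # price of paying for parking
p_fined         = 0.05    # chance of being fined per unpaid parking
fine_pct_income = 0.001   # fine set at 0.1% of income

def expected_fine(income):
    return p_fined * fine_pct_income * income

for income in (30_000, 100_000, 1_000_000):
    skip_pays = expected_fine(income) < parking_price
    print(f"income {income}: expected fine {expected_fine(income):.2f}, "
          f"cheaper to skip paying: {skip_pays}")
# income 30000: expected fine 1.50, cheaper to skip paying: True
# income 100000: expected fine 5.00, cheaper to skip paying: False
# income 1000000: expected fine 50.00, cheaper to skip paying: False
```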
First, AI being a black box isn’t a legal shield. See Tesla Autopilot.
Second, corporations and employees have a uniquely specific legal relationship, but employees can be held criminally responsible for their actions if those actions are intentionally illegal. Often a corporation's actions are illegal in the aggregate even though no one person did anything illegal.
Eh, we’ve had black boxes for a while. It’s just a non-human “fall guy.”
The law deals with this with outcome tests: if the outcome would have been illegal had a person done it, then the person using the black box is liable.
E.g., if you hire on non-skill-based criteria, and the outcome is not similar to the racial distribution of the applicant pool, you're liable for discriminatory hiring. This is a common problem with using personality-based hiring assessments.
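One common statistical screen for that kind of outcome test is the EEOC's "four-fifths" rule: if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. A minimal sketch, with hypothetical group names and counts:

```python
# Four-fifths (80%) adverse-impact check; all counts are made up.
applicants = {"group_a": 200, "group_b": 100}
hired      = {"group_a": 40,  "group_b": 8}

rates   = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    if impact_ratio < 0.8:  # below four-fifths of the best group's rate
        print(f"{group}: selection rate {rate:.0%} vs best {highest:.0%}, "
              f"ratio {impact_ratio:.2f} -> flags adverse impact")
```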
One of the biggest problems with the concept of corporate personhood is that if a corporation commits a crime you can't just throw it in jail like a normal person.
At some point perhaps we should just consider the "death penalty" for corporations that break the law.
That or we start holding executives and/or shareholders accountable for the actions that happen in organisations they're supposed to control.