
Not necessarily. Interpretability of a system used to make decisions matters more in some contexts than others. For example, a black box AI used to make judicial decisions would completely remove transparency from a system that requires careful oversight. It seems to me that the intent of the legislation is to prevent such cases from arising, so that people can contest decisions that have a material impact on them, and so that organisations can provide traceable reasoning.


Is a black box AI system any less transparent than 12 jurors? It would seem that wherever the current system rests on human judgement, an AI system would be roughly as transparent (or nearly so).

It would seem accountability would only be higher in systems where humans were not part of the decision-making process.



