I didn't mean you were forcing them to approve it.
I meant that, over time, your system will become right more and more often, not by virtue of genuinely understanding the situations, but because it has more training data and comes up with correlations that happen to be correct.
Once it's mostly-kinda-reliable, your users will, over time, stop paying attention to it and just start trusting it, even if they don't mean to, because that's how humans work.
And then it becomes a rubber stamp, and decisions start getting made that no human has actually thought through.