Speaking of fair shakes, here is a wise word of caution about the emerging, expanding use of computer programs to evaluate people in the justice system, whether at bail hearings, sentencings, or elsewhere.
The author is a former software engineer at Facebook who’s now studying law at Harvard. Her point isn’t that we shouldn’t use or consult these programs, but that we should know what we’re getting into and proceed with caution. It’s troubling, for example, if we use programs that no one in the field fully understands (not judges, not lawyers, not probation officers) because the manufacturer won’t disclose a proprietary algorithm.
She says we turn to computers in part to control for our own biases, “[b]ut shifting the … responsibility to a computer doesn’t necessarily eliminate bias; it delegates and often compounds it.” That’s because these programs mimic the data we use to train them, so even the ones that accurately reflect our world will necessarily reflect our biases. Plus, they work on a feedback loop: unless they’re constantly retrained and corrected, they entrench those biases and drift even further from reality and fairness. So they don’t just parrot our biases; they amplify them. She saw this phenomenon time and again as a software engineer.
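For readers who want to see the mechanics, here is a minimal toy sketch of that feedback loop. All the numbers and the superlinear-discovery assumption are hypothetical, not drawn from the article: two neighborhoods have identical true crime rates, but the historical record over-counts arrests in one of them. A system that allocates patrols wherever past arrests were recorded, and that finds disproportionately more incidents where it patrols more, widens the gap every cycle.

```python
# Hypothetical numbers: two neighborhoods with identical true crime
# rates, but a biased historical record of arrests in "B".
true_rate = {"A": 0.30, "B": 0.30}     # ground truth: identical
arrests   = {"A": 100.0, "B": 120.0}   # skewed starting data

def share(counts):
    """Fraction of total recorded arrests attributed to each neighborhood."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

for year in range(10):
    patrols = share(arrests)  # the "model": patrol where past arrests were
    for hood in arrests:
        # Assumption: heavier patrolling surfaces disproportionately more
        # recorded incidents (exponent > 1). With exponent exactly 1.0 the
        # initial skew is merely perpetuated; above 1.0 it grows each year.
        arrests[hood] += 1000 * true_rate[hood] * patrols[hood] ** 1.2

# B's share of recorded arrests rises above its starting 54.5%,
# even though the underlying rates never differed.
print(round(share(arrests)["B"], 3))
```

The point of the toy is the contrast the author describes: the computer never invents the bias, it inherits the skewed record, and the retraining loop compounds it unless a human intervenes.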
She agrees that algorithms can work for good. They’ve reportedly helped New Jersey reduce its pretrial jail population, for example.
But let’s proceed with caution, she says:
“Computers may be intelligent, but they are not wise. Everything they know, we taught them, and we taught them our biases. They are not going to un-learn them without transparency and corrective action by humans.”