Algorithms Were Supposed to Reduce Bias in Criminal Justice—Do They?
Algorithms were supposed to remake the American justice system. Championed as dispassionate, computer-driven calculations of risk, crime, and recidivism, they were deployed in everything from policing and bail to sentencing and parole, meant to smooth out the often unequal decisions made by fallible, biased humans.
But, so far, this hasn’t been the case.
“In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of Black and Indigenous and other politically marginalized people. But algorithms can discriminate,” says Ngozi Okidegbe, Boston University’s Moorman-Simon Interdisciplinary Career Development Associate Professor of Law and an assistant professor of computing and data sciences. She’s the first at the University to hold a dual appointment straddling data and the law, and her scholarship dives into this intersection, examining how the use of predictive technologies in the criminal justice system impacts racially marginalized communities.