Machine Learning Bias propublica.org

Julia Angwin and Jeff Larson of ProPublica:

We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

After collecting and assessing unprecedented amounts of data, we're rapidly growing more comfortable letting computers make decisions on our behalf. We've never before, in the whole of human history, had access to this much information, and we now believe it can effectively tell us what to do. It's happening on a smaller scale with virtual assistants and bots. But while it's a little irritating when they get a command wrong, that's nothing compared to risk assessment scores, which can fuck up someone's life.
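To see how an algorithm can be wrong "at roughly the same rate but in very different ways" for two groups, here's a minimal sketch with made-up numbers (not ProPublica's data or methodology): both groups get the same overall error rate, but one group absorbs the false positives and the other the false negatives.

```python
from collections import Counter

# Each record: (group, predicted_high_risk, actually_reoffended).
# Counts are hypothetical, chosen only to illustrate the pattern.
records = (
    [("A", True, True)] * 30 + [("A", True, False)] * 40 +    # group A: many false positives
    [("A", False, True)] * 10 + [("A", False, False)] * 120 +
    [("B", True, True)] * 30 + [("B", True, False)] * 10 +
    [("B", False, True)] * 40 + [("B", False, False)] * 120   # group B: many false negatives
)

def rates(rows):
    c = Counter()
    for _, predicted, actual in rows:
        c[(predicted, actual)] += 1
    tp, fp = c[(True, True)], c[(True, False)]
    fn, tn = c[(False, True)], c[(False, False)]
    total = tp + fp + fn + tn
    return {
        "error rate": (fp + fn) / total,        # overall share of wrong calls
        "false positive rate": fp / (fp + tn),  # flagged high risk, did not re-offend
        "false negative rate": fn / (fn + tp),  # rated low risk, did re-offend
        "precision": tp / (tp + fp),            # of those flagged, share who re-offended
    }

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, {k: round(v, 2) for k, v in rates(rows).items()})
```

With these invented numbers, both groups come out at a 25 percent error rate overall, yet group A is wrongly flagged as high risk far more often, while group B's re-offenders are far more often rated low risk. Equal "accuracy" can hide exactly the kind of disparity ProPublica reported.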