Human Decisions and Machine Predictions
- URL
- QJE
- NBER (working paper)
BibTeX
@article{kleinberg2018human,
  title={Human decisions and machine predictions},
  author={Kleinberg, Jon and Lakkaraju, Himabindu and Leskovec, Jure and Ludwig, Jens and Mullainathan, Sendhil},
  journal={The Quarterly Journal of Economics},
  volume={133},
  number={1},
  pages={237--293},
  year={2018},
  publisher={Oxford University Press}
}
Abstract
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
My Notes
- Authors
- Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan
- Summary
- A case study demonstrating how machine learning can potentially improve judges’ decisions about which arrestees to release on bail.
The paper argues that judges’ bail decisions in New York could be improved with the aid of machine-learning predictions of the risk that a defendant will fail to appear for trial. The point is not just that the algorithm outperforms judges on some specific metric; rather, the paper argues that judges are mis-ranking defendants by risk. By identifying high-flight-risk defendants whom judges are likely to release, the algorithm could either reduce failures to appear while jailing the same number of people, or jail fewer people without increasing the total failure rate.
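A toy sketch of that re-ranking exercise, on synthetic data with hypothetical column names (this is not the authors’ actual pipeline, which uses gradient-boosted trees and quasi-random assignment of cases to judges to deal with the unobserved outcomes of detained defendants):

```python
import numpy as np
import pandas as pd

# Hypothetical data: one row per defendant a judge actually RELEASED, with the
# algorithm's predicted risk and the observed outcome (failure to appear).
# A real analysis must also handle defendants the judges detained, whose
# outcomes are never observed; the paper handles this with quasi-random
# assignment of cases to judges, which this toy sketch ignores.
rng = np.random.default_rng(0)
n = 10_000
predicted_risk = rng.beta(2, 5, n)                 # algorithm's score in [0, 1]
failed = rng.random(n) < predicted_risk            # synthetic observed outcome
released = pd.DataFrame({"predicted_risk": predicted_risk, "failed": failed})

baseline_failure = released["failed"].mean()

# Counterfactual rule: additionally detain the 10% of released defendants the
# algorithm scores as riskiest, and see how much observed failure disappears.
extra_detention_rate = 0.10
cutoff = released["predicted_risk"].quantile(1 - extra_detention_rate)
kept = released[released["predicted_risk"] < cutoff]

print(f"Failure rate among judges' releases:        {baseline_failure:.1%}")
print(f"Failure rate after detaining riskiest 10%:  {kept['failed'].mean():.1%}")
```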
The authors argue that their algorithm outperforms judges because its predictions are less noisy, or because (in an alternative training of the algorithm) it implicitly pools the combined wisdom of many judges.
Their algorithm was trained to predict the risk that a defendant would fail to appear, but judges also care about the harm caused by defendants who re-offend while out on bail. The authors address this criticism as well: the algorithm still outperforms judges at reducing predicted violent re-offenses (because re-offending is correlated with failure to appear in court). So the algorithm’s reduction in ‘noisy’ decision-making is enough to outweigh the potential misalignment of objectives.
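A minimal sketch of that objective-misalignment point, again on synthetic data with hypothetical feature names (the paper’s actual model is gradient-boosted trees on a much richer feature set): train only on the failure-to-appear label, then check that the defendants scored riskiest also show elevated rates of violent re-arrest, which holds whenever both outcomes share a common risk component.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the defendant data; feature names are hypothetical.
rng = np.random.default_rng(0)
n = 20_000
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "prior_arrests": rng.poisson(2, n),
    "charge_severity": rng.integers(1, 5, n),
})
# A single latent risk drives both outcomes, so failure to appear and violent
# re-arrest are positively correlated -- the mechanism the note relies on.
latent = 0.15 * X["prior_arrests"] + 0.3 * X["charge_severity"] - 0.02 * (X["age"] - 18)
p_fta = 1 / (1 + np.exp(-(latent - 1.5)))
failed_to_appear = rng.random(n) < p_fta          # training label
violent_rearrest = rng.random(n) < 0.3 * p_fta    # never shown to the model

# Train only on the failure-to-appear label, as in the note above.
model = GradientBoostingClassifier(random_state=0).fit(X, failed_to_appear)
risk = model.predict_proba(X)[:, 1]

# Even though violent re-arrest was never a training target, the defendants
# scored riskiest should also show elevated rates of violent re-arrest.
decile = pd.qcut(risk, 10, labels=False, duplicates="drop")
print(violent_rearrest.groupby(decile).mean())
```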