The Role of In-Group Bias and Balanced Data: A Comparison of Human and Machine Recidivism Risk Predictions

Authors:

Arpita Biswas (Indian Institute of Science, Bangalore, India)
Marta Kolczynska (Institute of Political Studies of the Polish Academy of Sciences, Warsaw, Poland)
Saana Rantanen (University of Turku, Department of Social Sciences, Turku, Finland)
Polina Rozenshtein (Institute of Data Science, National University of Singapore, Singapore)

DOI: https://doi.org/10.1145/3378393.3402507

Session: 3.4. AI and social impact

Abstract: Fairness and bias in automated decision-making gain importance as algorithms become more prevalent in different areas of social life. This paper contributes to the discussion of algorithmic fairness with a crowdsourced vignette survey on recidivism risk assessment, which we compare to previous studies on this topic and to the predictions of an automated recidivism risk tool. We use the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and the Broward County dataset of pre-trial defendants as a data source and for comparability with the earlier analysis. In our survey, each respondent assessed recidivism risk for a set of vignettes describing real defendants, where each set was balanced with regard to the defendants’ race and reoffender status. The survey ensured a 50:50 ratio of black and white respondents. We found that predictions in our survey—while less accurate—were considerably more fair in terms of equalized odds than those in previous surveys. We attribute this to differences in survey design: using a balanced set of vignettes and not providing feedback after responses to each vignette. We also analyzed the performance and fairness of predictions by race of respondent and defendant. We found that both white and black respondents tend to favor defendants of their own race, but the magnitude of the effect is relatively small. In addition to the survey, we train two statistical models, one on balanced data and the other on unbalanced data. We observe that the model trained on balanced data is substantially more fair and exhibits less in-group bias.
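The fairness criterion used in the abstract, equalized odds, asks that true-positive and false-positive rates be (approximately) equal across demographic groups. The sketch below is an illustrative implementation of that comparison (it is not the paper's code; function names and the toy data are our own), computing the largest rate disparity between two groups from binary reoffense labels and binary risk predictions:

```python
def rates(y_true, y_pred):
    """Return (true positive rate, false positive rate) for binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest absolute difference in TPR or FPR between groups 'a' and 'b'.

    A gap of 0 means the predictions satisfy equalized odds exactly.
    """
    a = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == "a"]
    b = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == "b"]
    tpr_a, fpr_a = rates([t for t, _ in a], [p for _, p in a])
    tpr_b, fpr_b = rates([t for t, _ in b], [p for _, p in b])
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# Toy example: group "a" is predicted perfectly, group "b" is not.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(equalized_odds_gap(y_true, y_pred, group))  # → 0.5
```

Here group "a" has TPR 1.0 and FPR 0.0, while group "b" has TPR 0.5 and FPR 0.5, so the equalized-odds gap is 0.5; a balanced design, as in the paper's survey, aims to shrink such gaps.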