Algorithmic Fairness

In recent years, concern has grown about the use of algorithms to replace or aid human decision-making. A major topic in this area is whether algorithmic decision-making adheres to principles that humans associate with “fairness.” Perhaps the most widely cited example is the question of whether an autonomous car, when a crash cannot be avoided, should act in the interest of its occupants or of society at large.

My work in this area focuses instead on risk assessment, especially in the criminal justice system. Algorithms are increasingly used to aid judges and other officials in decisions about bail, sentencing, and parole. These algorithms typically predict how likely an individual is to be arrested in the future, and the predictions are thresholded to create “risk categories.” This practice is controversial in part because algorithms can “encode” sampling bias present in their training data. For example, because minority communities tend to be over-policed, minorities are thought to be more likely to be arrested for minor crimes. As a result, arrest records are a biased measurement of the actual variable of interest (criminality). Because historical arrest and criminal records are important covariates in criminal justice risk assessment, predictions from risk assessment algorithms have the potential to overestimate the criminality of minorities.
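To make the thresholding step concrete, the sketch below converts hypothetical predicted probabilities of re-arrest into discrete risk categories. The probabilities, cutoffs, and category labels are illustrative assumptions, not values used by any deployed tool.

```python
import numpy as np

# Hypothetical predicted probabilities of re-arrest from a fitted risk model
predicted_risk = np.array([0.05, 0.22, 0.41, 0.63, 0.87])

# Illustrative cutoffs (not any tool's actual thresholds) that discretize the
# continuous score into "risk categories"
cutoffs = np.array([0.3, 0.6])
labels = np.array(["low", "medium", "high"])

# np.digitize returns 0 for scores below 0.3, 1 for [0.3, 0.6), and 2 for >= 0.6
categories = labels[np.digitize(predicted_risk, cutoffs)]
print(categories)  # ['low' 'low' 'medium' 'high' 'high']
```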

In [1], we develop a method for attenuating or eliminating the dependence of predictions on legally “protected” variables, such as race or sex. Unlike previous work in this area, our method applies to any number of protected variables and to covariates on any measurement scale. We apply the method to a dataset related to the COMPAS risk assessment tool, which was described in detail in ProPublica’s high-profile analysis of racial bias in risk assessment. In more recent work [2], we design a similar method tailored to large-p, small-n settings, and suggest its broader application to removing the dependence of covariates on a group membership variable (such as the clinical facility at which a test is performed) in order to reduce generalization error.
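The following is a minimal sketch of the general idea of such an adjustment. It uses a simple linear residualization of each covariate on a protected variable, which is a deliberate simplification and not the procedure developed in [1] or [2]; the simulated data, variable names, and use of scikit-learn are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: z is a binary protected variable and X holds covariates
# (e.g., prior arrest counts) whose distribution shifts with z.
n = 500
z = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3)) + 0.8 * z[:, None]

# Simplified adjustment: regress each covariate on the protected variable and
# keep only the residuals, so the adjusted covariates are linearly
# uncorrelated with z. The method in [1] performs a fuller distributional
# adjustment; this is only a stand-in.
adjuster = LinearRegression().fit(z.reshape(-1, 1), X)
X_adjusted = X - adjuster.predict(z.reshape(-1, 1))

# Predictions built from X_adjusted no longer vary with z through the
# (linear) association that was removed.
print(np.corrcoef(z, X_adjusted[:, 0])[0, 1])  # approximately 0
```

On real data, the downstream risk model would then be trained on the adjusted covariates rather than the raw ones.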

[1] Johndrow, J.E. and Lum, K. (2018). An algorithm for removing sensitive information: application to race-independent recidivism prediction. Annals of Applied Statistics (to appear). arXiv preprint.

[2] Aliverti, E., Lum, K., Johndrow, J.E., and Dunson, D. Removing the influence of a group variable in high-dimensional predictive modelling. arXiv preprint.