Research

My research critically examines the social dimensions of AI/ML systems, with a focus on the algorithms promoted as part of 'evidence-based' reforms in the US criminal legal system. My dissertation research centered on recidivism risk assessment instruments, which estimate an individual's risk of future reconviction based on past data and are used to inform high-stakes decisions. I have also studied predictive policing tools and other carceral technologies.

Below are summaries of work from my postdoctoral research at Harvard, from my PhD dissertation, "Carceral Machines: Algorithmic Risk Assessment and the Reshaping of Crime and Punishment," and from my earlier publications in computational social science.

The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool (with Marta Ziosi)

This paper presents a critical, qualitative study of the social role of algorithmic bias in the context of the Chicago crime prediction algorithm, a predictive policing tool that forecasts when and where in the city crime is most likely to occur. Through interviews with 18 Chicago-area community organizations, academic researchers, and public sector actors, we show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias, strategically using it as evidence to advance criminal justice interventions that align with their positionality and political ends. Drawing inspiration from Catherine D'Ignazio's taxonomy of "refusing and using" data, we find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation; to reject algorithm-based policing interventions; to reframe crime as a structural rather than interpersonal problem; to reveal data on authority figures in an effort to subvert their power; to repair and heal families and communities; and, in the case of more powerful actors, to reaffirm their own authority or existing power structures. We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence, showing that they require different (and sometimes conflicting) values about policing and AI. This divergence reflects long-standing tensions in the criminal justice reform landscape between the values of liberation and healing often centered by system-impacted communities and the values of surveillance and deterrence often instantiated in data-driven reform measures. We advocate for centering the interests and experiential knowledge of communities impacted by incarceration to ensure that evidence of algorithmic bias can serve as a device to challenge the status quo.

Ziosi, M.*, & Pruss, D.* (2024). Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. Forthcoming in Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). [paper] [preprint] (*Both authors contributed equally.)

Featured on the Computer Says Maybe podcast, hosted by Alix Dunn. "What the FAccT? Evidence of bias. Now what?" July 12, 2024.

Ghosting the Machine: Judicial Resistance to a Risk Assessment Instrument

Recidivism risk assessment instruments are presented as an 'evidence-based' strategy for criminal legal reform: a way of increasing consistency in sentencing, replacing cash bail, and reducing mass incarceration. In practice, however, AI-centric reforms can simply add another layer to the sluggish, labyrinthine machinery of bureaucratic systems and are met with internal resistance. Through an interview-based study of 23 criminal judges and other criminal legal bureaucrats in Pennsylvania, conducted with input and guidance from the Coalition to Abolish Death by Incarceration, I find that judges overwhelmingly ignore a recently implemented sentence risk assessment instrument. I argue that this algorithm aversion cannot be accounted for by individuals' distrust of the tools or automation anxieties, per the explanations given by existing scholarship. Rather, the instrument's non-use is the result of an interplay between three organizational factors: county-level norms about pre-sentence investigation reports; alterations made to the instrument by the Pennsylvania Sentencing Commission in response to years of public and internal resistance; and problems with how information is disseminated to judges. These findings shed new light on the important role of organizational influences on professional resistance to algorithms, which helps explain why algorithm-centric reforms can fail to have their desired effect. This study also supports an empirically informed argument for the abolition of risk assessment instruments: they are resource-intensive and have not demonstrated positive on-the-ground impacts.

Pruss, D. (2023). Ghosting the Machine: Judicial Resistance to a Risk Assessment Instrument. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), (pp. 312-323). [paper] [preprint]

Shortened versions of the paper were featured on Data & Society's Points blog (2023) and by the Montreal AI Ethics Institute (2023).

This project was awarded a Horowitz Foundation for Social Policy Grant and a University of Pittsburgh Year of Data & Society grant.

Value-Laden Science and the Jurisprudence of Risk Assessment Instruments

In philosophy of science, the value-ladenness of technology is typically framed around epistemic risk – that is, the relative costs of different kinds of errors in knowledge production. In the context of AI, this is subsumed under the category of algorithmic bias. I examine another sense of value-ladenness: algorithmic methods are not only themselves value-laden but also introduce values into how we reason about their domain of application. I call this phenomenon 'domain distortion'. Using insights from jurisprudence, I show that the use of recidivism risk assessment instruments requires implicit normative commitments that can worm their way into how we reason about the law, providing a distinctive avenue for social values to enter the legal process. Specifically, the use of risk assessment instruments requires a commitment to a version of legal formalism, and it blurs the distinction between liability assessment and sentencing, which presupposes a consequentialist position on the purposes of criminal punishment and distorts how the domain of criminal punishment is conceived.

Pruss, D. (2021). Mechanical Jurisprudence and Domain Distortion: How Predictive Algorithms Warp the Law. Philosophy of Science, 88 (5), 1101-1112. [paper] [preprint]

This paper won the Mary B. Hesse Graduate Student Essay Award, awarded by the Philosophy of Science Association to the best single-authored paper submitted by a graduate student.

Crime Prediction in the Soviet Union: History of Soviet Legal Cybernetics

[Image: Kudriavtsev, V. N., & Eisman, A. A. (1964). "Kibernetika v Bor'be s Prestupnost'iu" [Cybernetics in the Fight Against Crime]. Photographed in the Russian State Library, 2018.]

From the time Joseph Stalin took over the Soviet Union in the 1930s to his death in 1953, Stalinist ideology permeated every part of Soviet life, including science. In the 1960s, the field of cybernetics, previously derided as a Western pseudoscience, rose to prominence and was applied in many disciplines that sought to ground themselves in mathematics and thereby purge themselves of Stalinist ideology. This paper focuses on 'legal cybernetics': the application of the mathematical apparatus of cybernetics by Soviet criminologists as part of an attempt to shed the label of harmful pseudoscience that criminology had acquired during the Stalinist era. Using historical material accessed in the archives of the Russian State Library in Moscow in 2018, I argue that while cybernetics was an effective rhetorical device for elevating the scientific status of criminology, it also served to reinforce and obscure existing ideological biases in the field. As an illustration, I focus on Vladimir Nikolaevich Kudriavtsev's applications of cybernetics to the study of the causes of crime and the "objective side of crime." I show that the exclusion of economic causal variables in his cybernetic models of crime served to reinforce long-standing dogma in Soviet criminology.

Pruss, D. (2023). Mathematizing Crime and Punishment: Cybernetics, Criminology, and Objectivity in the Post-Stalin Soviet Union (dissertation chapter).

The archival research for this project was generously funded by the Wesley C. Salmon Fund. 

The Limits of Algorithmic Fairness Audits

Proponents of risk assessment tools tend to emphasize their objectivity and superiority to human judgment, while critics tend to emphasize the tools' racially biased predictions. Accordingly, audits of risk assessment instruments focus on technical benchmarks of accuracy and fairness. In this paper, I sketch the bounds on what technical audits like these are (and are not) able to demonstrate about the bias and impacts of algorithmic systems. I focus on the formal fairness definitions used in the field of fair machine learning, also known as algorithmic fairness. Through an analysis of standard statistical and causal measures of fairness, I argue that the methodology of algorithmic fairness reproduces the shortcomings of mechanical objectivity – the minimization of human bias via strict rule-based protocols – but on a meta-level. Much as mechanical objectivity is intended to remove individual or idiosyncratic (human) bias through the use of a mechanical procedure (such as an algorithm), meta-mechanical objectivity is intended to remove (algorithmic) bias through conformity to mechanical fairness rules. I show that the range of criticisms of algorithmic fairness approaches can be helpfully understood through this analogy. I illustrate the limits of technical audits that use these measures through an analysis of Carnegie Mellon University's audit of Pennsylvania's recently implemented Sentence Risk Assessment Instrument. My findings affirm the urgency of adopting a participatory, human-centered model of algorithmic auditing.
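
To make the flavor of these mechanical fairness rules concrete, here is a minimal sketch in Python (illustrative only; this is not the CMU audit's code, and all data shown is invented) of two standard group-fairness checks:

    # Minimal sketch: two standard group-fairness rules of the kind
    # analyzed in the paper, computed on hypothetical risk predictions.
    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Largest between-group difference in rates of 'high-risk' labels.
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    def false_positive_rate_gap(y_true, y_pred, group):
        # Largest between-group difference in false positive rates,
        # i.e., non-recidivists wrongly labeled high-risk.
        fprs = []
        for g in np.unique(group):
            mask = (group == g) & (y_true == 0)
            fprs.append(y_pred[mask].mean())
        return max(fprs) - min(fprs)

    # Hypothetical data: 1 = high-risk (predicted or observed), 0 = not.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([0, 1, 1, 1, 1, 1, 0, 0])
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    print(demographic_parity_gap(y_pred, group))           # 0.25
    print(false_positive_rate_gap(y_true, y_pred, group))  # ~0.17

Each check is a strict, rule-based protocol: an instrument 'passes' if the gap falls below some threshold. Conformity to such rules cannot, by itself, settle which rule ought to apply, and satisfying one measure can preclude satisfying another.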

Pruss, D. (2023). Meta-Mechanical Objectivity and the Limits of Algorithmic Fairness Audits (dissertation chapter).

Zika Discourse in the Americas: A Multilingual Topic Analysis of Twitter

My co-authors and I examined Twitter discussion surrounding the 2015 outbreak of Zika, a virus that most often causes mild illness but has been associated with serious birth defects and neurological syndromes. We procured and analyzed a corpus of 3.9 million tweets mentioning Zika geolocated to North and South America, where the virus was most prevalent. Using a multilingual topic model, a machine learning method, we automatically identified and extracted the key topics of discussion across the dataset in English, Spanish, and Portuguese. We examined the variation in Twitter activity across time and location, finding that rises in activity tended to follow major events and that geographic rates of Zika-related discussion were moderately correlated with Zika incidence (ρ = .398).
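
As a minimal illustration of the correlation analysis (a sketch only: Spearman's rank correlation is assumed for the ρ reported above, and the per-region values below are invented, not our study's data):

    # Minimal sketch: rank correlation between per-region rates of
    # Zika-related tweets and reported Zika incidence (invented values).
    from scipy.stats import spearmanr

    tweet_rate = [12.1, 4.3, 30.5, 8.8, 2.0, 15.2]  # tweets per 100k residents
    incidence = [40.0, 9.5, 55.1, 30.2, 1.1, 12.7]  # cases per 100k residents

    rho, p_value = spearmanr(tweet_rate, incidence)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")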

Pruss, D., Fujinuma, Y., Daughton, A. R., Paul, M. J., Arnot, B., Szafir, D. A., & Boyd-Graber, J. (2019). Zika Discourse in the Americas: A Multilingual Topic Analysis of Twitter. PLOS ONE, 14(5), e0216922. [paper]

This work was funded by a National Science Foundation Graduate Research Fellowship.