Below are summaries of published and in-progress manuscripts from my PhD dissertation “Carceral Machines: Algorithmic Risk Assessment and the Reshaping of Crime and Punishment,” as well as my earlier publications in computational social science.
Ghosting the Machine: Judicial Resistance to a Risk Assessment Instrument
This project was awarded a Horowitz Foundation for Social Policy Grant and a University of Pittsburgh Year of Data & Society grant.
Recidivism risk assessment instruments are presented as an 'evidence-based' strategy for criminal legal reform – a way of increasing consistency in sentencing, replacing cash bail, and reducing mass incarceration. In practice, however, AI-centric reforms can simply add another layer to the sluggish, labyrinthine machinery of bureaucratic systems and are met with internal resistance. Through an interview-based study of 23 criminal judges and other criminal legal bureaucrats in Pennsylvania, with input and guidance from the Coalition to Abolish Death by Incarceration, I find that judges overwhelmingly ignore a recently implemented sentence risk assessment instrument. I argue that this algorithm aversion cannot be accounted for by individuals' distrust of the tools or automation anxieties, per the explanations given by existing scholarship. Rather, the instrument's non-use is the result of an interplay between three organizational factors: county-level norms about pre-sentence investigation reports; alterations made to the instrument by the Pennsylvania Sentencing Commission in response to years of public and internal resistance; and problems with how information is disseminated to judges. These findings shed new light on the important role of organizational influences on professional resistance to algorithms, which helps explain why algorithm-centric reforms can fail to have their desired effect. This study also supports an empirically informed argument for the abolition of risk assessment instruments: they are resource-intensive and have not demonstrated positive on-the-ground impacts.
Pruss, D. (2023). Ghosting the Machine: Judicial Resistance to a Risk Assessment Instrument. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), (pp. 312-323). [preprint]
Short version featured on Data & Society's Points blog (2023): "How Recidivism Risk Assessments Impede Justice."
Research summary featured by the Montreal AI Ethics Institute (2023).
Values in Science and the Jurisprudence of Risk Assessment Instruments
This paper is published in Philosophy of Science and won the Mary B. Hesse Graduate Student Essay Award, awarded by the Philosophy of Science Association to the best single-authored paper submitted by a graduate student.
In philosophy of science, the value-ladenness of technology is typically framed around epistemic risk – that is, the relative costs of different kinds of errors in knowledge production. In the context of AI/ML, this is subsumed under the category of algorithmic bias. I examine another sense of value-ladenness: algorithmic methods are not only themselves value-laden but also introduce value into how we reason about their domain of application. I call this phenomenon 'domain distortion'. Using insights from jurisprudence, I show that the use of recidivism risk assessment instruments requires implicit normative commitments that can worm their way into how we reason about the law, providing a distinctive avenue for social values to enter the legal process. Specifically, the use of risk assessment instruments requires a commitment to a version of legal formalism, and it blurs the distinction between liability assessment and sentencing; this blurring presupposes a consequentialist position on the purposes of criminal punishment and distorts how the domain of criminal punishment is conceived.
The Limits of Algorithmic Fairness Audits
Proponents of risk assessment tools tend to emphasize their objectivity and superiority to human judgment, while critics tend to emphasize the tools' racially biased predictions. Accordingly, audits of risk assessment instruments focus on technical benchmarks of accuracy and fairness. In this paper, I sketch the bounds on what technical audits like these are (and are not) able to demonstrate about the bias and impacts of algorithmic systems. I focus on the formal fairness definitions used in the field of fair machine learning, also known as algorithmic fairness. Through an analysis of standard statistical and causal measures of fairness, I argue that the methodology of algorithmic fairness reproduces the shortcomings of mechanical objectivity – the minimization of human bias via strict rule-based protocols – but on a meta-level. Much like mechanical objectivity is intended to remove individual or idiosyncratic (human) bias through the use of a mechanical procedure (such as an algorithm), meta-mechanical objectivity is intended to remove (algorithmic) bias through conformity to mechanical fairness rules. I show that the range of criticisms of algorithmic fairness approaches can be helpfully understood through this analogy. I illustrate the limits of technical audits that use these measures through an analysis of Carnegie Mellon University's audit of a recently implemented recidivism risk assessment instrument, Pennsylvania's Sentence Risk Assessment Instrument.
Pruss, D. (2023). Meta-Mechanical Objectivity and the Limits of Algorithmic Fairness Audits (manuscript). Please contact me for a chapter draft.
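To make the "mechanical fairness rules" discussed above concrete, here is a minimal illustrative sketch (not the CMU audit itself, and using entirely hypothetical data) of two standard group fairness measures from the fair machine learning literature: the demographic parity gap (difference in positive prediction rates between groups) and the false positive rate gap (one component of equalized odds).

```python
# Two standard group fairness metrics, computed on hypothetical
# binary predictions for two groups A and B. Purely illustrative.

def positive_rate(y_pred):
    """Fraction of instances predicted positive (e.g., 'high risk')."""
    return sum(y_pred) / len(y_pred)

def demographic_parity_gap(y_pred_a, y_pred_b):
    """Absolute difference in positive prediction rates between groups."""
    return abs(positive_rate(y_pred_a) - positive_rate(y_pred_b))

def false_positive_rate(y_true, y_pred):
    """Among true negatives (e.g., non-recidivists), the fraction
    incorrectly predicted positive."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives)

def fpr_gap(y_true_a, y_pred_a, y_true_b, y_pred_b):
    """Absolute difference in false positive rates between groups
    (one half of the equalized odds criterion)."""
    return abs(false_positive_rate(y_true_a, y_pred_a)
               - false_positive_rate(y_true_b, y_pred_b))

# Hypothetical outcomes (y_true) and predictions (y_pred) per group.
y_true_a = [0, 0, 1, 1, 0, 1]
y_pred_a = [1, 0, 1, 1, 0, 0]
y_true_b = [0, 0, 0, 1, 1, 1]
y_pred_b = [1, 0, 1, 1, 0, 1]

print(demographic_parity_gap(y_pred_a, y_pred_b))  # 1/6 ~ 0.167
print(fpr_gap(y_true_a, y_pred_a, y_true_b, y_pred_b))  # 1/3 ~ 0.333
```

Note that the two rules can disagree about which group is disadvantaged and cannot in general be satisfied simultaneously; that such conflicts are adjudicated by further mechanical rules is part of what the meta-mechanical objectivity analogy is meant to capture.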
Crime Prediction in the Soviet Union: History of Soviet Legal Cybernetics
Kudriavtsev, V. N. and Eisman, A. A. (1964). "Kibernetika v Bor'be s Prestupnost'iu [Cybernetics in the Fight Against Crime]."
The archival research for this project was generously funded by the Wesley C. Salmon Fund.
From the time Joseph Stalin took over the Soviet Union in the 1930s to his death in 1953, Stalinist ideology permeated every part of Soviet life, including science. In the 1960s, the field of cybernetics, previously derided as a Western pseudoscience, became prominent and was applied in many disciplines that sought to ground themselves in mathematics and thereby purge themselves of Stalinist ideology. This paper focuses on 'legal cybernetics', the application of the mathematical apparatus of cybernetics by Soviet criminologists as part of an attempt to shed the label of harmful pseudoscience that criminology had acquired during the Stalinist era. Using historical material accessed in archives in the Russian State Library in Moscow in 2018, I argue that while cybernetics was an effective rhetorical device to elevate the scientific status of criminology, it also served to reinforce and obscure existing ideological biases in the field. As an illustration, I focus on Vladimir Nikolaevich Kudriavtsev’s applications of cybernetics to study the causes of crime and the "objective side of crime." I show that the exclusion of economic causal variables in his cybernetic models of crime served to reinforce long-standing dogma in Soviet criminology.
Pruss, D. (2023). Mathematizing Crime and Punishment: Cybernetics, Criminology, and Objectivity in the Post-Stalin Soviet Union (manuscript). Please contact me for a chapter draft.
Zika Discourse in the Americas: A Multilingual Topic Analysis of Twitter
This is a computational social science project I conducted and first-authored during my year as a PhD student in the Information Science Department at the University of Colorado Boulder, prior to transferring to the University of Pittsburgh. This work was funded by a National Science Foundation graduate research fellowship.
My co-authors and I examined Twitter discussion surrounding the 2015 outbreak of Zika, a virus that is most often mild but has been associated with serious birth defects and neurological syndromes. We procured and analyzed a corpus of 3.9 million tweets mentioning Zika geolocated to North and South America, where the virus was most prevalent. Using a multilingual topic model, an unsupervised machine learning method, we automatically identified and extracted the key topics of discussion across the dataset in English, Spanish, and Portuguese. We examined the variation in Twitter activity across time and location, finding that rises in activity tended to follow major events, and that geographic rates of Zika-related discussion were moderately correlated with Zika incidence (ρ = .398).
Pruss, D., Fujinuma, Y., Daughton, A. R., Paul, M. J., Arnot, B., Szafir, D. A., & Boyd-Graber, J. (2019). Zika Discourse in the Americas: A Multilingual Topic Analysis of Twitter. PLOS ONE, 14(5), e0216922. [paper]
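The geographic correlation reported above (ρ = .398) is a Spearman rank correlation between per-region rates of Zika-related discussion and Zika incidence. A minimal sketch of that computation, using hypothetical per-region values rather than the study's data:

```python
# Spearman's rho: Pearson correlation of the rank-transformed series.
# The data below are hypothetical, for illustration only.

def ranks(xs):
    """Rank values 1..n, assigning average ranks to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the run of tied values
        avg = (i + j) / 2 + 1  # average rank over the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation coefficient (rho)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-region Zika tweet rates and case incidence.
tweet_rate = [0.8, 2.1, 0.3, 1.5, 0.9, 3.0]
incidence = [12, 40, 5, 18, 30, 35]

print(round(spearman(tweet_rate, incidence), 3))  # 0.886
```

Because Spearman's coefficient operates on ranks rather than raw values, it captures any monotonic association between discussion and incidence without assuming linearity, which suits count data with very different scales.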