Bail reform: Public’s opinion on using pretrial risk assessment algorithms

Incorporating the public’s opinion into the decision-making process may promote the legitimacy of California’s new pretrial process and the use of risk assessment algorithms in the criminal justice system in general. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2020, Vol. 26, No. 1, 1–9

Public’s view of risk assessment algorithms and pretrial decision making

Authors

Nicholas Scurich, University of California at Irvine
Daniel A. Krauss, Claremont McKenna College

Abstract

Risk assessment algorithms are increasingly—and controversially— being used to inform whether criminal defendants are released or held in custody prior to their adjudications. A representative sample of Californians (n = 420)—the most recent state to consider eliminating cash bail and adopt an algorithmic approach to pretrial detention—was used to assess public knowledge and general support for the new law. The sample evidenced limited awareness of bail reform, was mixed in support of change to the existing system, and believed that an algorithm would augment rather than decrease racial and socioeconomic disparities in the criminal justice system. In terms of actually implementing a risk assessment algorithm for the purpose of pretrial decision making, it is ultimately a human decision maker who must apply a decision threshold and determine whether a given risk level is sufficient to occasion a particular course of action (e.g., deny pretrial release). The sample was also queried about their pretrial decision thresholds. The average respondent’s decision threshold for “low risk” (or pretrial release) was 33%, indicating a 33% or less likelihood of failing to reappear or committing a new crime was tolerable, and 60% for “high risk” (or confinement), indicating a likelihood of 60% or greater of failing to reappear or committing a new crime was acceptable to deny a defendant release. Incorporating the public’s values into the decision-making process is likely to promote the legitimacy of the use of risk assessment algorithms in the criminal justice system.

Keywords

criminal justice reform, risk assessment, algorithms, decision making, bail

Summary of the Research

“The problems associated with mass incarceration have recently led to numerous calls—and bipartisan support—for criminal justice reform. One approach that has been advanced to decrease mass incarceration while increasing consistency and transparency in decision making involves the use of risk assessment algorithms. Risk assessment algorithms employ a predetermined number of empirically validated risk factors, such as number of previous convictions, current offense, offender age, and so forth, to generate an estimate of the likelihood that a given defendant will misbehave in the future. This estimate can be used to inform a variety of legal decisions, such as whether or not to deny pretrial release for a criminal defendant, or whether to increase or decrease a term of incarceration after conviction.” (p. 1)
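To make those mechanics concrete, below is a minimal sketch of how such an instrument typically works: a handful of weighted risk factors are summed into a score, and the score is mapped to a likelihood estimate. Every factor name, weight, and calibration constant here is invented for illustration; this does not reproduce any actual validated instrument or the tool contemplated by SB 10.

```python
# A hypothetical, point-based pretrial risk score. All factors,
# weights, and calibration constants are illustrative placeholders.
import math

# Hypothetical integer weights per risk factor (illustrative only)
WEIGHTS = {
    "prior_convictions": 2,       # points per prior conviction, capped at 3
    "current_offense_violent": 3,
    "prior_failure_to_appear": 2,
    "under_25": 1,
}

def risk_score(defendant: dict) -> int:
    """Sum weighted risk factors into a raw point score."""
    score = min(defendant.get("prior_convictions", 0), 3) * WEIGHTS["prior_convictions"]
    if defendant.get("current_offense_violent"):
        score += WEIGHTS["current_offense_violent"]
    if defendant.get("prior_failure_to_appear"):
        score += WEIGHTS["prior_failure_to_appear"]
    if defendant.get("age", 99) < 25:
        score += WEIGHTS["under_25"]
    return score

def estimated_likelihood(score: int) -> float:
    """Map the raw score to a probability with a logistic curve.
    Real instruments calibrate this mapping on historical outcome
    data; the intercept and slope here are placeholders."""
    return 1.0 / (1.0 + math.exp(-(0.35 * score - 2.5)))

if __name__ == "__main__":
    defendant = {"prior_convictions": 2, "current_offense_violent": False,
                 "prior_failure_to_appear": True, "age": 22}
    p = estimated_likelihood(risk_score(defendant))
    print(f"Estimated likelihood of pretrial failure: {p:.0%}")
```

Note that the algorithm's output is only an estimate; as the authors emphasize throughout, a human decision maker must still decide what level of estimated risk justifies detention or release.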

“The general superiority of algorithmic approaches to decision-making is supported by a substantial corpus of empirical research. Yet, many scholars and commentators remain skeptical of these more mechanistic approaches for a variety of reasons, particularly when they are applied in the criminal justice system. General critiques of algorithmic approaches include distrust of their: (a) accuracy (i.e., algorithms simply cannot effectively predict the complexity of human behavior); (b) transparency in creation and design, the so-called “black box” problem (i.e., the weight given to each factor in the algorithm and how they are combined to render a decision may be proprietary or overly complicated and not subject to verifiable reexamination); and (c) implementation (e.g., decision makers may be more likely to deviate from the algorithm toward detention for an African American defendant and more likely to deviate toward release for a Caucasian one).” (p. 1)

“Another concern is that the use of risk assessment algorithms will indirectly exacerbate racial and socioeconomic disparities because several common risk factors (e.g., lack of employment stability or failure to have a two-parent household) are putative “proxies” for race and poverty.” (p. 1)

“Pretrial detention has become a flashpoint for both criminal justice debate and reform as jails across the nation are near capacity. […] Under the current regime, judges must first ascertain whether a defendant is even eligible for pretrial release. Defendants accused of certain crimes (e.g., aggravated murder) are typically ineligible for pretrial release. For those who are eligible, the bail system requires defendants to offer a monetary bond in order to ensure they will reappear in court. Defendants who cannot post this bond remain in custody even though they are theoretically eligible for release. Research indicates that this requirement—the posting of monetary bonds—is largely responsible for the disparate detention of minorities and the socioeconomically disadvantaged, and increased overcrowding in detention facilities.” (p. 2)

“The State of California—the largest state criminal justice system in the nation—is at the forefront of bail reform. To alleviate problems associated with pretrial detention, California’s Governor signed into law Senate Bill (SB) 10 (The California Money Bail Reform Act of 2017, effective October 1, 2019). SB 10 replaces the existing bail structure in California with a system that uses risk assessment algorithms to aid judges in determining which defendants should be released and which should remain in custody, and does not require the defendant to post a monetary bond. Specifically, the risk assessment algorithm produces an estimate of (a) the likelihood that a defendant will commit a new offense while in the community and (b) the likelihood that the defendant will fail to reappear in court. The judge must then determine whether the estimate reflects a “high” level of risk, in which case the defendant will be held in custody, or “low” risk, in which case the defendant will be granted release without being required to post a monetary bond.” (p. 2)

“As is apparent, the use of risk assessment algorithms for pretrial decision making does not supplant judicial decision making; rather, risk assessment algorithms supply information to a judge who must then exercise judgment in determining whether or not a given risk level is sufficiently “high” or “low” to justify denying or granting pretrial release, respectively.” (p. 2)

“No research exists regarding the public’s perceptions of these initiatives and whether the public believes these new procedures will achieve their stated goals. The public is an important stakeholder group to consider for several reasons. First, state judges who determine whether or not to grant pretrial release are elected by the citizenry, presumably to implement the values and objectives of their respective constituency. […] Second, regardless of whether or not algorithmic risk assessment ameliorates inequities within the criminal justice system, a large body of research suggests that the process, rather than the outcome per se, matters a great deal to the public. […] Third, the public’s view of California’s bail reform initiative became even more important when the initial implementation of the bill was blocked by special interest groups.” (pp. 2–3)

“The present research surveyed a representative sample of Californians concerning SB 10’s pretrial risk assessment algorithm. In particular, the study examines their viewpoints toward the law itself, the factors they believe should be used in determining pretrial detention, and their beliefs concerning whether the algorithmic risk assessment will increase or decrease racial disparities in the criminal justice system. Additionally, the study elicited decision thresholds for “low” and “high” risk, which correspond to pretrial release and detention decisions respectively.” (p. 3)

“The final sample consisted of 420 respondents, of which 50.2% were female. The median age was 46 years old (interquartile range = 29). In terms of the racial composition of the sample, 37.4% of the sample identified as White, 39% as Hispanic, 6.4% Black or African American, 15.2% Asian, and the rest “other.” Nearly half the respondents affiliated with the Democratic political party (48%), while 21.4% described themselves as Republicans, 20.2% as Independents, and 10% selected “other.” With regard to highest level of academic achievement, 3.9% had not completed high school, 17.4% of the sample had obtained a high-school diploma, 43.8% obtained a 2-year college degree, 24.5% obtained a 4-year college degree, and 10.4% obtained a professional degree or doctorate. The majority of our sample (79%) had never been arrested; of those who had been arrested 47.7% had been arrested once, 27.3% had been arrested twice, and the remaining 25% either preferred not to say or had been arrested three or more times.” (p. 3)

“Several key findings are evident from this study, including: the participants were largely unaware of potential changes to their state’s bail policy; for those who were aware, support was mixed, and they were particularly concerned that the use of risk assessment tools may increase racial disparities in the criminal justice system; and the median decision threshold for “low risk” was 33% and 60% for “high risk,” though the thresholds were highly variable.” (p. 5)

“There are a number of possible reasons for the public’s concern with using risk assessment algorithms to make pretrial detention decisions. First, the public may hold a general “algorithm aversion,” in that they believe that human behavior, such as failure to reappear in court or committing a new offense, is too complex to be accurately predicted by any algorithm.” (p. 5)

“Another possible explanation of the public’s concern with using algorithmic risk assessment instruments to inform pretrial decision making could stem from their belief that the use of algorithms will increase racial and socioeconomic disparities. Overall, only 16% of Californians believed that the new algorithmic system would decrease these disparities, with the vast majority almost evenly split between the belief that such a structure would have no effect or worsen racial and socioeconomic disparities.” (p. 6)

“In sum, as criminal justice reform progresses, the question appears to be not whether we should utilize risk assessment algorithms, but rather how to most efficaciously use risk assessment algorithms in a manner that comports with democratic ideals. Incorporating the public’s values into the decision-making process is likely to promote the legitimacy of California’s new pretrial process and the use of risk assessment algorithms in the criminal justice system more generally.” (p. 7)

Translating Research into Practice

“We believe that if risk assessment algorithms are as likely as they appear to be in the future of pretrial detentions, the best approach is to create “buy in” from the public, and to incorporate the public’s views, values, and objectives into the decision-making process. These factors could be incorporated in a variety of different ways. First, public opinion is germane to which risk factors should and should not be included in the algorithm. Our data reveal that Californians are open to the inclusion of most risk factors, though they care less about the presence of the defendant’s and victim’s family members and whether jails are overcrowded. They also appear to believe the age of the defendant is less important than other risk factors.” (p. 7)

“Perhaps more importantly, their decision thresholds should serve as guidance to judges and policymakers about the acceptable levels of risk the public is or is not willing to tolerate. They should be aware that the public is willing to accept a one-third chance of failing to reappear or of committing a new offense while out on pretrial release, and at the same time the public is not willing to risk a two-thirds chance or higher. To be sure, the reported decision thresholds were highly variable, and we are not suggesting that these values should be implemented in a rigid or mechanical fashion. But the findings should provide some guidance as judges attempt to operationalize the statutory language of high or low risk.” (p. 7)
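As a concrete illustration of how those survey-derived thresholds might inform practice, the sketch below bands an algorithm’s likelihood estimate using the reported medians: 33% for “low risk” (release) and 60% for “high risk” (detention). The three-band scheme, the labels, and the treatment of the middle range as a matter of judicial discretion are assumptions for illustration, not language from SB 10 or the article, and, as the authors caution, the thresholds themselves were highly variable across respondents.

```python
# Applying the survey's median decision thresholds to an estimated
# likelihood of failing to appear or committing a new offense.
# Thresholds are the study's reported medians; the banding scheme
# and the middle "discretion" band are illustrative assumptions.

LOW_RISK_THRESHOLD = 0.33   # median public threshold for release
HIGH_RISK_THRESHOLD = 0.60  # median public threshold for detention

def band_risk(likelihood: float) -> str:
    """Classify an estimated likelihood into a pretrial recommendation band."""
    if likelihood <= LOW_RISK_THRESHOLD:
        return "low risk: release without monetary bond"
    if likelihood >= HIGH_RISK_THRESHOLD:
        return "high risk: deny pretrial release"
    # Estimates between the two thresholds fall to judicial discretion,
    # possibly with supervision conditions (an assumption of this sketch).
    return "intermediate: judicial discretion / supervised release"

for p in (0.20, 0.45, 0.70):
    print(f"{p:.0%} -> {band_risk(p)}")
```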

“Some research finds that individuals are more accepting of algorithms when they can modify the outcome of the prediction. The potential problem with this approach is the tendency of decision makers to adjust the outcome in an opportunistic manner that leads to the desired outcome, resulting in a decrease of predictive accuracy. The approach we suggest is to incorporate the public’s values and objectives in the process, not the outcome directly. This would provide a limited bulwark against result-driven overrides to the algorithmic risk assessment but would also take into account the public’s values and objectives.” (p. 7)

Other Interesting Tidbits for Researchers and Clinicians

“As with all surveys, our results are necessarily limited by the participants sampled (Californians), the time period in which data was collected, and the specific way the questions were worded and/or interpreted by the participants. Of note, we phrased questions in general, as they might appear on a ballot, used language directly from the Bill, and did not provide anecdotes about the possible impact(s) of adopting algorithmic bail.” (p. 6)

“Our data collection occurred at a strange moment in the adoption of the policy. At the time participants were sampled, the algorithmic bail initiative was set to become law in less than a year. After data collection occurred, the law was blocked by special interest groups and placed on referendum for the 2020 election […] It is possible that these subsequent changes could have affected both Californians’ knowledge and viewpoint of the proposed law.” (p. 7)

“With regard to our finding that most of our sample believed that the adoption of algorithmic tools would increase rather than decrease racial disparities in the criminal justice system, we were unable to determine which of the several possible rationales was the predominant reason for this belief. This necessarily limits the manner in which policymakers should consider educating the public and implementing these new laws. Future research could more deeply probe individuals’ distrust of these laws and algorithmic decision making more generally.” (p. 7)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!