
Fairness, objectivity, and probability: Perceptions of defendant’s future risk of violence by jurors

When testifying about violence risk assessment, experts are advised to present information in multiple data formats, including a comprehensive risk management plan in conjunction with objective estimates of future behavior, to avoid jurors overestimating risk. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2019, Vol. 25, No. 2, 92–106

Communicating violence risk during testimony: Do different formats lead to different perceptions among jurors?

Authors

Ashley B. Batastini, University of Southern Mississippi
Michael J. Vitacco, Augusta University
Lauren C. Coaker, University of Southern Mississippi
Michael E. Lester, University of Southern Mississippi

Abstract

The legal system often charges forensic clinicians with the task of assisting the court in making decisions about a defendant’s risk for violence. The extent to which these evaluations are useful depends, in part, on how the results are communicated to and understood by the trier of fact. Using a sample of 155 participants who previously served as a criminal trial juror, this study examined the effects of various risk communication formats on participants’ perceptions of a hypothetical defendant’s risk level, including his likelihood of reoffense and risk category, as well as sentencing and release decisions. Results consistently showed that when risk data was not anchored by an absolute recidivism estimate, predictions of future violence were highest. That is, when risk was stated only as an ordinal category (medium risk) or only in terms of needed interventions, participants severely overestimated the defendant’s likelihood of violence. In the context of numerical data, the use of elaborative strategies (i.e., base rate examples and visual aids) did not impact risk perceptions. However, no between-groups differences were found across participants’ decisions regarding sentencing and community release. Overall, participants tended to adhere to the expert’s opinion when judging risk. Implications for best practices and future research are discussed, including the need for experts to simplify information during testimony and for risk researchers to work toward a better understanding of which strategies impact layperson perceptions of risk testimony and under what conditions.

Keywords

violence risk assessment, risk communication, actuarial, structured professional judgment, expert testimony

Summary of the Research

“Violence risk assessments are often relied on in contentious trials dealing with issues of life and liberty. […] The use of violence risk assessments in court has almost become routine; however, their usefulness depends in large part on how well the results are communicated by the expert and understood by the trier of fact (e.g., trial judges, jurors, and parole boards). Although violence risk assessments have become commonplace, the road to their acceptability was not linear or easy, and best practices for communicating their results to laypersons have not been clearly established within the field.” (p. 92)

“Barefoot v. Estelle (1983) is a United States Supreme Court case at the center of violence risk assessment, as it allowed for expert testimony on violence risk, not because mental health professionals are particularly good at making such decisions, but simply because if jurors with no specialized training are required to make decisions regarding dangerousness, so too can mental health professionals. […] Conversely, the argument against the use of opinions based on clinical judgment alone in the context of violence risk assessment was grounded in inaccurate predictions that led nonviolent individuals to be classified as likely violent. […] The field of psychology has made concerted efforts to move away from relying on clinical judgment exclusively when making predictions about future violence.” (p. 93)

“In the last 25 years, substantial progress has been made in the development of risk assessment instruments designed to predict violence risk. Specifically, assessments based on actuarial and structured professional judgment (SPJ) approaches predominate the prediction of violence risk in both clinical and forensic settings.” (p. 93)

“The most widely accepted methods of communicating risk findings include: (a) numerical estimates, (b) rank-ordered categories or “bins,” and/or (c) risk needs/management. Actuarial (or statistical) prediction methods mechanically quantify empirically determined factors, often through meta-analysis, that are highly correlated with some criterion behavior like violence recidivism. Factors predictive of violence can be static (e.g., victim type, perpetrator’s gender and age, and relation to victim) or dynamic (e.g., criminal associates, unproductive leisure time, and poor problem-solving skills) in nature. An examinee’s numerical score on these assessments is then compared with offenders with similar scores in the normed sample. Actuarial scales yield numerical probability estimates that may or may not be accompanied by a categorical interpretation, whereby an offender is described as falling into one of several risk bins (e.g., low, medium, and high).” (p. 93)

“Actuarial measures provide several ways of communicating numerical data derived from the calculation of risk factors, including comparative recidivism estimates, percentile ranks, or risk ratios. Categorical estimates are also reported with SPJ tools like the HCR-20, which are based on the presence (or absence) of empirically relevant risk factors, but do not yield a numerical estimate. Recommendations for risk management—interventions that target dynamic risk factors and are intended to reduce future risk—are also commonly included when reporting both actuarial and SPJ tools, though SPJ tools typically offer a more structured approach to identifying management strategies than actuarial tools.” (p. 93)
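To make these communication formats concrete, the sketch below translates a single score into each format discussed above: a categorical bin, a percentile rank against a normed sample, an absolute recidivism estimate, and a risk ratio against a population base rate. All scores, cutoffs, rates, and the normed-sample distribution are invented for illustration; they are not drawn from the VRAG or any real instrument.

```python
from bisect import bisect_right

# Hypothetical normed-sample scores (invented for illustration only).
NORM_SCORES = sorted([-5, -2, 0, 3, 4, 6, 8, 11, 13, 15, 18, 21])
# Hypothetical bin cutoffs: score <= 0 is "low", <= 10 is "medium", else "high".
BIN_CUTOFFS = [(0, "low"), (10, "medium"), (float("inf"), "high")]
# Hypothetical absolute 10-year recidivism estimates per bin.
ABSOLUTE_ESTIMATE = {"low": 0.10, "medium": 0.35, "high": 0.55}
# Hypothetical population base rate of violent reoffending.
POPULATION_BASE_RATE = 0.20

def communicate_risk(score: float) -> dict:
    """Express one actuarial score in the common communication formats."""
    # Categorical "bin": compare the score against ordered cutoffs.
    category = next(label for cutoff, label in BIN_CUTOFFS if score <= cutoff)
    # Percentile rank: share of the normed sample scoring at or below this score.
    percentile = 100 * bisect_right(NORM_SCORES, score) / len(NORM_SCORES)
    # Absolute recidivism estimate and risk ratio relative to the base rate.
    absolute = ABSOLUTE_ESTIMATE[category]
    risk_ratio = absolute / POPULATION_BASE_RATE
    return {"category": category, "percentile": percentile,
            "absolute": absolute, "risk_ratio": risk_ratio}

result = communicate_risk(6)
# A hypothetical score of 6 falls in the "medium" bin, at the 50th percentile,
# with an absolute estimate of 0.35 — 1.75 times the assumed base rate.
```

The same underlying score thus supports several verbal framings ("medium risk," "scored higher than half of the normed sample," "a 35% estimated likelihood," "about 1.75 times the general rate"), which is precisely why the study's question of format effects matters.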

“Some evidence suggests that the format in which data is presented makes a difference in cases involving the prediction of general and sexual violence. […] As risk assessment tools become more advanced and complex, forensic clinicians may experience greater difficulty conveying their message in a manner that is easily understood by laypersons. Yet, the ability for jurors and judges to fully comprehend, appreciate, and incorporate expert testimony, including the science behind risk assessment tools, into their decisions is crucial for producing an outcome that is fair and impartial.” (p. 93)

“Because of their complexity, laypersons appear to have difficulty interpreting numerical prediction formats such as likelihood ratios, frequencies, or percentages (absolute recidivism rates) that often accompany violence risk testimony when an expert relied on actuarial scales for the basis of an expert’s opinion. […] Because of the challenges associated with probabilistic methods (perhaps especially absolute recidivism estimates), use of categorical bins seems to be the most preferred among both forensic clinicians and triers of fact. A primary reason cited by evaluators for their tendency to avoid probabilistic estimates was a belief that research did not support the predictive validity of specific numbers, and that describing predictions of violence with such apparent precision may mislead the triers of fact.” (p. 94)

“Still, some have argued that the use of categorical rankings alone generates more arbitrary and, thus, potentially inaccurate, conclusions about risk if not anchored numerically. […] While some mixed evidence exists regarding the usefulness of categorical communication formats, research generally supports the accuracy of numerical data in predicting future risk with a reasonable degree of scientific certainty. But, the reluctance of clinicians to use it, and the difficulty of properly explaining the underlying science of actuarial prediction models to laypersons poses a dilemma: how can violence risk evaluators communicate accurate predictions of risk in a way that is most optimally understood by the people who need to hear it?” (p. 94)

“In an effort to better understand the nuances of risk communication and the strategies that may help, the present study has two primary aims: (a) to investigate potential differences in legal decision-making across common modes of risk communication (including combined methods), and (b) further explore the use of additive strategies such as base rate examples and visual aids to enhance understanding of numerical data. In this study, we focus on numerical risk presented as an absolute probability of reoffending. This study is not about the accuracy of risk assessment tools; it is about the consequences of different expert communication methods on the perceptions of a largely layperson audience.” (p. 95)

“Participants in this study were at least 18 years old, spoke fluent English, self-reported previous experience serving as a juror on at least one criminal court case, and responded correctly to several validity questions assessing attentiveness. […] The final sample included a total of 64 male (41%) and 91 female (58.3%) respondents (N = 155). Participants varied in age from 23 to 79 (M = 51.0, SD = 16.2) with an average of 14.7 (SD = 2.4) years of education. Over 85% identified as White (n = 134; 86.5%); African American (n = 7; 4.5%), Asian American (n = 6; 3.8%), Pacific Islander (n = 1; 0.6%), and Other (n = 7; 4.5%) populations were also represented. Unfortunately, the underrepresentation of minorities in this sample is similar to that of actual juries.” (p. 95)

“[Participants] were randomly assigned to one of six study conditions varying according to the type of risk information that was provided: 1. Risk management (Management); 2. Numerical without base rate elaboration (Numerical); 3. Numerical with base rate elaboration (NumericalBR); 4. Categorical risk (Categorical); 5. Hybrid without base rate elaboration (Categorical + Numerical + Management; Hybrid); 6. Hybrid with base rate elaboration (Categorical + Numerical + Management + Base Rate; HybridBR). Regardless of assigned condition, participants were then instructed to listen to a recorded excerpt of a violence risk evaluation written by a licensed psychologist with more than 15 years of experience in the field of forensic mental health. A typed version of the excerpt was also included. […] Following the defendant’s background information, participants were instructed to listen to the expert’s testimony and were again provided a transcript to follow along. Testimony was based on results from the Violence Risk Appraisal Guide. […] Following the introduction to the VRAG, participants heard [defendant’s] risk assessment results respective to the condition they were assigned.” (p. 96)

“In considering decision-making, several findings from this study are especially pertinent. A consistent pattern emerged showing that risk management (or action-oriented) information and a categorical representation of risk presented in isolation (without absolute recidivism risk) was related to higher ratings of risk. This finding held when respondents were asked to report their estimates for violence within 7- and 10-years and to rank risk as a category. Furthermore, management and categorical formats led to higher ratings of risk even when crime types were considered separately across general and sexually violent behavior.” (p. 101)

“Of interest to the authors, however, risk communication format did not appear to impact participants’ ratings of risk when they were asked to predict the defendant’s likelihood of violence outside the expert’s specified time constraints. From this result, it could be inferred that, even in the context of numerical presentations, jurors will significantly overestimate a defendant’s risk if the task is to predict violence over an indefinite period of time—or perhaps any period of time that does not correspond to the timeframes used by the expert in testimony. In other words, without expert guidance, jurors seemed lost on how to determine risk and erred on the side of caution.” (p. 102)

“Results of the present study failed to support any real benefit of using a bar chart and a medical-based example in the context of absolute recidivism risk. To be clear—all three conditions included a very basic discussion about the risk of violent reoffending for the general population, so we nonetheless maintain that at least a mention of the base rate for violent offending is required. Yet, that seems simple enough, as jurors generated congruent ratings of risk regardless of the added strategies.” (p. 102)

“When considering some additional descriptive data and secondary analyses, the results from the current study indicate a general concordance between expert testimony and juror perceptions of risk. In other words, jurors tended to act in accordance with the information provided by the expert. […] Attitudes toward the offender, the seriousness of the offense, or inattention could impact the extent to which jurors adhere to expert testimony. Despite this, clear evidence of base rate neglect was not found. In addition to largely sticking with the expert’s predictions about the likelihood of future violence, participants also generally adhered to the instructed VRAG parameters.” (p. 102)

“While others have begun exploring best practices in violence risk communication, none to our knowledge have compared the most common risk communication formats while also testing several elaborative strategies to improve juror understanding of risk. […] our results suggest that (a) management and categorical formats, absent of any actuarial data, tend to produce higher estimates of risk, (b) using visual aids to contextualize base rates appears unnecessary for understanding actuarial data, and (c) in an overall general sense, laypersons follow expert testimony regarding estimates of risk but other factors (e.g., crime severity) may have a greater influence on their legal decisions.” (p. 104)

“The one message that remains clear is the need to continue investigating various risk communication techniques across multiple settings with multiple types of defendants, crimes, and triers of fact. Given the myriad of factors that could complicate the decision-making process, it may be impractical to strive for a set of guidelines that guarantee forensic clinicians’ messages will be adequately heard by laypersons; however, it seems well within reason to work as a field toward a clearer understanding of what helps and under what circumstances.” (p. 104)

Translating Research into Practice

“Findings from the present study suggest that risk management data, when presented in isolation, leads to more severe overestimations of risk. Yet, the presentation of risk management recommendations is frequently of value to the court who are often concerned that appropriate safeguards are in place to prevent violence and reduce the likelihood of recidivism.” (p. 103)

“More important, results of this study showed that risk management data was not associated with overestimates of risk when combined with numerical and categorical data. […] Risk estimates presented in conjunction with management recommendations based on R-N-R [Risk-Need-Responsivity] would be important to share with triers of fact before making determinations regarding initial placement or release. To ensure triers of fact receive the most useful and accurate information, clinicians are advised to testify about risk using multiple data formats; that is, including a comprehensive risk management plan in conjunction with more objective estimates of future behavior.” (p. 103)

“A more challenging aspect of expert testimony stems from results suggesting that laypersons generally adhere to the testimony provided by the expert but may still make decisions based on other information or biases that override the actuarial information provided by the expert. […] We suggest that experts explicitly provide information to the court or retaining party (to allow for such testimony to take place) regarding how the index offense contributed to their overall judgment of violence risk. The goal of risk assessment testimony should be to provide clear and accurate information to fully inform the decision-making process.” (p. 103)

“Finally, findings from this study are also relevant for consultants and attorneys when preparing cases. […] Attorneys and consultants who prepare clinicians for court can ensure clinicians gather and evaluate information that will enable the trier of fact to accurately understand the individual’s risk and any strategies available to reduce this risk. A concern borne from these data is that risk results could intentionally be used to misrepresent risk and bias the trier of fact. […] Even for those whose job it is to represent one side of the courtroom or the other, we believe there is a moral obligation to present risk data in a way that is both understandable and comprehensive enough to assist the court in rendering a just decision.” (p. 103)

Other Interesting Tidbits for Researchers and Clinicians

“Despite the strengths of this study, there are several limitations that warrant consideration. First, the sample was predominantly European American with an African American defendant. While secondary analyses suggested that participants’ race did not significantly impact the observed outcomes, this analysis was weakened by the creation of a dichotomous variable. […] Second, the high-stakes nature of the crime may have led to some of the null effects observed regarding sentencing and comfort with release. It is important to consider how jurors may perceive violence risk in other contexts and various criminal behaviors. […] Third, the risk assessment information presented in the current study may not generalize to actual courtroom testimony. Several procedural factors (e.g., jury deliberation, opposing expert findings) were not incorporated using this experimental design. Likewise, jurors were only exposed to an audio recording of the expert’s testimony.” (pp. 103–104)

“Fourth, future studies should consider altering how risk data is presented and how population base rates are phrased, as well as the types of graphical or illustrative examples used to aid understanding. […] Finally, while using a sample of former jurors from criminal trials was purposeful to better understand layperson perceptions in a courtroom setting, judges are perhaps more often the receivers of risk data in making decisions about sentencing or commitment. […] It is […] important to continue exploring differences among judges and other decision-makers (e.g., parole/probation boards, hospital administrators) who are routinely presented with the results of violence risk assessments.” (p. 104)
