Risk assessment tools: Harmful or Helpful?

Use of a risk assessment tool for pre- or post-trial decisions may help reduce rates of incarceration while still protecting public safety. However, there is a strong need for more rigorous research before clear conclusions can be drawn. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2019, Vol. 43, No. 5, 397-420

Impact of Risk Assessment Instruments on Rates of Pretrial Detention, Postconviction Placements, and Release: A Systematic Review and Meta-Analysis

Authors

Jodi L. Viljoen, Simon Fraser University
Melissa R. Jonnson, Simon Fraser University
Dana M. Cochrane, Simon Fraser University
Lee M. Vargen, Simon Fraser University
Gina M. Vincent, University of Massachusetts Medical School

Abstract

Objectives: Many agencies use risk assessment instruments to guide decisions about pretrial detention, postconviction incarceration, and release from custody. Although some policymakers believe that these tools might reduce overincarceration and recidivism rates, others are concerned that they may exacerbate racial and ethnic disparities in placements. The objective of this systematic review was to test these assertions. Hypotheses: It was hypothesized that the adoption of tools might slightly decrease incarceration rates, and that impact on disparities might vary by tool and context. Method: Published and unpublished studies were identified by searching 13 databases, reviewing reference lists, and contacting experts. In total, 22 studies met inclusion criteria; these studies included 1,444,499 adolescents and adults who were accused or convicted of a crime. Each study was coded by 2 independent raters using a data extraction form and a risk of bias tool. Results were aggregated using both a narrative approach and meta-analyses. Results: The adoption of tools was associated with (a) small overall decreases in restrictive placements (aggregated odds ratio [OR] = 0.63, p < .001), particularly for individuals who were low risk and (b) small reductions in any recidivism (OR = 0.85, p < .020). However, after removing studies with a high risk of bias, the results were no longer significant. Conclusions: Although risk assessment tools might help to reduce restrictive placements, the strength of this evidence is low. Furthermore, because of a lack of research, it is unclear how tools impact racial and ethnic disparities in placements. As such, future research is needed.
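
For readers less familiar with the aggregated odds ratios reported in the abstract (e.g., OR = 0.63 for restrictive placements), the short sketch below illustrates one common way study-level odds ratios can be pooled: inverse-variance weighting on the log-odds scale. The study counts are entirely hypothetical, and the published meta-analysis used its own data and procedures, so treat this only as an illustration of the metric. An OR below 1 indicates lower odds of a restrictive placement when a tool is in use.

```python
# Illustrative sketch only: pooling study-level odds ratios with
# inverse-variance weighting on the log scale. The counts below are
# hypothetical and do not come from the reviewed article.
import math

# (placements_with_tool, n_with_tool, placements_no_tool, n_no_tool) per study
studies = [
    (120, 400, 180, 400),
    (300, 1000, 350, 1000),
    (45, 150, 60, 150),
]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                  # non-placements in each group
    log_or = math.log((a * d) / (b * c))   # log odds ratio for this study
    var = 1 / a + 1 / b + 1 / c + 1 / d    # variance of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(pooled_log_or - 1.96 * se):.2f}-"
      f"{math.exp(pooled_log_or + 1.96 * se):.2f})")
```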

Keywords

risk assessment, violence, reoffending, incarceration, racial and ethnic disparities

Summary of the Research

“Risk of recidivism tools are widely used in criminal and juvenile justice settings. In some cases, these tools are used primarily to guide case management and treatment-planning. However, in other cases, tools are used to inform high stakes decisions about custodial placements. This includes front-end decisions about who to detain prior to trial, as well as later decisions about postconviction incarceration and release from prison. For instance, 88% of American pretrial agencies use risk tools to guide pretrial detention decisions, 20 states use them to guide sentencing decisions, and up to 28 states use them to guide parole release decisions. In juvenile probation settings, close to 40 states have adopted risk tools on a state-wide basis for dispositional planning. Furthermore, many organizations, policymakers, and scholars explicitly encourage the use of risk tools in placement decisions” (p. 397-398).

“[S]ome authors argue that risk tools could help reduce mass incarceration without jeopardizing public safety, whereas others argue that these tools may exacerbate racial disparities in sentencing. However, it is currently unclear which perspectives are accurate. Although a recent systematic review examined how risk tools impact treatment-planning and risk management, that review did not examine how the adoption of tools affects overall rates of placements” (p. 399).

“To ensure that we reported our systematic review in a thorough, rigorous, and transparent manner, we followed criteria set forth in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, the AMSTAR 2 tool, and the Risk of Bias in Systematic Reviews tool. Our review question, search strategy, inclusion/exclusion criteria, data extraction materials (e.g., risk of bias assessment), and data analytic plan were established a priori” (p. 399).

“To help inform debates about the impact of risk tools on restrictive placements, we conducted a systematic review and meta-analysis. Given that much of the research in this area was in the form of unpublished reports, we systematically searched 13 databases of published and unpublished sources, hand-searched reference lists, and contacted experts. Although our review captured 22 studies with 1,444,499 defendants and offenders from 30 independent sites, many of the studies failed to match tool and no-tool groups on key characteristics (e.g., offense history) or control for historical trends, such as decreases in incarceration rates over time. In addition, in some studies, other initiatives were implemented at the same time as tools (e.g., alternatives to detention programs), making it difficult to determine if the results were due to the tool or these other initiatives. Furthermore, 40.9% of included studies did not contain the necessary statistical information to include in a meta-analysis (despite efforts to obtain such information from study authors)” (p. 408).

“As such, to provide a more comprehensive synthesis of findings, we conducted both a meta-analysis of the subset of studies that could be empirically synthesized, as well as a narrative review of the full set of studies. We also tested whether results remained the same after removing studies with a serious risk of bias. Overall, the meta-analysis provided a similar pattern of results as the narrative review, providing some confirmation of the findings. However, because results were attenuated after controlling for study limitations, only modest and tentative conclusions can be drawn. Also, given that most of the included studies were conducted in the United States, it is unclear whether the findings generalize to other countries. With these caveats in mind, key findings are discussed” (p. 408).

“Although some researchers and policymakers have hypothesized that the adoption of tools might reduce rates of incarceration, we found tenuous results. When we examined the full set of studies (regardless of their quality), the adoption of risk tools appeared to be associated with small but significant reductions in restrictive placements. Specifically, when tools were used, fewer defendants were placed in detention prior to trial, and more inmates were released from custodial centers. However, results varied between studies, and we did not find significant reductions in postconviction placements. Moreover, when we removed studies with a serious risk of bias, the findings were no longer significant. As such, the overall strength of evidence that tools reduce placements is low” (p. 410).

“The results of our systematic review confirmed that recidivism rates did not increase following the adoption of a risk assessment tool even when incarceration rates decreased. Prior research has found that incarceration is not an effective method to reduce recidivism. Our findings similarly illustrate that it is possible to reduce incarceration rates without increasing recidivism. However, although recidivism did not increase, we did not find clear and consistent evidence that the use of tools led to a significant decrease in recidivism. In most studies, rates of any recidivism, violent recidivism, and violations did not significantly change following the adoption of risk tools. In addition, in the meta-analysis, reductions in recidivism were not significant after removing studies with a serious risk of bias. As such, the strength of evidence that tools reduce recidivism is low. A prior systematic review also reported modest and mixed findings on whether the adoption of tools decreases recidivism rates” (p. 410).

“Even if the use of tools in sentencing has certain benefits, one of the major concerns is that they might exacerbate racial and ethnic disparities in placements. Unfortunately, our review found that research is insufficient to offer conclusions. Only six of the 22 studies included in this review reported results on how the adoption of tools impacted disparities, and all but one of these studies had a serious risk of bias. Furthermore, these studies found variable results. In two studies, placements decreased more for White defendants than defendants of color, thereby increasing disparity. Conversely, in two studies, the opposite effect occurred wherein placements decreased more for African Americans than for Whites, thereby decreasing disparity. Thus, these findings could suggest that the impact of tools on disparity may depend on the tool and context” (p. 410).


Translating Research into Practice

“Although we found that tools might help reduce restrictive placements in some cases, our results highlight that agencies should not develop unrealistic expectations that tools are a panacea. In and of themselves, tools likely have only a modest impact on placement rates and recidivism. To have a strong and sustainable impact, tools need to be implemented well with adequate staff and stakeholder buy-in, appropriate policies, and routine quality assurance practices. For instance, agencies should provide judges, probation officers, and other users with training on the RNR model and on how to use risk assessments in placement decisions” (p. 411).

“Prior to adopting a tool, agencies should pilot test the tool, and then continue to periodically reevaluate its use. This reevaluation is important because agencies can experience a combination of both ‘moving forward and slipping back.’ According to some authors, without ongoing reevaluation, risk tools might potentially even ‘become a straitjacket that binds the juvenile justice system to inappropriate use of detention.’ As we found through this review, some agencies are already making efforts to evaluate the impact of tools on placement decisions, which is commendable. However, much of this work consisted of brief unpublished reports that did not control for possible confounds. As such, agencies should work toward increasing the rigor of their research such as by pairing with academic researchers. Agencies should also take steps to disseminate their findings, including both positive and negative results. This willingness to identify and learn from challenges captures the spirit of evidence-based practice; evidence-based practice is not a one-shot implementation of a tool but instead, a commitment to ongoing review and refinement” (p. 411).

“In sum, our review indicates that although risk assessment tools are not a remedy to overincarceration, they might potentially help to reduce restrictive placements without increasing recidivism. In this respect, tools may help balance public safety and offenders’ liberty while presumably decreasing costs to the system. However, research is scarce, and many studies are poor in quality. Furthermore, it remains to be seen whether any potential benefits of tools come at a cost to social justice, and if so, under what circumstances. As such, researchers and policymakers need to invest greater efforts into rigorously investigating these important questions” (p. 411).

Other Interesting Tidbits for Researchers and Clinicians

“One of the primary conclusions of this systematic review is that we need better research to determine how tools impact placement and recidivism rates, particularly studies that use rigorous designs such as randomized trials, staggered designs, and propensity score matched studies. However, this type of research is challenging to conduct. Many agencies have already implemented risk tools, making it difficult to find appropriate comparison groups. As such, in addition to conducting field studies, researchers could use carefully controlled experimental designs, such as case vignette studies, to examine how tools influence judges’ placement decisions when other factors are held constant. In addition, when agencies adopt tools for the first time or switch from one tool to another, researchers can take advantage of these naturally occurring experiments to test how these changes alter placement rates or recidivism” (p. 410).
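To make the design suggestions above a bit more concrete, here is a minimal sketch of propensity score matching, one of the rigorous designs the authors recommend. All data and variable names are hypothetical placeholders rather than anything from the reviewed studies; the point is simply that each case assessed with a risk tool is matched to a no-tool case with a similar estimated probability of receiving the tool, so that placement rates can be compared across comparable groups.

```python
# Minimal sketch of propensity-score matching with synthetic data.
# All variables are hypothetical placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))           # covariates, e.g., age, priors, charge severity
treated = rng.integers(0, 2, size=n)  # 1 = risk tool used, 0 = no tool
placed = rng.integers(0, 2, size=n)   # 1 = restrictive placement

# 1. Estimate propensity scores: P(tool used | covariates)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Nearest-neighbor matching on the propensity score (with replacement)
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = control_idx[np.argmin(np.abs(ps[control_idx][None, :]
                                       - ps[treated_idx][:, None]), axis=1)]

# 3. Compare placement rates in the matched sample
effect = placed[treated_idx].mean() - placed[matches].mean()
print(f"Matched difference in placement rate: {effect:+.3f}")
```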

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!