Are we restoring competency, competently?

We have no collective empirical understanding of the utility of competency or traditional assessment instruments in the restoration context. This is the bottom line of a recently published article in the Journal of Forensic Psychology Research and Practice. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Journal of Forensic Psychology Research and Practice | 2020, Vol. 20, No. 2, 134-162

An Attempted Meta-Analysis of the Competency Restoration Research: Important Findings for Future Directions


Gianni Pirelli, Ph.D., ABPP, Pirelli Clinical and Forensic Psychology, LLC
Patricia A. Zapf, Ph.D., Continuing and Professional Studies, Palo Alto University


The competence to stand trial literature is vast, whereas the literature on competency restoration pales in comparison. Although such research has accumulated since the 1970s, no quantitative synthesis of it has been conducted. Therefore, we considered over 1,000 publications and attempted to conduct the first meta-analysis of restoration data – using 51 independent competency restoration samples published over a 40-year period (1975–2013) with 12,781 defendants – ultimately concluding that the restoration literature does not currently lend itself to meta-analysis. Still, several important findings arose: (a) the base rate for competency restoration was 81% and the median length of stay (LOS) was 147 days overall and 175 days in single restoration group studies; (b) reported competency restoration procedures were overwhelmingly nonspecific across studies and not reported in more than half of them; (c) most studies used correlational designs, with only five studies comparing restored with unrestored defendants, and no useable existing pre- versus post-treatment studies; and (d) competence assessment instruments were used in fewer than one-third of studies, traditional psychological measures were used in fewer than one-quarter, with data presented in such a way as to eliminate the potential for quantitative aggregation and analyses. Overall, our findings are concerning because, despite 40 years of research, the available data essentially highlight the characteristics of those engaging in restoration and do not address variables associated with final restoration status. In addition, we continue to have no collective empirical understanding of the utility of competency or traditional assessment instruments in the restoration context. Finally, virtually no published data reflect specific intervention efforts that lead to competence restoration.


Keywords: Competency restoration; adjudicative competency; competency to stand trial; trial competence; forensic mental health assessment; meta-analysis

Summary of the Research

“We attempted to conduct a meta-analysis to provide psycholegal researchers and practitioners with a summary of the cumulative knowledge gained from 40 years of research in the competency restoration arena. Like other meta-analysts, our goal was to advance the state of knowledge in the field by testing hypotheses not previously tested in primary studies and those that cannot be tested by primary studies alone. Repetitive and ultimately uninformative studies may be conducted if a research literature is not meta-analyzed, as meta-analyses often serve as a new starting ground for research, practice, and policy in a given area of study. As such, findings from a meta-analysis in this area could have served to bring some types of competency restoration studies to an end, while prompting a series of new ones. However … this empirical literature does not lend itself to meta-analysis at this time and, therefore, there must be a collective adjustment to the research conducted in this area moving forward” (p. 138-139).

“The main objective of this study was to quantitatively synthesize the competence restoration research using contemporary meta-analytic methods and statistical procedures with the goal of addressing specific research questions, including which variables are related to a defendant’s competence restoration status and the utility of various traditional and competence assessment measures in evaluating restoration treatment programs. However, we were unable to actually conduct a meta-analysis because of the significant limitations of this research base; in fact, we were largely unable to analyze the data in any statistically meaningful way at all. Still, there are some noteworthy findings as well as implications for future research in this arena” (p. 151).

“First, the base rate for the restoration of competence was 81%, such that this was the percentage of defendants who had engaged in restoration procedures and were subsequently deemed competent to proceed” (p. 151).

“Second, the average LOS ranged from 42.7 to 1,108 days, with a median LOS of 146.9 days and an average LOS of 175 days once outlier data were removed. These LOS estimates and competency restoration rates are in line with those that have been presented in various summaries of the research literature and represent important and relevant data for statutorily required predictions of competency restoration. For instance, evaluators should be able to make clear in their evaluation reports that approximately 8 out of 10 of all incompetent defendants will be restored to competency and returned to the court within six months” (p. 151-152).
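A base rate like the 81% reported above is an aggregate across study samples. As a minimal sketch of how such a pooled rate would be computed, using invented study counts for illustration only (these are not data from the article), the sample-size-weighted calculation looks like:

```python
# Hypothetical study-level counts; the figures below are invented
# for illustration and are not taken from the article.
studies = [
    {"n": 200, "restored": 168},
    {"n": 350, "restored": 277},
    {"n": 120, "restored": 99},
]

total_n = sum(s["n"] for s in studies)
total_restored = sum(s["restored"] for s in studies)

# Sample-size-weighted pooled restoration rate across studies
pooled_rate = total_restored / total_n
print(f"Pooled restoration rate: {pooled_rate:.1%}")  # → 81.2%
```

A formal meta-analysis would go further (e.g., weighting by inverse variance and modeling between-study heterogeneity), but even this simple pooling is impossible when, as the authors found, studies do not report their counts in a usable form.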

“Third, we cannot speak in the aggregate (at least, quantitatively), about the demographic, psycholegal, criminological, and clinical variables that have been examined in relation to competence restoration since the 1970s. We attempted to investigate the most common of these variables – marital status, employment status, psychiatric diagnosis, psychiatric hospitalization history, competency evaluation history, and current criminal charge – but again, the available research precludes a formal quantitative synthesis in this regard. While we can eyeball the data or otherwise speak descriptively about it and suggest that it seems to parallel many of the findings from Pirelli et al.’s (2011) meta-analysis in some respects, we can only do so very cautiously given our inability to conduct formal statistical analyses” (p. 152-153).

“Fourth, the same limitations apply to our ability to make conclusions about the utility of competence and traditional psychological assessment instruments. We were unable to formally investigate the relationship between these measures and competency restoration. The state of the extant restoration literature is such that we are unable to answer most of the important research questions. Thus, we are now at a point where we need to be more planful about the design of our research studies, so we can begin to address important areas of consideration that will serve to move our knowledge forward. At present, the only statement that we appear to be able to make is that we have no useful data, in aggregate, about the utility of competence and traditional psychological assessment measures in the competency restoration context” (p. 152).

“The overall findings from this attempted meta-analysis serve to underscore that the most important group comparisons – between defendants who have had their competency restored and those who have not – have essentially not been investigated in any meaningful way over the past half-century. Most of the empirical research in the competency restoration arena has either included single restoration groups or comparisons of restoration and competent samples. However, single group studies merely investigate variables associated with those undergoing competency restoration treatment and comparative studies usually include “pure” competent samples (i.e., those whose competency has never been in question). Such is consistent with what Heilbrun et al. (2019) concluded from their recent qualitative review of the restoration literature; namely, controlled studies are needed in this area. As they articulated it: ‘Until we have the benefit of considering data from such studies, we must rely on the descriptive and single-group studies that comprise most of the empirical literature in this area’ (p. 10)” (p. 153).

“With respect to the studies that have included ‘restoration’ samples, the issue here is that study participants may be involved in restoration procedures, but if the data is collected from these participants upon program entry or even during the course of their restoration programs, it cannot then be used to generalize the findings to restored persons. Put differently, the groups in these types of designs can only be considered “incompetent” rather than restored or not restored because of the timing of the data collection. The term “restored” should be used to characterize those who have been legally found (by a judge) to be restored to competency. Although the research studies we included in this attempted meta-analysis have provided some preliminary information, they are not able to address the most important inquiries in the restoration context, as these require comparisons between Restored and Not-Restored groups and the investigation of these groups using pre/post (time 1/time 2) designs. Few studies have actually employed such groups or designs, and those that have present limited data. Thus, we do not know appreciably more than what we have already learned from the research comparing competent and incompetent defendants despite 40 years of research and commentary pertaining to competency restoration. This is largely because the studies conducted in the restoration arena were generally run in much of the same manner as those in the literature comparing incompetent and competent samples” (p. 153-154).

“As for competency assessment and traditional psychological assessment instruments, there is no way to know how many times they have actually been incorporated into research studies and not reported, or how much associated unpublished data exists. Nevertheless, empirical knowledge is based on the available research literature; thus, we conclude that there is insufficient empirical evidence to speak to the utility of traditional measures or competency assessment instruments in the context of competency restoration at this time. Competency assessment instruments, as a class of forensic assessment instruments, have been developed to address specific psycholegal questions related to the construct of adjudicative competency; therefore, they are conceptually appropriate for use in competency evaluations. Similarly, psychological assessment measures are intended to facilitate clinical decision making and, as such, they almost certainly have a (theoretical) place in the competency evaluation process. Unfortunately, however, even books devoted to the use of traditional measures in forensic evaluation provide virtually no guidance in this regard, especially in the context of restoration procedures. As such, the question should not necessarily be whether traditional or competency assessment measures should be used in the competency restoration context but rather, when and how. There is essentially no empirical research literature to draw upon regarding the use and utility of either competency or traditional assessment measures in the context of competency restoration. Although these data exist in some form and at some level, they have not been published in a manner conducive to aggregation and subsequent application. This is certainly an important area for future consideration” (p. 154).

Translating Research into Practice

“The present study reflects an attempted quantitative synthesis of approximately 40 years of competency restoration research and represents the current state of knowledge with respect to the correlates of competency restoration across various demographic, psycholegal, criminological, and clinical variables. Despite the relatively large total sample size of 12,781 defendants, we learned (quantitatively) little from this attempted meta-analysis aside from confirming both the estimated base rate of competency restoration being at or around 80% and the oft-cited timeline for restoration of most defendants being within six months. Frankly, there is no other compelling, aggregated data to report. However, we have learned much about the problematic state of this literature and where we need to go from here” (p. 154-155).

“In short, the current state of the competence restoration literature is dismal. Much of the available research has used single group correlation designs and has failed to adequately describe the details and procedures used for competency restoration. Research comparing competent defendants (especially those who have never been found incompetent) with those who have been successfully restored is of limited clinical utility. We need more research that examines incompetent defendants at various stages of restoration. In addition, we need more research that uses true pre- and post-restoration design. That is, research that uses samples of incompetent defendants and examines differences between those who are and are not restored at various points in time and after various restoration strategies. This research will be of more clinical utility in this context than simply further investigation into the differences between incompetent and competent defendants, especially those whose competency has never been in question (i.e., pure competent comparison groups)” (p. 155).

“In addition, researchers must report and present data in a way that will allow for future meta-analyses. Many of the published restoration studies do not have data presented in a manner that is accessible to other researchers. This same issue of insufficient and problematic data reporting was also highlighted with respect to the adjudicative competence literature, more generally. After conducting their meta-analysis, Pirelli et al. (2011) published guidelines to assist researchers in publishing results in a way that would make their data accessible for future analyses. These same guidelines are applicable to the competence restoration literature. However, we can set forth at least five additional restoration-specific recommendations for researchers moving forward in this area. Namely, competency restoration researchers should:

(i) use the legal finding as the operational definition of ultimate competency restoration status;
(ii) clearly explain the procedures employed, especially detailing the treatment interventions used – therapeutically, pharmacologically, and otherwise;

(iii) focus on designing and conducting only two types of studies – either retrospective analyses of Restored versus Not Restored groups or prospective pre/post (time 1/time 2) studies of those in restoration programs;

(iv) incorporate both competency assessment instruments and traditional psychological measures, when possible, to contribute to the development of an empirical literature that compares scores on these measures between those who are restored and those who are not; and

(v) conduct an updated meta-analysis within the next few years, as our study only included studies published between 1975–2013” (p. 155-156).
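To make the reporting recommendations concrete, here is a hypothetical sketch of the kind of study-level record that would satisfy recommendations (i), (ii), and (iv) and be directly aggregable by a future meta-analyst. The field names and values are invented for illustration; the article does not prescribe a specific schema.

```python
# Hypothetical aggregable study record; field names and values are
# invented examples, not a schema prescribed by the article.
study_record = {
    "design": "prospective pre/post",                    # recommendation (iii)
    "competency_criterion": "legal finding by the court",  # recommendation (i)
    "interventions": ["legal-education group", "psychotropic medication"],  # (ii)
    "measures_used": ["competency assessment instrument",
                      "traditional psychological measure"],  # recommendation (iv)
    "n_restored": 84,
    "n_not_restored": 21,
    "median_length_of_stay_days": 150,
}

# With counts reported this way, a restoration rate is trivially recoverable.
restoration_rate = study_record["n_restored"] / (
    study_record["n_restored"] + study_record["n_not_restored"]
)
print(f"Restoration rate: {restoration_rate:.0%}")  # → 80%
```

The point is simply that publishing raw group counts and explicit procedure descriptions, rather than only summary statements, is what would let the next meta-analysis succeed where this one could not.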

Other Interesting Tidbits for Researchers and Clinicians

“There have certainly been papers published since 2013 that deserve consideration at this time and by meta-analysts moving forward, some of which focus on the use of assessment measures with restored groups. It is worth adding to this last point regarding the data collection period. Namely, every study must have parameters and start and end points, although some may not understand that a meta-analysis is procedurally similar to single studies in many ways. For instance, there are prescribed data collection start and end dates in prospective research studies as well as specific time periods whereby certain charts are pulled in archival studies. This is particularly relevant to remember in the restoration arena because, presumably, a certain number of people found “not restored” go on to be restored at some point after the data collection ends – and we simply lose that information. For this study, we attempted to conduct a quantitative synthesis of nearly 40 years of empirical research, to include the consideration of many publications and ultimately coding and analyzing over 50 independent study samples. We have readily acknowledged studies have been published since 2013 and, while adding a qualitative synthesis of this more recent research would be ostensibly useful, such would actually run conceptually counter to the point of conducting a meta-analysis in the first place. As such, adding a qualitative synthesis to a quantitative synthesis would only detract from our findings. This study was an attempted quantitative synthesis of the empirical competency restoration literature published between 1975–2013. It is our hope that our efforts will prompt researchers to conduct competence restoration research that can be aggregated in the future, thereby informing practice and policy” (p. 156).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Amanda Beltrani

Amanda Beltrani is a doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.
