The Space Between: Discrepancies Between Ideal and Actual Practice in Mental Status Evaluations

The results of a mixed-methods study indicate that, in a sample of 99 forensic evaluators, actual practice of mental status examinations is generally aligned with ideal practice, although some discrepancies are also identified. This is the bottom line of a recently published article in the Journal of Forensic Psychology Research and Practice. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Journal of Forensic Psychology Research and Practice | 2021, Vol. 21, No. 5, 417-437

Discrepancies between Ideal and Actual Mental State at the Time of the Offense Evaluation Practices

Authors

Lauren T. Meaux, The University of Alabama
Jennifer Cox, The University of Alabama
Caroline Titcomb Parrot, Comprehensive Forensic Consulting, LLC

Abstract

Evaluator judgments of defendants’ mental state at the time of the offense (MSO) can influence the trier of fact and have implications for fairness and justice; however, current practices, and their alignment with best practice guidelines, are effectively unknown. The limited existing literature indicates that there are some substantive differences between practice recommendations for MSO evaluations and how they are conducted in practice. The current mixed methods study expanded those findings by revealing several discrepancies between how evaluators endorsed certain collateral data sources, clinical interview topics, and psychological and forensic assessment tools in an ideal evaluation scenario and how those ratings compared to their actual practices, as well as identifying the justifications provided for any discrepancies. Overall, results suggest that actual practices are generally aligned with reported ideal practices; however, some discrepancies exist. We discuss these discrepancies in relation to existing ethical and specialty guidelines and propose practice recommendations. In order to protect against potentially biasing information, evaluators are encouraged to institute safeguards when communicating with a defendant’s attorney, implement a systematic review process, and scrutinize their current clinical interviews. Additionally, clinicians should be aware of all measures relevant to the psycholegal construct and may consider requesting further data sources.

Keywords

MSO evaluations, mental state at the time of the offense, forensic psychological practice, criminal responsibility, ethics

Summary of the Research

“When an offender’s criminal responsibility is questioned and he or she raises a Not Guilty by Reason of Insanity (NGRI) defense, the defense, prosecution, or the court may retain forensic evaluators to assess the defendant and opine as to his or her mental state at the time of the offense [MSO]…Previous research shows a considerable amount of variability in evaluators’ MSO opinions…Even with best practice guidelines in place, it is effectively unknown how current MSO evaluation practices compare to recommended practices. However, based on the limited research regarding this topic, there is a reason to believe discrepancies exist between ideal and actual MSO evaluation practices…Furthermore, the reason(s) for such discrepancies between ideal MSO evaluation practices and actual practices is currently unknown…” (p. 418-420).

“…the current study aimed to expand our knowledge of MSO evaluation practices by quantitatively identifying instances of significant discrepancy between the value forensic evaluators assign to certain clinical interview topics, sources of collateral data, and psychological and forensic assessment tools and the frequency with which they are actually used…Additionally, to examine the reasons why these discrepancies exist, the current survey employed…a qualitative response section supplementing participant quantitative ratings…we generally hypothesized some discrepancies between ideal practice ratings and actual practice ratings would emerge for individual items in an MSO evaluation…Participants included 99 forensic evaluators…at the time of completing study materials, participants had practiced psychology for an average of 16.87 years (SD = 11.78 years, Range = 1-48 years)…” (p. 420-421).

“Regarding participant ratings of types of collateral information, [analyses] indicated significantly lower ideal practice ratings than actual practice ratings for an interview with the defendant’s lawyer and previous arrest records, indicating these data are sourced more than what is considered necessary…Participants reported that prior arrest records are used more often than their potential probative value would suggest, which 38% of evaluators indicated was because they are easily accessible or provided as part of discovery…Regarding clinical interview topics, [analyses] indicated significantly lower ideal practice ratings than actual practice ratings for criminal history, education history, medical history, social history, employment history, and a current mental status assessment…These results indicate that evaluators use these clinical interview topics in practice more than their probative value would suggest…” (p. 425-428).

“Although mental status exams are ‘very frequently’ performed during MSO evaluations, participants did not endorse them as commensurately informative. This latter finding could be due in large part to the retrospective nature of MSO evaluations…Regarding psychological and forensic assessment tools, [analyses] indicated significantly lower actual practice ratings than ideal practice ratings for several measures…meaning evaluators would ideally use them more than they actually do…Although rated as more informative than its use in practice would suggest, the Rogers Criminal Responsibility Assessment Scales (R-CRAS; Rogers, 1984) is still ‘rarely…’ used and considered ‘optional’…Neuropsychological tests also received higher ratings in the ideal portion of the survey compared to their use in actual practice. Thirty-three percent of participants stated they do not consider themselves competent and that these measures should instead be given by neuropsychologists or those with specialized training in neurology and psychology…Participants rated forensically relevant measures (e.g., Historical Clinical Risk Management – 20…or Psychopathy Checklist – Revised…) as more informative than their use in practice would suggest, perhaps because they provide information relevant to other potential referral questions…” (p. 429-432).

Translating Research into Practice

“In light of these discrepancies, the following practice recommendations may assist in balancing ethical considerations with real-world practice: (1) institute safeguards against biasing information when communicating with a defendant’s attorney…(2) systematically review one’s opinion formation with frequent documentation of the opinion and potentially influencing/biasing information…(3) consider asking the court or referral source for Miranda documentation, if applicable; (4) scrutinize clinical interview topics for informative value and when appropriate, consider inclusion on a situational basis; (5) become and remain aware of relevant measures, such as FAIs, in order to make an informed judgment on their inclusion or exclusion…and (6) take steps to protect against potentially biasing information for the evaluation and/or trier of fact in combined evaluation (i.e., dual referral) cases, specifically. For example, the evaluator may perform an MSO evaluation and risk evaluation sequentially, instead of concurrently. In addition, the evaluator may choose to bifurcate combined evaluation reports to ensure the trier of fact is only receiving the information necessary to address the specific psycholegal question at hand” (p. 434).

Other Interesting Tidbits for Researchers and Clinicians

“The practice of inquiring about all clinical interview topics (even if they are not likely to divulge relevant information) could be considered perfunctory at best, and in a worst-case scenario, may lead to eliciting pejorative or biasing information that is at most only tangentially related to the psycholegal question. While specialty guidelines specify evaluators should guard against knowingly collecting personal information about the defendant that is not likely to inform a psycholegal opinion (APA, 2013), it could be difficult, if not impossible, to know if or how certain historical data will inform an MSO opinion in advance, and an evaluator must strike a balance with efforts to include pertinent data sources…” (p. 428-429).

“…Responses indicated the optional use of the R-CRAS may depend on the evaluator’s experience level. Some participants (29%) suggested the R-CRAS may be useful for beginning evaluators as it provides a framework for collecting and organizing data…these findings raise questions about whether there is a trickle-down effect, wherein experienced evaluators who are most likely to supervise clinical training do not use the R-CRAS and, thus, do not expose less experienced supervisees to the measure. Even if one does not routinely use a measure, ethical test selection supposes that the examiner makes oneself aware of empirically supported measures available to assess the pertinent constructs, including pros and cons to their use, and to base the inclusion of that instrument accordingly (APA, 2013)…” (p. 431).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!