
Bias in Forensic Mental Health Evaluations

A recently published article in Psychology, Public Policy, and Law reviews cognitive biases that affect forensic mental health evaluations and offers suggestions for reducing their impact. Below is a summary of the article and suggestions for applying these principles in forensic practice.

Featured Article | Psychology, Public Policy, and Law | 2014, Vol. 20, No. 2, 200-211

The Cognitive Underpinnings of Bias in Forensic Mental Health Evaluations

Authors

Tess M. S. Neal, University of Massachusetts Medical School
Thomas Grisso, University of Massachusetts Medical School

Abstract

We integrate multiple domains of psychological science to identify, better understand, and manage the effects of subtle but powerful biases in forensic mental health assessment. This topic is ripe for discussion, as research evidence that challenges our objectivity and credibility garners increased attention both within and outside of psychology. We begin by defining bias and provide rich examples from the judgment and decision-making literature as they might apply to forensic assessment tasks. The cognitive biases we review can help us explain common problems in interpretation and judgment that confront forensic examiners. This leads us to ask (and attempt to answer) how we might use what we know about bias in forensic clinicians’ judgment to reduce its negative effects.

Keywords

bias, judgment, decision, forensic

Summary of the Research

In this article, Neal & Grisso “apply information from multiple domains of psychological science (i.e., cognitive, social, methodological, clinical) to identify and better understand bias in forensic mental health assessment. This topic is ripe for discussion as several studies have investigated potential bias in the work of forensic experts” (p. 200). Three major heuristics (representativeness, availability, and anchoring) are reviewed, along with examples of the ways in which each might influence the judgment and decision-making of forensic evaluators.

“Forensic evaluators are asked to gather comprehensive data with regard to the referral issue, to analyze the patterns and interrelationships among the various pieces of data (called configural analysis), and then interpret the data to reach an opinion that will assist the trier-of-fact…However, human brains do not have an endless capacity for processing information. Simon (1956) called this constraint ‘bounded rationality’: we do the best we can within the design of our cognitive machinery. As a consequence, people often use cognitive shortcuts or simplifying strategies to manage cognitive load” (p. 201).

Three common cognitive shortcuts, or heuristics, that we use on a regular basis are representativeness, availability, and anchoring.

Representativeness Heuristic

“The representativeness heuristic is a mental shortcut in which the subjective probability of an event or sample is estimated based on its similarity to a class of events or a typical specimen” (p. 202). The representativeness heuristic can easily lead one to neglect important information, such as base rates.

“In clinical contexts, experts often underutilize or ignore base rate information and tend to rely instead on case specific information…The problem with this practice is that salient but less predictive case-specific information can draw the clinician’s attention away from the relevant base rates and have the adverse effect of decreasing accuracy. Base rates are critical and should be part of a forensic evaluator’s thinking process whenever possible” (p. 204).
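To make the base-rate point concrete, here is a minimal sketch of how Bayes’ rule combines a base rate with case-specific evidence. The calculation, the indicator, and all of the numbers are hypothetical illustrations, not figures from the article:

```python
# Hypothetical illustration of why base rates matter (numbers are invented,
# not taken from the article): a salient case-specific indicator can still
# yield a modest posterior probability when the underlying base rate is low.

def posterior_probability(base_rate, sensitivity, false_positive_rate):
    """Bayes' rule: P(condition | indicator present)."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Suppose only 5% of evaluees in a given referral context have the condition
# of interest (base rate), and a clinical indicator flags 80% of those who
# have it but also 20% of those who do not.
print(posterior_probability(base_rate=0.05, sensitivity=0.80, false_positive_rate=0.20))
# ~0.17 -- far lower than the indicator's salience alone might suggest.
```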

Availability Heuristic

“The availability heuristic refers to the ease with which one can recall other examples of the event in question, which increases the likelihood of an interpretation…Other factors that increase availability are frequency and salience” (p. 204).

“Clinicians may make conclusions based on inadequately formed hypotheses, they may not gather the necessary data needed to adequately test their hypotheses, and they may seek and rely mainly or exclusively on information that confirms their [initial] ‘hunch’…all of [which] can have subtle effects that set up the potential for confirmation bias” (p. 204).

“Confirmation bias may also occur as a result of sharing a preliminary opinion before the evaluation is complete. For instance, forensic mental health clinicians might be asked to answer questions about the way they are ‘leaning’ in a case based on their initial interpretation of partially collected data (by retaining parties, supervisors, colleagues). Answering such questions prematurely commits the examiner in a way that makes it more difficult to resist confirmation bias when completing the final interpretation of one’s data” (p. 205).

Anchoring Effect

“The anchoring effect is a cognitive phenomenon in which we are overly influenced by initial information encountered. Anchoring, akin to priming and the halo effect, increases the weight of first impressions sometimes to the point that subsequent information is mostly wasted. The sequence in which we encounter information is often determined by chance, but it matters” (p. 205).

“Any forensic evaluator might hear a compelling story told by the first person interviewed and begin to formulate hypotheses about the case, only to hear a different (and perhaps contradictory) story later from another party that might be just as coherent and compelling. Unfortunately, people often have difficulty sufficiently adjusting an original hypothesis based on information encountered later. The evaluator must somehow make sense of the contradictory information and beware the anchoring effect of the information from the first party the evaluator happened to interview” (p. 205).

Translating Research into Practice

In essence, this entire article is a review of research and a translation of that research into practice, or at least into the pitfalls of practice. The authors present a number of things for forensic evaluators to think about, as well as some research ideas for teasing apart these important issues.

“One of the challenges of this work will be delineating the elements of the ‘forensic evaluation process’ where biases and errors may exert an effect. There may be many ways to construe the process, but here we offer a simple one to provide an example. The forensic evaluation process begins with a referral question that guides the evaluation. The evaluation then includes (a) selection of types of data to collect, (b) collection of the data, (c) analysis of the data, and (d) interpretation of the data to formulate a forensic opinion. These domains within the evaluation process might allow us to discover how bias works in various ways associated with the steps in the process. Considered respectively, they offer the potential to determine how biases (a) narrow or expand our search for relevant data, (b) influence the quality and integrity of the data that we collect, (c) influence how we score or subjectively classify the data we have obtained, and (d) influence how we combine the data when testing our hypotheses and their alternatives regarding the answer to the forensic question” (p. 207).

“One specific debiasing strategy that has a good chance of being useful for forensic clinicians is locating and then keeping in mind relevant base rates…Kahneman (2011) suggests that the corrective procedure is to develop a baseline prediction, which you would make if you knew nothing specific about the case (e.g., find the relevant base rate). Second, determine whether the base rate matches your clinical judgment about the case. When thinking about your clinical judgment, always question the strength of the evidence you’ve gathered (How sound is the evidence? How independent are the observations? Don’t confuse correlation with causation, etc.). Then aim for a conclusion somewhere between the baseline prediction and your clinical judgment (and stay much closer to baseline if the evidence underlying the clinical judgment is poor). Clinicians often tend to exaggerate the persuasiveness of case-specific information. Therefore, anchoring with a base rate and then critically evaluating the strength of the case-specific diagnostic information are offered as recommendations to combat the representativeness heuristic and overconfidence.

Another debiasing strategy is to ‘consider the opposite.’ This strategy may be particularly useful for forensic clinicians, given the adversarial nature of courtroom proceedings. Expert witnesses testify through direct- and cross-examination. Imagining how one’s assessment methods, data, and interpretation will be scrutinized during cross-examination is often recommended as a trial preparation strategy. Part of that consideration is recognizing that the opposite side will not merely attack the proof for the clinician’s opinion, but might also pose alternative interpretations and contradictory data, asking the clinician why they were rejected” (p. 207).
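One way to picture Kahneman’s corrective procedure described above is as a blend of the baseline prediction and the clinical judgment, weighted by evidence quality. The sketch below is our own illustration of that idea, not a method proposed in the article; the weighting scheme and the numbers are hypothetical:

```python
# Rough sketch of the "anchor on the base rate, then adjust" idea
# (this formalization is illustrative only, not from the article):
# start from the baseline prediction and move toward the clinical
# judgment only in proportion to the quality of the evidence.

def tempered_prediction(base_rate, clinical_judgment, evidence_quality):
    """Blend the baseline prediction with the clinical judgment.

    evidence_quality is a subjective 0-1 rating of how sound and
    independent the case-specific evidence is; poor evidence keeps
    the conclusion close to the baseline prediction.
    """
    if not 0.0 <= evidence_quality <= 1.0:
        raise ValueError("evidence_quality must be between 0 and 1")
    return base_rate + evidence_quality * (clinical_judgment - base_rate)

# Weak evidence: stay near the 10% base rate despite a 70% clinical hunch.
print(tempered_prediction(base_rate=0.10, clinical_judgment=0.70, evidence_quality=0.2))  # 0.22
```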

Other Interesting Tidbits for Researchers and Clinicians

The authors present an interesting “thought experiment” in which they imagine an adversarial system where evaluators would be free to present the most favorable interpretation of the data for the party by which they were retained. The entire article is definitely worth the read!

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!