Fighting for objectivity: Cognitive bias in forensic examinations - CONCEPT Professional Training


Forensic evaluations are not immune to various cognitive biases, but there are ways to mitigate them. This is the bottom line of a recently published article in the International Journal of Forensic Mental Health. Below is a summary of the research and findings as well as a translation of this research into practice.


Featured Article | International Journal of Forensic Mental Health | 2017, Vol. 16, No. 3, 227–238

Understanding and Mitigating Bias in Forensic Evaluation: Lessons from Forensic Science

Authors

Patricia A. Zapf, John Jay College of Criminal Justice
Itiel E. Dror, University College London

Abstract

Criticism has emerged in the last decade surrounding cognitive bias in forensic examinations. The National Research Council (NRC, 2009) issued a report that delineated weaknesses within various forensic science domains. The purpose of this article is to examine and consider the various influences that can bias observations and inferences in forensic evaluation and to apply what we know from forensic science to propose possible solutions to these problems. We use Sir Francis Bacon’s doctrine of idols—which underpins modern scientific method—to expand Dror’s (2015) five-level taxonomy of the various stages at which bias can originate within forensic science to create a seven-level taxonomy. We describe the ways in which biases can arise and impact work in forensic evaluation at these seven levels, highlighting potential solutions and various means of mitigating the impact of these biases, and conclude with a proposal for using scientific principles to improve forensic evaluation.

Keywords

Bias, cognitive bias, cognitive factors, forensic evaluation, forensic psychology

Summary of the Research

“Research and commentary have emerged in the last decade surrounding cognitive bias in forensic examinations, both with respect to various domains within forensic science […] as well as with respect to forensic psychology. […] Indeed, in 2009 the National Research Council (NRC) issued a 352-page report entitled, Strengthening Forensic Science in the United States: A Path Forward that delineated several weaknesses within the various forensic science domains and proposed a series of reforms to improve the issue of reliability within the forensic sciences. Prominent among these weaknesses was the issue of cognitive factors, which impact an examiner’s understanding, analysis, and interpretation of data.” (p. 227)

“While we acknowledge differences between the workflow and roles of various forensic science practitioners and forensic mental health evaluators, we also believe that there are overarching similarities in the tasks required between the forensic science and forensic mental health evaluation domains. Across these two domains, examiners and evaluators are tasked with collecting and considering various relevant pieces of data in arriving at a conclusion or opinion and, across both of these domains, irrelevant information can change the way an examiner/evaluator interprets the relevant data. Bias mechanisms, such as bias cascade and bias snowball, can impact examiners in forensic science as well as in forensic psychology.” (p. 227–228)

“The purpose of this article is to examine and consider the various influences that can bias observations and inferences in forensic evaluation and to apply what we know from forensic science to propose possible solutions to these problems. […] We describe the ways in which biases can arise and impact work in forensic evaluation at these various levels, highlighting potential solutions and various means of attempting to mitigate the impact of these biases, and conclude with a proposal for next steps on the path forward with the hope that increased awareness of and exposure to these issues will continue to stimulate further research and discussion in this area.” (p. 228)

“Sir Francis Bacon, who laid the foundations for modern science, believed that scientific knowledge could only arise if we avoid factors that distort and prevent objectivity. Nearly 400 years ago, Bacon developed the doctrine of “idols,” in which he set out the various obstacles that he believed stood in the way of truth and science—false idols that prevent us from making accurate observations and achieving understanding by distorting the truth and, therefore, stand in the way of science. […] In parallel and in addition to Bacon’s four idols, Dror and his colleagues have discussed various levels at which cognitive factors might interfere with objective observations and inferences and contribute to bias within the forensic sciences. […] Here we present a seven-level taxonomy that integrates Bacon’s doctrine of idols with the previous work of Dror and colleagues on the various sources of bias that might be introduced, and apply these to forensic evaluation.” (p. 228)

“Forensic evaluation requires the collection and examination of various pieces of data to arrive at an opinion regarding a particular legal issue at hand. […] The common components of all forensic evaluations include the collection of data relevant to the issue at hand […] and the consideration and weighting of these various pieces of data, according to relevance and information source, to arrive at an opinion/conclusion regarding the legal issue being evaluated.” (p. 228–229)

“Forensic evaluation is distinct from clinical evaluation, which relies primarily on limited self-report data from the individual being evaluated. Forensic evaluation places great importance on collecting and considering third party and collateral information in conjunction with an evaluee’s self-report data, and forensic evaluators are expected to consider the impact and relevance of the various pieces of data on their overall conclusions. In addition, forensic evaluators are expected to strive to be as impartial, objective, and unbiased as possible in arriving at their conclusions and opinions about the legal issue at hand. […] Hence, it can be argued that forensic evaluations should aspire to be more similar to scientific investigations—where the emphasis is placed on using observations and data to test alternate hypotheses—than to unstructured clinical assessments, which accept an evaluee’s self-report at face value without attempts to corroborate or confirm the details of the evaluee’s account and with less emphasis on alternate hypothesis testing.” (p. 229)

“If we accept the premise that forensic evaluations should be more akin to scientific investigations than clinical evaluations, then forensic evaluators should conduct their work more like scientists than clinicians, using scientific methods to inform their conceptualization of the case and opinions regarding the legal issue at hand. […] We take the lessons from forensic science and apply these to forensic evaluation with the aim of making forensic evaluation as objective and scientific as possible within the confines and limitations of attempting to apply group-to-individual inferences. […] We do so by developing the framework of a seven-level taxonomy delineating the various influences that might interfere with objective observations and inferences, potentially resulting in biased conclusions in forensic evaluation. The taxonomy starts at the bottom with innate sources that have to do with being human. As we ascend the taxonomy, we discuss sources related to nurture—such as experience, training, and ideology—that can cause bias and, as we near the top of the taxonomy, the sources related to the specific case at hand. So, the order of the taxonomy is from general, basic innate sources derived from human nature, to sources that derive from nurture, and then to those sources that derive from the specifics of the case at hand.” (p. 229)

“At the very base of the taxonomy are potentially biasing influences that result from our basic human nature and the cognitive architecture of the brain. […] These obstacles or influences result from the way in which our brains are built […] The human brain has a limited capacity to represent and process all of the information presented to it and so it relies upon techniques such as chunking information (binding individual pieces of information into a meaningful whole), selective attention (attending to specific pieces of information while ignoring other information), and top-down processing (conceptually driven processing that uses context to make sense of information) to efficiently process information […] We actively process information by selectively attending to that which we assume to be relevant and interpret this information in light of that which we already know.” (p. 230)

“Ironically, this automaticity and efficiency—which serves as the bedrock for expertise—also serves as the source of much bias. That is, the more we develop our expertise in a particular area, the more efficient we become at processing information in that area, but this enhanced performance results in cognitive tradeoffs that result in a lack of flexibility and error. […] For example, information that we encounter first is more influential than information we encounter later. This anchoring bias can result in a forensic evaluator being overly influenced by or giving greater weight to information that is initially presented or reviewed. Thus, initial information communicated to the forensic evaluator by the referring party will likely be more influential, and serve as an anchor for, subsequent information reviewed by the evaluator.” (p. 230)

“We also have a tendency to overestimate the probability of an event or an occurrence when other instances of that event or occurrence are easily recalled. This availability bias can result in a forensic evaluator overestimating the likelihood of a particular outcome on the basis of being able to readily recall similar instances of that same outcome. Confirmation bias results from our natural inclination to rush to conclusions that confirm what we want, believe, or accept to be true. […] In the forensic evaluation domain, […] the confirmation bias can exert its influence on evaluators who share a preliminary opinion before the evaluation is complete by committing the evaluator in a way that makes it difficult to resist or overcome this bias in the final interpretation of the data. […] What is important is that we recognize our limits and cognitive imperfections so that we might try to address them by using countermeasures.” (p. 230–231)

“Moving up the taxonomy, the next three sources of influences that can affect our perception and decision-making result from our environment, culture, and experience. First among these are those influences that are brought about by our upbringing—our training and motivations […] Our personal motivations and preferences, developed through our upbringing, affect our perception, reasoning, and decision-making.” (p. 231)

“Closely related to an individual’s motivations are how one sees oneself and with whom that individual identifies. One particularly salient and concerning influence in this realm for forensic evaluators is that of adversarial allegiance; that is, the tendency to arrive at an opinion or conclusion that is consistent with the side that retained the evaluator. [The research shows that] forensic evaluators working for the prosecution assign higher psychopathy scores to the same individual as compared to forensic evaluators working for the defense. […] forensic evaluators assign higher scores on actuarial risk assessment instruments—known to be less subjective than other types of risk assessment instruments—when retained by the prosecution and lower scores when retained by the defense.” (p. 231)

“In addition to the pull to affiliate with the side that retained the forensic evaluator is the issue of pre-existing attitudes that forensic evaluators hold and how these might impact the forensic evaluation process.” (p. 231)

“Language has a profound effect on how we perceive and think about information. The words we use to convey knowledge—terminology, vocabulary, and even jargon—can cause errors in how we understand and interpret information when we use them without attention and proper focus on the true meaning, or without definition, measurable criteria, and quantification. It is important to consider the meaning and interpretation of the words we use and how these might differ by organization, discipline, or culture. It is easy to assume that we know what someone means when they tell us something—whether it be an evaluee, a retaining party, or a collateral informant—but we must be cautious about both interpreting the language of others and using language to convey what we mean.” (p. 231–232)

“In the forensic assessment domain, different methods of conducting risk assessments (using dynamic risk assessment measures versus static risk assessment measures) have been demonstrated to affect the predictive accuracy of the conclusions reached by evaluators. […] highly structured methods with explicit decision rules and little room for discretion outperform unstructured clinical methods and show higher rates of reliability and less bias in the predicted outcomes.” (p. 232)

“Within existing organizational structures, using language with specific definition and meaning that serves to increase error detection and prevention is important for creating a more scientific discipline.” (p. 232)

“The ways in which forensic evaluators produce knowledge within their discipline can serve as an impediment to accurate observations and objective inferences. Anecdotal observations or information based on unsupported or blind beliefs can serve to create expectations about conclusions or outcomes before an evaluation is even conducted. Similarly, using methods or procedures that have not been adequately validated or that have been based on narrow, in-house research for which generalizability is unknown can result in inaccurate conclusions. Drawing inferences on the basis of untested assumptions or base rate expectations can lead to erroneous outcomes.” (p. 232)

“Perhaps one of the most potentially biasing considerations at the [level that deals with influences that result from information that is obtained or reviewed for a specific case but that is irrelevant to the referral question] involves the inferences made by others. […] Detailed information about an evaluee’s criminal history (offenses committed prior to the index offense), in most instances, is irrelevant to the issue of his or her criminal responsibility, which is an inquiry that focuses on the mental state of the individual at the time of the index offense. This irrelevant information, however, can become biasing for an evaluator. Even more potentially biasing can be the inferences and conclusions that others make about an evaluee—including collateral informants as well as retaining and opposing parties—since evaluators typically do not have access to the data or the logic used by others in arriving at these inferences and conclusions. […] It is naive to think that a forensic evaluator can only collect and consider relevant information, especially since many times it is not clear what is relevant and what is irrelevant until all collected materials have been reviewed; however, disregarding irrelevant information is nearly impossible.” (p. 233)

“Attempting to limit, as much as possible, the irrelevant information that is reviewed or considered as part of a forensic evaluation is one means of mitigating bias. Having a third party take an initial pass through documents and records provided for an evaluation to compile relevant information for the evaluator’s consideration is one way of potentially mitigating against biasing irrelevant information. Another potentially mitigating strategy might be to engage in a systematic process of review, with clear and specific documentation of what was reviewed, when it was reviewed, and the order in which it was reviewed, and with the evaluator detailing his or her thoughts, formulations, and inferences after each round of review, beginning with the most explicitly relevant case information (e.g., the police report for the index offense in a criminal responsibility evaluation) and moving toward the least explicitly relevant case information (e.g., elementary school records in a criminal responsibility evaluation).” (p. 233)

“Just as irrelevant case material can be biasing, so too can contextual information included in the reference materials for a forensic evaluation. […] reference materials would include whatever it is that the evaluator is supposed to be evaluating the evidence against and, of course, can include potentially biasing contextual information.” (p. 234)

“The reference materials also underpin the well-documented phenomenon of “rater drift,” wherein one’s ratings shift over time or drift from standard levels or anchors by unintentionally redefining criteria. This means that evaluators should be careful to consult the relevant legal tests, statutes, or standards for each evaluation conducted and not assume that memory for or conceptualization of the standard or reference material is accurate.” (p. 234)

“In addition to irrelevant case information and contextual information included as part of the reference materials for a case, the actual case evidence itself might also include some irrelevant, contextual, or biasing information. Here we conceptualize case evidence as information germane to the focus of the inquiry that must be considered by any forensic evaluator in arriving at an opinion about the particular legal issue. […] Influences at the case evidence level include biasing contextual information from the actual police reports or other data that must be considered for the referral question. Thus, contextual information that is inherent to the case evidence and that cannot be easily separated from it can influence and bias an evaluator’s inferences about the data.” (p. 234–235)

“Irrelevant or contextual information can influence the way in which evaluators perceive and interpret data at any of these seven levels—ranging from the most basic aspects of human nature and the cognitive architecture of the brain, through one’s environment, culture, and experiences, and including specific aspects of the case at hand—but it is important to note that biased perceptions or inferences at any of these levels do not necessarily mean that the outcome, conclusion, or opinion will be biased. […] Even if the bias is in a contradictory direction from the correct decision, the evidentiary data might affect the considerations of the evaluator to some extent but not enough to impact the actual outcome of the evaluation or ultimate opinion of the evaluator. What appears important to the outcome is the degree to which the data are ambiguous; the more ambiguous the data, the more likely it will be that a bias will affect the actual decision or outcome.” (p. 235)

“Consideration of the various influences that might bias an evaluator’s ability to objectively evaluate and interpret data is an important component of forensic evaluation. […] Knowledge about the ways in which bias can impact forensic evaluation is an important first step; however, the path forward also includes the use of scientific principles to test alternative hypotheses, methods, and strategies for minimizing the impact of bias in forensic evaluation. Using scientific principles to continue to improve forensic evaluation will bring us closer to the aspirational goal of objective, impartial, and unbiased evaluations.” (p. 236–237)

Translating Research into Practice

“The presence of a bias blind spot—the tendency of individuals to perceive greater cognitive and motivational bias in others than in themselves—has been well documented. […] forensic psychologists are occupationally socialized to believe that they can and do practice objectively (recall the discussion of training and motivational influences); however, emerging research on bias in forensic evaluation has demonstrated that this belief may not be accurate […] In addition, it appears that many forensic evaluators report using de-biasing strategies, such as introspection, which have been proven ineffective, and some even deny the presence of any bias at all.” (p. 235)

“For forensic evaluation to advance and improve, we must behave as scientists. […] Approaching forensic evaluations like scientific inquiries and using rival hypothesis testing might place the necessary structure on the evaluation process to determine the differential impact of the various data considered.” (p. 235–236)

“Identifying weaknesses in forensic evaluation and conducting research and hypothesis testing on proposed countermeasures to reduce the impact of bias will serve to improve the methods and procedures in this area. Being scientific about forensic evaluation and using scientific principles to understand and improve it appears to be a reasonable path forward for reducing and mitigating bias.” (p. 236)

“The need for reliability among evaluators (as well as by the same evaluator at different times—inter- and intra-evaluator consistency) is a cornerstone for establishing forensic evaluation as a science. By understanding the characteristics of evaluators—including training, culture, and experience—that contribute to their opinions we can begin to propose and study different ways of limiting the impact of these characteristics on objective observation and inferences in forensic evaluation.” (p. 236)

“Research has demonstrated that reliability improves when standardized inquiries are used for competence evaluation. […] Conducting systematic research on the methods and procedures used in forensic evaluation and the impact of these on evaluation outcomes and bias will ultimately allow for development of the most effective strategies for forensic evaluation.” (p. 236)

“Implementing professional training programs that address cognitive factors and bias in forensic evaluation and conducting systematic research on the impact of various training techniques for increasing understanding of these issues will likely improve the methods that forensic evaluators currently use to mitigate the impact of bias in their work. […] Understanding the most effective ways of training evaluators to perform forensic evaluations in a consistent and reliable way while limiting the impact of bias will allow for the implementation of best practices, both with respect to the evaluations themselves as well as with respect to training procedures and outcomes.” (p. 236)

Other Interesting Tidbits for Researchers and Clinicians

“[Sir Francis Bacon’s idols] were categorized into idola tribus (idols of the tribe), idola specus (idols of the den or cave), idola fori (idols of the market), and idola theatri (idols of the theater).” (p. 228)

“Bacon makes the case that experiences, education, training, and other personal traits (the idola specus) that derive from nurture, can cause people to misperceive and misinterpret nature differently. That is, because of individual differences in their upbringing, experiences, and professional affiliations, people develop personal allegiances, ideologies, theories, and beliefs, and these may “corrupt the light of nature.” (p. 228)

“Bacon’s doctrine of idols distinguishes between idols that are a result of our physical nature (e.g., human cognitive architecture) and the ways in which we were nurtured (e.g., experiences), and those that result from our social nature and the fact that we are social animals who interact with others in communities and work together. The first two idols—those of the tribe and the den—result from our physical nature and upbringing respectively, whereas the others—those of the market and theater—result from our social nature and our interactions with others.” (p. 228)
