Anchoring effects might have undesirable consequences, possibly making court rulings biased or erratic.
Numeric decisions in law (such as damages or prison terms) are susceptible to the effect of salient numbers present in the decision context. However, the effect might be moderated by a number of factors, which might be used by lawmakers to limit the influence of undesirable anchors or by attorneys to calibrate their demands. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.
Featured Article | Law and Human Behavior | 2021, Vol. 45, No. 1, 1-23
Piotr Bystranowski, Jagiellonian University
Bartosz Janik, University of Silesia in Katowice
Maciej Próchnicki, Jagiellonian University
Paulina Skórska, Jagiellonian University
Objective: We conducted a meta-analysis to examine whether numeric decision-making in law is susceptible to the effect of (possibly arbitrary) values present in the decision contexts (anchoring effect) and to investigate which factors might moderate this effect. Hypotheses: We predicted that the presence of numeric anchors would bias legal decision-makers’ judgment in the direction of the anchor value. We hypothesized that the effect size of anchoring would be moderated by several variables, which we grouped into three categories: methodological (type of stimuli; type of sample), psychological (standard vs. basic paradigm; anchor value; type of scale on which the participants assessed the target value), and legal (relevance of the anchor; type of the anchor; area of law to which the presented case belonged; presence of any salient numeric values other than the main anchor). Method: Twenty-nine studies (93 effect sizes; N = 8,549) met the inclusion criteria. We divided them into two groups, depending on whether they included a control group, and calculated the overall effect size using a random-effects model with robust variance estimation. We assessed the influence of moderators using random-effects meta-regression. Results: The overall effect sizes of anchoring for studies with a control group (z = .27, 95% CI [.21, .33], d = .58, 95% CI [.44, .73]) and without a control group (z = .39, 95% CI [.31, .47], d = .91, 95% CI [.69, 1.12]) were both significant, although we provide some evidence of possible publication bias. We found preliminary evidence of a potential moderating effect of some legally relevant factors, such as legal expertise or the anchor relevance. Conclusions: Existing research indicates anchoring effects exist in legal contexts. The influence of anchors seems to depend on some situational factors, which paves the way for future research on countering the problematic effect in legal settings.
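The abstract reports pooled effects as Fisher's z alongside Cohen's d equivalents, pooled under a random-effects model. As a rough illustration of that arithmetic, the sketch below applies the standard z-to-r and r-to-d conversions and a simple DerSimonian-Laird random-effects pooling. The per-study effect sizes and sample sizes are invented for illustration, and the paper itself uses robust variance estimation (to handle dependent effect sizes), which this simplified sketch does not implement.

```python
import math

def fisher_z_to_r(z):
    # Inverse Fisher transformation: r = tanh(z)
    return math.tanh(z)

def r_to_d(r):
    # Standard conversion from a correlation r to Cohen's d
    return 2 * r / math.sqrt(1 - r ** 2)

def dersimonian_laird(effects, variances):
    # Random-effects pooling: estimate between-study variance tau^2
    # via the DerSimonian-Laird method, then inverse-variance weight.
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Hypothetical per-study Fisher's z effects; var(z) ~= 1 / (n - 3)
zs = [0.21, 0.35, 0.27, 0.19, 0.40]
vs = [1 / (n - 3) for n in [120, 80, 200, 150, 60]]

pooled_z, se = dersimonian_laird(zs, vs)
ci_low, ci_high = pooled_z - 1.96 * se, pooled_z + 1.96 * se
pooled_d = r_to_d(fisher_z_to_r(pooled_z))
print(f"pooled z = {pooled_z:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], d = {pooled_d:.2f}")
```

The paper's reported d values may come from a slightly different conversion chain, so this sketch should be read as the general recipe rather than a reproduction of their numbers.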
Keywords: anchoring effect, meta-analysis, legal decision-making, judges and juries
Summary of the Research
“Theoretical models for the process of generating numerical legal decisions (e.g., the amount of damages or the length of prison term) suggest that this process is typically distorted by salient but not necessarily relevant numerical values present in the decision contexts—also known as anchors. Indeed, the experimental research conducted over the last 30 years seems to clearly indicate that legal decision-makers anchor on salient (but possibly arbitrary) numbers present in the decision environment when rendering numerical decisions in a variety of legal cases. This points to the risk that numerical verdicts are not issued in a predictable and unbiased way, which in turn may lead to a number of socially undesirable consequences” (p. 2).
“The anchoring effect refers to a situation in which a decision-maker, having been asked to estimate a certain numerical value, tends to ground that value on the first (and/or salient) numerical value they encounter. Since the seminal work by Tversky and Kahneman (1974), the anchoring effect has been found to influence numerical estimation tasks in a number of domains, such as negotiations, estimates of product prices, self-efficacy, forecasting, and various general knowledge questions” (p. 2).
“The anchoring effect in legal decision-making seems to be a robust effect, as we estimated large and significant overall effect sizes even after having to split analyzed studies into two separate groups on methodological grounds. However, we also obtained evidence suggesting that research in this area might suffer from a publication bias, which hints that further studies are needed before we can declare that legal anchoring is indeed a robust phenomenon. Although our moderator analysis largely did not yield significant results, the obtained pattern of results indicates the possible moderating effect of some legally important factors (such as legal expertise, the relevance of the anchor, or the anchor value), which will hopefully also lead to further research in this area. Moreover, the results of our analysis provide grounds for criticizing the two dominating theories of anchoring: scale distortion theory (i.e., because providing participants with a bound scale did not lead to decreased effect size) and selective accessibility model (e.g., studies using the standard paradigm did not lead to larger effect sizes)” (p. 19).
Translating Research into Practice
“When it comes to implications for legal practice, the crucial issue is whether the parties should set their demands as high as possible. It seems that the dependence between the anchor value and effect size has its maximum, so the absurdly high anchors may not work as intended. However, even if the strength of the effect is somewhat diminished when an anchor is set very high, the assimilation persists. Therefore, setting the demand amount high (but not absurdly high) provides a solid advantage for the party initiating the proceedings. Despite the fact that, in the criminal context, the effects are somewhat weaker than in civil proceedings, the pattern is similar. Can the defendant effectively respond by setting a counteranchor? Counteranchors appear to work to some extent, yet seem unable to substantially diminish the effect of the main anchor. It appears that the right to the first move favors the party initiating the proceedings” (p. 19).
“Another consideration concerns whether the lawmakers can balance the scales, providing a level playing field for both parties to legal proceedings. The typical tool used to achieve this is to introduce damage caps. Our analysis corroborated the thesis that caps themselves work as anchors and, therefore, might work in a different way than intended (potentially lifting the damage amount instead of lowering it). On the other hand, debiasing methods seemed to limit the effect, but the data on their usage remain scarce, and it would be premature to base the policy on this assumption. Some more structural solutions to address this problem may be applicable. For instance, a promising proposition would be to provide the decision-maker with some values that would serve as meaningful and relevant anchors—such as the mean amount of damages awarded in a given type of case. The potential positive role of some kinds of anchors was recently underlined by Helm and colleagues (2020). As shown by our synthesis, multiple salient numbers present in the decision context seem to limit the anchoring” (p. 19).
“Despite decades of research on anchoring in legal decision-making, a number of important issues remain unsolved, as we still do not have a single psychological theory encompassing all the complexities of this phenomenon. The most important directions for future research are related to the role which numbers play in the decision-making context: Which numbers serve as anchors in decision-making? and What is the role of the saliency of the number, and how does it drive the effect? After solving more theoretical controversies about the cognitive basis of the effect, methods of debiasing should be developed, as limiting the role of irrelevant or biased anchors is crucial to guarantee the fairness of legal proceedings” (p. 19).
Other Interesting Tidbits for Researchers and Clinicians
“It is of utmost importance that numerical legal rulings are not issued in a biased and unpredictable way. The existing literature has suggested that the prevalence of the anchoring effect in legal contexts might be a major obstacle for achieving those goals (although one should remember that some anchors might have a positive role in legal decision-making). This synthesis of the experimental research provides further evidence suggesting the robustness of this effect in legal contexts. The effect of anchoring remains significant independently of the employed research design or branch of law used to construct stimuli. Its relative strength derives from its meaning, as some anchors are more potent than others—especially the initial demand of the party, which reveals a troubling effect from the viewpoint of the fairness of the proceedings” (p. 18).
“The results of the moderator analysis point to some potential directions of further research… One example is the effect of the anchor’s relevance to the case to be decided: It remains a much-understudied topic, whereas the present study clearly indicates it has an impact on the effect size. Assessing the impact of relevance could also help to elucidate anchoring mechanisms in situations in which more salient values are present. Our review showed the scarcity of studies dealing with such issues as the effect of the inadmissibility of the values that are materially relevant and meaningful. If the saliency of the anchor plays a role here, exclusionary rules may increase it. The categories of inadmissibility and meaningfulness and their combinations may be only a germ for future studies, and the current results possibly justify the hope for an outcome that would be less worrying from the legal viewpoint” (p. 18-19).
“Another example of a moderator worth further research is the susceptibility of professional legal decision-makers to anchoring. The common wisdom in the field is that they do not perform significantly better than laypeople… A wholly new direction of research would be comparing the relative strength of anchoring in different branches of law, and this meta-analysis provides a good reason to believe that the effect might operate differently in different legal contexts. Last but not least, an important methodological issue arises from the present synthesis: the substantial effect of the method by which participants are familiarized with the case. To account for the pattern of results described here and to decide which experimental setting leads to a greater ecological validity is not only crucial to research on anchoring in the courtroom but could also prove important to experimental research on legal decision-making in general, as well as shedding some light on the cognitive basis of anchoring mechanisms” (p. 19).
“Strikingly, in the present study we did not analyze the potentially moderating effect of employing some debiasing techniques, such as Larrick’s (2004) cognitive (training in biases, training in representations, etc.) or motivational (accountability, etc.) strategies. Such analysis would be worthwhile, especially because there are at least two methods of debiasing anchoring whose effectiveness in legal contexts has found some corroboration: making individuals aware of the existence and potential influence of anchoring; and making them aware of their accountability for assessing the target value accurately. Another potential debiasing measure (that could be classified as a case of training in representations) employed in the studies was changing the format of the anchor. In legal settings, this might be achieved by comparing anchors in the form of lump sum and per diem. However, because of the small number of studies employing any of these debiasing measures, we were forced not to engage with this topic in the moderator analysis. Nevertheless, this suggests the urgent need for further research in this area, as effective debiasing techniques could prove invaluable in legal settings” (p. 19).
Join the Discussion
As always, please join the discussion below if you have thoughts or comments to add!
Authored by Amanda Beltrani
Amanda Beltrani is a doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.