Roles, Risks, & Regulation of AI in Forensic Psychiatry

Featured Article

Frontiers in Psychiatry | 2024, Vol. 15, 1346059.

Article Title

Beyond Discrimination: Generative AI Applications and Ethical Challenges in Forensic Psychiatry

Authors

Leda Tortora - School of Nursing and Midwifery, Trinity College Dublin, Dublin, Ireland

Abstract

The advent and growing popularity of generative artificial intelligence (GenAI) holds the potential to revolutionise AI applications in forensic psychiatry and criminal justice, which traditionally relied on discriminative AI algorithms. Generative AI models mark a significant shift from the previously prevailing paradigm through their ability to generate seemingly new realistic data and analyse and integrate a vast amount of unstructured content from different data formats. This potential extends beyond reshaping conventional practices, like risk assessment, diagnostic support, and treatment and rehabilitation plans, to creating new opportunities in previously underexplored areas, such as training and education. This paper examines the transformative impact of generative artificial intelligence on AI applications in forensic psychiatry and criminal justice. First, it introduces generative AI and its prevalent models. Following this, it reviews the current applications of discriminative AI in forensic psychiatry. Subsequently, it thoroughly explores the potential of generative AI to transform established practices and introduce novel applications through multimodal generative models, data generation and data augmentation. Finally, it provides a comprehensive overview of ethical and legal issues associated with deploying generative AI models, focusing on their impact on individuals as well as their broader societal implications. In conclusion, this paper aims to contribute to the ongoing discourse concerning the dynamic challenges of generative AI applications in forensic contexts, highlighting potential opportunities, risks, and challenges. It advocates for interdisciplinary collaboration and emphasises the necessity for thorough, responsible evaluations of generative AI models before widespread adoption into domains where decisions with substantial life-altering consequences are routinely made.

Keywords

forensic psychiatry, forensic AI, generative AI, generative artificial intelligence, discriminative AI, ethical AI, large language models, large generative AI models

Summary of Research

“In forensic psychiatry and criminal justice, discriminative [AI] models were developed to assist forensic psychiatrists and legal professionals in assessment and decision-making processes… Discriminative AI models have been developed to evaluate and predict the likelihood of violence, recidivism, or other unlawful or harmful outcomes in individuals with a psychiatric or criminal history” (p. 3).

“Misuses of this technology, especially in the fields of forensic psychiatry and criminal justice, might result in significant harm…[which] can have a profound impact on stakeholders, particularly when integrated into systems used in the forensic domain…

Forensic psychiatric patients are a population already facing high levels of stigmatisation… negative stereotypes are used to justify, legitimise and promote legal restrictions and discriminatory practices… Generative AI models have the potential to worsen these consequences significantly, exacerbating disproportionate criminalisation of marginalised groups, perpetuating stigmatising attitudes and reinforcing harmful links between mental health and social dangerousness….

The need for transparency and accountability, especially following the widespread adoption of generative AI models, calls for the creation of a regulatory framework tailored to respond to the dynamically changing AI landscape and to address not only the technical aspects but also the broader ethical, societal, and economic implications, promoting their responsible and ethical use while favouring critical enquiries on issues related to responsibility, accountability, and labour exploitation” (pp. 6-7).

“Moreover, the growing probability of encountering AI-generated evidence in courtrooms is likely to instil a sense of doubt and scepticism amongst judges, juries and the general public, fostering an environment where all parties are inclined to consider the possibility that their counterparts have submitted AI-generated evidence… which ultimately pollutes the decision-making processes…In forensic psychiatry, where research suggests that juries and judges tend to misinterpret scientific evidence in court … the potential introduction of genAI-fabricated evidence introduces the risk of wrongful convictions grounded in maliciously AI-generated scientific evidence…” (p. 9).

“...It is imperative to integrate diverse perspectives and voices at every stage of the AI process, from dataset creation and curation to model development and utilisation… Without initiatives to rebalance power dynamics, the prospects for democratising AI and ensuring its responsible use remain elusive, especially within the biased criminal justice system… 

This article discusses the impact of generative AI in forensic psychiatry and criminal justice, analysing current and prospective applications… This comparative exploration reveals the convergence of both past and emerging challenges… generative AI not only holds the potential to revolutionise traditional discriminative tasks… but also to open avenues to previously overlooked applications…

…Within forensic psychiatry, some of the most concerning aspects include the spread of misinformation and the reinforcement of discriminatory and criminalising narratives and stereotypes. This unfolds as a result of the increasing overreliance on AI-generated outputs used by judges, legal experts, and mental health practitioners in their decision-making processes. The situation becomes particularly problematic if biased outputs are employed for training and educational purposes, as they could have a negative impact on the perspectives and knowledge of future forensic mental health professionals. In fact, large generative AI models carry the potential to strengthen the negative association between mental health and criminal history; as a consequence, there will be an increased risk of criminalisation of forensic psychiatry patients, especially those belonging to historically oppressed groups, alongside with enhanced profiling, mass surveillance and unfair allocation of resources and treatment assignments” (pp. 10-11).

“...Institutions need to provide society with the necessary tools to investigate and hold those systems accountable [by] maintaining an ongoing dialogue with affected communities, who often lack representation in these discussions, and involving them in the process… algorithms and their decision-making are a reflection of society, we need to work on shifting from a surveillance-based approach to one focused on tackling the root causes of criminalisation and inequality, emphasising the safeguard of mental health and rehabilitation over criminalisation and profiling” (p. 11).

Translating Research into Practice

“Continuous discussions and collaborations among stakeholders, including forensic psychiatrists, AI developers, legal experts, and ethicists, are essential to navigating these complex issues, while considering the diversity in forensic psychiatry practices shaped by differences in healthcare and legal systems among different countries” (p. 11).

Other Interesting Tidbits for Researchers and Clinicians

“In forensic psychiatry, where access to sensitive data, such as medical, criminal and psychiatric records, is bound to strict legal and ethical regulations, obtaining and using these data without adequate data protection measures violates privacy laws and ethical principles. Consequently, the use of generative AI models in such environments calls for robust regulation to ensure the confidentiality and security of patients’ information, including guidelines for data anonymisation and retention and strategies to prevent data misuse and unauthorised access by external parties…” (p. 8).

As always, please join the discussion below if you have thoughts or comments to add!