Emerging Forensic Implications of the Artificial Intelligence Revolution

Featured Article

The Journal of the American Academy of Psychiatry and the Law | Volume 51, Number 4, 2023, 475-479

Article Title

Emerging Forensic Implications of the Artificial Intelligence Revolution

Authors

Declan J. Grabb and Cara Angelotta, MD

Keywords

AI; forensic; machine learning; risk

Summary of Research

Artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT and Bard, is rapidly transforming the landscape of forensic psychiatry. An article in the Journal of the American Academy of Psychiatry and the Law discusses AI's integration into clinical and forensic practice, emphasizing its applications in electronic medical records, chart review, and legal document analysis. The lack of case law and legal precedent for AI in medicine poses challenges, raising concerns about liability and potential patient harm. The technology's specifics, such as "temperature" (the variability of responses), lack of external validity, and bias reinforcement, are critical considerations. Although litigation remains limited, instances of AI-generated misinformation affecting mental health have already surfaced, underscoring the need for ongoing discussion and vigilance within the field.

Translating Research into Practice

Temperature and Variability: In the context of LLMs, "temperature" refers to the degree of randomness or variability in the model's responses. A higher temperature introduces more variability, potentially producing different outputs for the same query. Forensic mental health professionals must consider how this variability might affect the interpretation of AI-generated information, especially in legal and clinical contexts.
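The effect of temperature can be illustrated with a minimal sketch of temperature-scaled softmax sampling, the mechanism commercial LLM APIs expose through their temperature parameter. The logits and three-token vocabulary below are hypothetical, chosen only to show how low temperature concentrates probability on one answer while high temperature spreads it out:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into next-token probabilities.

    Dividing logits by a small temperature sharpens the distribution
    (near-deterministic output); a large temperature flattens it
    (more variable output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.1]

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # much more variable

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 the top token receives almost all of the probability mass, so repeated queries return essentially the same answer; at 2.0 the alternatives become plausible draws, which is why the same forensic question can yield different responses run to run.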

Lack of External Validity: The absence of citations and external validation in AI-generated responses poses a challenge, particularly when such output is used in clinical care or forensic assessments. Professionals should be cautious about relying on AI-generated information without external verification.

Reinforcement of Bias: Concerns about AI reinforcing biases and discrimination highlight the importance of scrutinizing the datasets used to train these models. Forensic mental health professionals should be aware of potential biases in AI systems and assess their impact on decision-making in legal and healthcare settings.

Patient Communication: With patients increasingly using LLMs for therapy and diagnosis, mental health professionals need to inquire about patients' use of such technology. Educating patients on the limitations and pitfalls of AI tools becomes essential to avoid potential adverse outcomes.

Ongoing Education: Forensic mental health professionals should stay informed about AI technologies through continuing education. Incorporating AI into professional development, including CME, can empower clinicians to navigate the evolving landscape and contribute to informed discussions about technology's role in mental health and forensic practice.

Other Interesting Tidbits for Researchers and Clinicians

Researchers in the field of AI in forensic mental health have an array of avenues to explore, each offering valuable insights.

Temperature settings: Investigation of the "temperature" concept in LLMs is a natural starting point, focusing on its influence on response variability. This research could extend to determining "optimal" temperature settings that keep outcomes reliable and consistent.

Legal precedent and liability: Analyzing existing cases and court approaches to AI-generated information and potential harm provides another critical lens on liability frameworks.

External validity: Challenges tied to the external validity of LLMs, particularly the absence of citations, present a unique area for exploration. Efforts could be directed at developing methods to bolster the credibility of AI-generated information, emphasizing citation practices.

Bias and discrimination: Investigating inherent bias and discrimination in AI models deployed in forensic mental health is essential, with a focus on identifying and mitigating biases to promote fair outcomes in legal and healthcare domains.

Education: Lastly, the effectiveness of educational initiatives for mental health professionals regarding AI technologies is crucial, as is assessing how continuous learning programs prepare professionals to engage with AI responsibly and effectively.