Featured Article
Article Title
Large language models for the mental health community: framework for translating code to care
Authors
Matteo Malgaroli, PhD - Department of Psychiatry, New York University School of Medicine, New York, NY, USA
Katharina Schultebraucks, PhD - Department of Psychiatry, New York University School of Medicine, New York, NY, USA
Keris Jan Myrick, MS - Partnerships and Innovation, Inseparable, Los Angeles, CA, USA
Alexandre Andrade Loch, MD - Laboratorio de Neurociencias (LIM 27), Instituto de Psiquiatria, Hospital das Clinicas HCFMUSP, Faculdade de Medicina, Universidade de Sao Paulo, Sao Paulo, Brazil
Laura Ospina-Pinillos, PhD - Department of Psychiatry and Mental Health, Faculty of Medicine, Pontificia Universidad Javeriana, Bogota, Colombia
Tanzeem Choudhury, PhD - Department of Information Science, Jacobs Technion–Cornell Institute, Cornell Tech, New York, NY, USA
Roman Kotov, PhD - Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
Munmun De Choudhury, PhD - School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA
John Torous, MD - Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02215, USA (corresponding author)
Abstract
Large language models (LLMs) offer promising applications in mental health care to address gaps in treatment and research. By leveraging clinical notes and transcripts as data, LLMs could improve diagnostics, monitoring, prevention, and treatment of mental health conditions. However, several challenges persist, including technical costs, literacy gaps, risk of biases, and inequalities in data representation. In this Viewpoint, we propose a sociocultural–technical approach to address these challenges. We highlight five key areas for development: (1) building a global clinical repository to support LLM training and testing, (2) designing ethical usage settings, (3) refining diagnostic categories, (4) integrating cultural considerations during development and deployment, and (5) promoting digital inclusivity to ensure equitable access. We emphasise the need for developing representative datasets, interpretable clinical decision support systems, and new roles such as digital navigators. Only through collaborative efforts across all stakeholders, unified by a sociocultural–technical framework, can we clinically deploy LLMs while ensuring equitable access and mitigating risks.
Keywords
LLM; mental health; treatment; research (AI; Artificial Intelligence; Digital Mental Health; Tele Mental Health)
Summary of Research
“…Digital technologies could help to meet mental health needs across populations at scale, particularly in underserved areas where the availability of mobile technology surpasses that of traditional health care… The accelerating enthusiasm for artificial intelligence and large language models (LLMs), exemplified by the widespread adoption of [ChatGPT] has broadened interest [in] mental health applications… understanding the actual capabilities of LLMs and then identifying how they can be realistically deployed for research and care is crucial…
Mental health conditions are often both identified and treated through language, making them an ideal target for LLMs. Efforts in prevention, early diagnosis, monitoring, and even treatment are all potential areas in which LLMs can augment care and research… Research findings from peer-reviewed and preprint studies suggest that LLMs could assist in clinical tasks, including diagnostic assessments, intervention delivery, and empathic support…
“Despite their potential, the clinical deployment of LLMs is hindered by several challenges. Firstly, transparency issues arise from the datasets on which these models are trained, hindering multilingual performance and potentially embedding hidden biases… Furthermore, the technical cost associated with LLMs poses considerable accessibility and implementation barriers, particularly in low-resource settings. These barriers include increasing needs for computing power and dedicated hardware, and growing energy consumption and carbon footprints, all of which risk exacerbating existing global inequalities… Addressing these challenges is crucial to ensure that LLMs can help to serve mental health needs with effectiveness, fairness, and equity” (p. 1).
“Building safe and useful LLMs will require a global and multimodal biobank of psychiatric texts, research, patient notes, clinical measures, personal outcome metrics, biomarker data, [and] behavioural signatures… LLMs are tools that will be able to affect clinical care only if placed in the hands of stakeholders that can optimally and responsibly use them… Patients could use LLMs to help them practise therapy skills, challenge negative assumptions, or even for reality testing in which users can assess the relative objectivity of some thoughts” (p. 2).
“Given that there are no well established biomarkers for any mental illness, and that even the gold standard for diagnostics, the Diagnostic and Statistical Manual of Mental Disorders-5, has variable inter-rater reliability, the challenge of training LLMs cuts directly into one of psychiatry’s more intractable challenges. Early warning signs and linguistic markers identified by LLMs indicate their potential for more meaningful clinical stratification and nosology, reflecting the continuum between wellbeing and illnesses. Despite their potential insights, the sociocultural–technical lens emphasises the need for LLMs to be developed within interpretable clinical-decision support systems… Cultural and linguistic characteristics strongly influence expressions related to mental health, posing challenges for LLMs built on English text and western values…The integration of LLMs into mental health care presents important challenges, yet the opportunities they offer in enhancing research and care delivery are substantial” (p. 3).
“Through concerted efforts in addressing these challenges, we can harness LLMs to help clinicians, researchers, and individuals with lived experiences to improve mental health outcomes globally” (p. 4).
Translating Research into Practice
“Having shared training and educational resources will also support the implementation of LLMs in low-resource settings with limited technical expertise” (p. 2).
“Designing more powerful and representative LLMs for clinical use will require increasingly massive clinical datasets, juxtaposing the benefits of large-scale training with privacy considerations. A promising solution is federated learning, which allows LLMs to learn and be aligned from decentralised datasets without direct sharing, safeguarding data privacy and security” (pp. 2-3).
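To make the federated learning idea above concrete, here is a minimal sketch of federated averaging (FedAvg), the standard aggregation scheme in which each clinical site trains locally on its own records and only shares model parameters, never raw text. This is an illustrative toy with a linear model in NumPy; the site sizes, synthetic data, and update rule are assumptions for demonstration, not the authors' method or any specific clinical deployment.

```python
# Toy federated averaging: sites share only model weights, never patient data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One site's local training step: gradient descent on mean squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the local loss
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Server step: average site weights, weighted by local dataset size."""
    return np.average(weight_list, axis=0, weights=np.asarray(sizes, dtype=float))

# Three hypothetical sites with private (here, synthetic) feature matrices.
sites = [(rng.normal(size=(n, 4)), rng.normal(size=n)) for n in (120, 80, 200)]

global_w = np.zeros(4)
for _ in range(10):                           # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(local_ws, [len(y) for _, y in sites])

print("Aggregated model weights:", global_w)
```

The design point carried over to LLMs is the same: only parameter updates (or adapters) travel between sites and the coordinating server, so decentralised clinical datasets can contribute to training without being pooled or directly shared.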
“Transparency would help to ensure the monitoring of where data are being sourced from, [and] open debate on how to track diversity of data… Promoting digital inclusivity across different linguistic and cultural contexts will require directed efforts, including the support of new roles… community members with special training in digital equity, digital health, and digital engagement [can] help to ensure all people can use new models of care delivery…
Accountability should be enshrined in policy, establishing the differential responsibility of developing and deploying organisations in implementing safeguards, addressing adverse outcomes, and evaluating alignment with public health goals” (pp. 3-4).
Other Interesting Tidbits for Researchers and Clinicians
“Although a new generation of LLM-powered novel diagnostics will not transform care overnight, approaching them as adjunct tools within the broader context of patient care will help ensure that LLMs are effectively integrated into health-care practices…
Model training often relies on a non-transparent selection [of] English-language texts [that] potentially contain hidden biases, also making the estimation and comparison of clinical performance challenging…
A second step is to design LLMs that can flexibly align between and within cultural contexts, because symptoms considered pathological in one setting might be seen as normal, if not valued, in another… LLMs rely on vast training datasets, and biases existing in those datasets will be amplified if attention is not paid to inclusivity at all stages of development and implementation…
Governmental policies will greatly shape how LLMs are accessed across different regions and their ethical governance” (p. 3).
Additional Resources/Programs
As always, please join the discussion below if you have thoughts or comments to add about AI, Artificial Intelligence, Digital Mental Health, or Tele Mental Health!