Personal Perspective: LLMs cannot manage complex mental health issues.

Highly credentialed mental health providers often hear from skeptics that clients would be just as well off talking with their hairdresser, bartender, clergy, friends, family, or Uber driver. We need to acknowledge the kernel of truth here: most of the time, a supportive conversation is enough. But “most of the time” is not nearly enough. Not even close. The expertise lies in being able to address the minority of presenting problems that require extraordinarily high levels of training. Even more important is the expert judgment needed to assess and determine what is routine and simple, and what requires intensive training and proficiency. Now we have large language models (LLMs; e.g., ChatGPT) serving as therapists. Conceptually, LLMs can be effective adjuncts to psychologists. But in reality, this is profoundly problematic.

Nearly all professions facing nascent human-replacing technologies are like this. Most of family medicine involves advice to lose weight, get more sleep, watch symptoms carefully, cut back on alcohol, and other chestnuts that any person or website can offer. This does not mean that primary care physicians can be replaced with LLMs or other technology. The expertise lies in assessment, decision-making, and knowing when to act on the relatively few cases and situations that require immediate follow-up, prevention plans, or referral (Gomez-Cabello et al., 2024; Riedl et al., 2024). Engineers, airplane pilots, air traffic controllers, screenwriters, architects, archivists, and many others are in a similar position. We are even seeing self-driving cars that are remarkably effective. But if roughly .01 percent of trips involve errors, that means a driving incident (sometimes minor, sometimes major) about once every 10,000 unscripted trips.

This may be impressive, but it is not acceptable. It is not even clear that the final .01 percent can be bridged for a cognitively simple task such as driving. Plainly, it will be far more difficult to replace the more cognitively complex tasks performed by professionals caring for complex human beings. And it is precisely what experts provide in those few complex cases that LLMs cannot replace and are unlikely ever to replace (van Rooij et al., 2024).

Replacing professionals with LLMs ignores the importance of the relatively rare situations that require complex professional interventions. Quantifying what percentage of psychological therapy's effectiveness an LLM can deliver is difficult. Simple solutions and technology can likely reach relatively high levels of effectiveness, but far short of 100%. The gap between that level and 100% is what psychologists are trained for. Because most clients' issues at any given time fall within this level of effectiveness, clients may not perceive that the psychologist is constantly assessing and steering sessions to prevent more problematic issues and stands fully prepared to act if a problem requires more intensive intervention. Clients see only simple discussions and plans. Even experienced clients may think they can save money and time by obtaining an LLM “therapist” (Riedl et al., 2024). This mindset is harmful. Even in cases that present as simple, the psychologist is engaged in continuous, complex assessment and prevention of rare and difficult issues that LLMs cannot now address effectively.

By ignoring the shortfall to 100% effectiveness, we are doing irreparable harm to large numbers of clients. We are already seeing increases in suicidal ideation, problematic symptoms, and even suspected psychosis associated with LLM “therapists” (e.g., Regehr et al., 2022). The claim that future LLMs will improve enough to fill this final gap and ultimately replace psychologists runs against everything we know in this area. The gap between what LLMs can address and full effectiveness is precisely what professional psychologists are prepared to address.

The effort required for an LLM to bridge this final gap will demand more data and learning than all of the effort it took to get from zero to where it is now.

LLMs are not the same as actuarial approaches to assessment and treatment. There is evidence that actuarial approaches are often more accurate than the professional judgment and decision making of even the most experienced psychologists (Simchon and Giliead, 2024). Weighted actuarial models can be used effectively. LLMs are very different. They rely on powerful neural networks trained on massive data drawn from the Internet to generate natural language that appears to make sense given the data culled. Accuracy is not the primary goal of LLMs; the appearance of accuracy is closer to their actual aim (van Rooij and Guest, 2025). As mentioned, this is fine conceptually, but it is nothing like a well-established actuarial model that is entirely focused on accuracy.

Artificial intelligence and the notion of LLM therapists can nonetheless prompt improvements and useful reflection for human psychologists. Training can shift toward an increased focus on identifying, treating, and preventing the small percentage of rare and difficult clients and problems that LLM therapists do not address. Psychologists must be well trained in the relatively simple counseling tasks of basic support, watchful waiting, preventive skill development, and building client abilities; but the driving forces are to prevent, identify, and treat uncommon and complex situations and mental health issues. LLMs (or hairdressers and clergy) will never replace professional psychologists who are well trained and experienced in addressing the most difficult and complex issues, and that capacity should remain the touchstone for implementing all forms of psychological services.
