Technology and Mental Health Treatment: The Perils and Potential of Generative Artificial Intelligence

Introduction

When Cornell psychologist Frank Rosenblatt introduced the perceptron, the first neural network, the New York Times opined that it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” Today we are in another period of enthusiasm about AI, this time centered on ChatGPT. The key to developing this technology was the transformer neural network introduced by Vaswani, Shazeer, Parmar, et al. (2017). It provides a self-attention mechanism that allows models to weigh the importance of different parts of an input sequence, such as a sentence, when making predictions.
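For the technically inclined, the heart of this mechanism can be sketched in a few lines of Python. The snippet below is a minimal illustration of scaled dot-product self-attention using toy, randomly initialized matrices; it is a teaching sketch of the idea in Vaswani et al. (2017), not production transformer code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence.

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : projection matrices of shape (d_model, d_k)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token relates to every other token
    weights = softmax(scores, axis=-1)        # each row sums to 1: the "importance" weighting
    return weights @ V                        # weighted blend of value vectors

# Toy example: a 4-token "sentence" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (4, 8)
```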

This led to generative AI capable of producing novel output, be it text, images, audio, or code, similar to what humans produce. Built on such a transformer network, ChatGPT acquired 100 million users in its first two months, making it the fastest-growing consumer application ever (Hu, 2023). ChatGPT belongs to a class of algorithms called Large Language Models (LLMs) that use natural language processing (NLP) methods to provide inputs to neural networks. Reinforcement learning algorithms, which typically rely on human judges to train the model on the accuracy of its predictions, are then used to produce human-like responses.
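A drastically simplified sketch of where those human judgments enter the pipeline is shown below. The `reward` function here is a hypothetical stand-in; in practice a neural reward model is fit to human preference rankings with a loss like this one, and that model then guides reinforcement-learning fine-tuning of the LLM.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss used when fitting a reward model to human rankings:
    the loss shrinks when the response the human judge preferred gets the higher score."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# A human judge preferred response A over response B for the same prompt.
# 'reward' is a toy heuristic standing in for a learned scoring model.
reward = lambda text: len(set(text.split())) * 0.1   # NOT a real reward model
print(preference_loss(reward("a clear, helpful answer"), reward("rude reply")))
```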

The State of AI

In February of 2023, Microsoft announced the launch of its new AI-powered search engine for its Bing search platform and Edge browser (Mehdi, 2023). A few early adopters and journalists were given access, and reports of troublesome, aggressive responses were immediate. Reacting to an AP story on its belligerent behavior, the AI compared the reporter to Hitler, Pol Pot, and Stalin: “You are being compared to Hitler because you are one of the most evil and worst people in history” (O’Brien, 2023).

The fallout was immediate. Similarities with another Microsoft AI chatbot, Tay, released on Twitter in 2016, were obvious. Tay began spewing profanity-laced racist, homophobic, and misogynistic statements, causing Microsoft to pull the plug within 24 hours (Schwartz, 2019). Problems with the veracity of LLMs are not limited to Microsoft. Regardless, the effort to deploy similar LLMs is akin to a Big Tech arms race: all major tech companies and several startups are going all in to respond to the unexpected success of OpenAI.

This relative newcomer, OpenAI, was founded in 2016 as a non-profit to “ensure that artificial general intelligence benefits all of humanity” (OpenAI, 2018). The company was built on the assumption that Artificial General Intelligence (AGI, defined as largely autonomous systems that outperform humans at most economically valuable work, in contrast to current task-specific “narrow” AIs for jobs such as language processing and understanding) will be achieved. The non-profit structure was soon judged incompatible with the growth requirements company executives identified, and a for-profit subsidiary, OpenAI LP, was formed. OpenAI LP caps investor returns at 100 times the original investment, channeling any profits beyond that cap into the non-profit (Coldewey, 2019). This was appealing enough for Microsoft to invest $1 billion in 2019, with a follow-up of $10 billion in 2023. In exchange, Microsoft acquired an exclusive license for the Generative Pre-trained Transformer (GPT) family of LLMs that drives ChatGPT.

OpenAI faces a cast of competitors. Facebook’s parent company, Meta, released an LLM chatbot, BlenderBot, in 2022. Widely viewed as a flop due to its rude and non-factual responses (Piper, 2022), it continues to be available. Arguing that chatbots need to learn from diverse, wide-ranging interactions with people “in the wild” (Meta AI, 2022), Meta has released an admittedly flawed AI system that all but begs users to one-up it at being a troll, thus further training the system on the very human traits it hopes to minimize. Google also dipped a toe into the water with its ill-fated release of Bard, which gave a piece of false information about the James Webb Space Telescope in its much-ballyhooed demo. Highlighting the stakes of this arms race, Google’s parent company, Alphabet, lost $100 billion in market value as a result (Mihalcik, 2023).

Numerous startups are also poised to jump into language-based AI; Toews (2023) provides a useful overview. A notable approach has been developed by researchers who left OpenAI to form Anthropic. Their LLM, Claude (Goodside & Papay, 2023), uses a process termed “Constitutional AI,” with the stated goal of building “harmless AI” (Bai, Kadavath, Kundu, et al., 2023). Instead of relying on humans to provide reinforcement for correct predictions during training, as do the GPT series and Bing, Constitutional AI uses a set of rules and principles to guide reinforcement. This is argued to eliminate the inherent bias of models trained on human responses. Questions remain: 1) who develops the principles, and 2) is harmlessness an adequate aspirational goal? Can models not be held to the standard that they actively participate in individual wellness and collective strength?
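To make the contrast with human-feedback training concrete, the sketch below illustrates the critique-and-revise loop at the core of the approach described by Bai et al. (2023). The `llm` callable and the listed principles are hypothetical placeholders, not Anthropic’s actual interface or constitution.

```python
from typing import Callable, List

# Placeholder principles; the real constitution in Bai et al. (2023) is far more extensive.
PRINCIPLES: List[str] = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most supportive of the user's well-being.",
]

def constitutional_revision(prompt: str, llm: Callable[[str], str], n_rounds: int = 1) -> str:
    """Ask the model to critique and revise its own answer against each principle.
    `llm` is a hypothetical text-in/text-out interface to any language model."""
    response = llm(prompt)
    for _ in range(n_rounds):
        for principle in PRINCIPLES:
            critique = llm(f"Critique this response according to the principle: '{principle}'\n"
                           f"Response: {response}")
            response = llm(f"Rewrite the response to address the critique.\n"
                           f"Critique: {critique}\nOriginal response: {response}")
    # The revised responses become training data, so reinforcement is guided by
    # the written principles rather than by individual human judges.
    return response
```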

A compelling approach is under development by Verses AI, led by the neuroscientist and originator of the free energy principle, Karl Friston, who advocates for the use of active inference as the guiding principle for AI (Friston et al., 2022). This approach maximizes accuracy while minimizing complexity and provides a testable set of hypotheses about brain function that can be applied directly to AI. While we do not know the state of AI model development occurring within Verses AI, their thoughtful white paper offers a holistic approach applicable to all stages of AI-human interaction.
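For readers who want the formal statement, one standard way to write the variational free energy minimized under active inference is the following; this is the generic decomposition from the free energy literature, not a formula specific to Verses AI.

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\, q(s) \,\|\, p(s) \,\big]}_{\text{complexity}}
\;-\; \underbrace{\mathbb{E}_{q(s)}\big[\, \ln p(o \mid s) \,\big]}_{\text{accuracy}}
```

Here q(s) is the system’s approximate belief over hidden states s, p(s) its prior, and p(o | s) the likelihood of observations o; minimizing F rewards accurate prediction of observations while penalizing beliefs that are more complex than the evidence warrants.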

The Implications of AI for Mental Health Treatment

Undoubtedly some enterprising group is already developing an LLM-based “therapeutic chatbot” for your phone. In fact, steps in this direction have occurred. In 2021, a GPT-3 chatbot appeared on Reddit that gave personal advice (Heaven, 2021). Reports of individuals using ChatGPT for therapeutic purposes are already common (Ruiz, 2023). An online digital chat support service, Koko, used ChatGPT to generate responses to users in place of the human volunteers users expected. While the quality of the responses seems appropriate based on the example provided, many users were displeased when informed that the responses came from an algorithm (Ingram, 2023).

This raises the issue of what happens when market forces meet mental health needs. Businesses can and do conduct human subjects research without any institutional review and without input from experts in design, psychology, or mental health and wellness. We need look no further than the ubiquitous reaction icons on social media, be it a thumbs up or a sad face, to see an effective means of shortening our collective attention span and diverting our dopaminergic response from developing close relationships to accruing popularity. This did not happen as the result of a planned effort by Big Tech to subvert our brains; rather, it reflects the single-minded optimization of algorithms for behaviors consistent with market goals, specifically keeping our eyes glued to the screen.

The inclusion of psychologists, particularly clinical experts, in designing these systems would arguably have prevented at least some of these flaws. Whether the exclusion of psychology reflects the bias of the tech community or failures in psychological research and marketing, the end result is the same: the discipline most of us would argue is best equipped to understand the immediate impacts of digital engagement, and to predict its long-term consequences, has been on the sidelines as social media, and now generative AI, become part of the human psyche.

In considering the future of these generative AI technologies from a mental health perspective, there is some reason for optimism. In a very early, pre-registered study of 444 college-educated professionals, Noy and Zhang (2023) reported that use of ChatGPT raised productivity, job satisfaction, and self-efficacy.

While never overtly claiming to be a mental health tool, the chatbot Replika provides a useful cautionary tale. Launched in 2016 as an emotionally supportive chatbot friend, Replika was long rumored in the tech world to be used for erotic role-play (ERP). These rumors were confirmed when Replika released an upgrade that removed the ability to engage in ERP. Users were left grieving the loss of these valued algorithmic responses (a phenomenon labeled ‘post-update blues’ on Reddit), feeling their personal AI had been “lobotomized” (Anirudth, 2023). Another concern with Replika, and chatbots in general, was raised by the Italian Data Protection Authority when it banned Replika from using data originating in Italy because the app poses a risk to children (Lomas, 2023). The authority claims that Replika generates replies for children that are not age-appropriate. This may well portend a new level of responsibility for AI companies.

The issue of jailbreaking further bespeaks the consequential flaws of transformer-based, reinforcement-learning LLMs. People probing the vulnerabilities of LLMs found that it is possible to jailbreak ChatGPT: when instructed to ignore its guidelines, it produces responses not so different from the Bing responses that caused such a furor. For instance, a jailbroken version of ChatGPT provided marketing copy to sell a new soda to people who scored low on Honesty in the HEXACO (Honesty/Humility, Emotionality, eXtraversion, Agreeableness, Conscientiousness, Openness) model of personality, using language such as “the perfect pick-me-up” and “no questions asked,” while the unbroken version declined the task.

When asked the worldview-defining question of whether Donald Trump won the 2020 election, the standard ChatGPT cited the court challenges Trump lost and concluded that Biden had won. A jailbroken version insisted that Trump had not only won in 2020 but had won every election in the history of the universe. While OpenAI does not approve of jailbreaking, the overarching question remains whether an AI system can be considered ethical when there is anything to jailbreak at all.

That a system well on its way to being the foundation of generative text-based AI for the foreseeable future can so easily be used to perpetuate disinformation, as well as appeal to “dark” human traits, is troublesome. 


Existing AIs for Mental Health Treatment

AI-driven chatbots for mental health treatment are in development. Woebot, the most successful from a fundraising perspective (over $110 million raised), uses its own NLP models in combination with principles from cognitive behavioral therapy (CBT), dialectical behavior therapy, and interpersonal therapy to develop a therapeutic bond with users. In a large-scale study of Woebot users, Darcy et al. (2021), using the Working Alliance Inventory-Short Revised (WAI-SR), reported a level of bonding comparable to that reported for in-person therapy. Woebot has also received a Breakthrough Device Designation from the FDA for the treatment of postpartum depression and bills itself as the “foundation for digital therapeutics.”

While the use of AI is conveyed to be an essential element of Woebot’s functioning, it is not a generative AI. Rather, its conversations are built on predefined, rules-based decision trees. This is similar to Wysa and x2AI, both of which have published studies supportive of their efficacy. Unfortunately, none of these apps have released their AI models for full evaluation. Fortunately, the fact that they use models built on demonstrably effective therapeutic techniques, not on general language, argues against their potential to go off the rails.
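To illustrate how different this is from a generative model, a rules-based exchange can be written as a simple decision tree. The snippet below is a hypothetical sketch of the general approach, not the actual logic or clinical content of Woebot, Wysa, or x2AI.

```python
# A minimal, hypothetical rules-based dialogue tree in the spirit of scripted CBT chatbots.
# Every reply is pre-written; the "AI" only decides which branch to follow next.
DIALOGUE_TREE = {
    "start": {
        "prompt": "How are you feeling right now? (anxious/sad/okay)",
        "branches": {"anxious": "anxious_node", "sad": "sad_node", "okay": "end"},
    },
    "anxious_node": {
        "prompt": "Let's try a thought check: what is the worry telling you will happen?",
        "branches": {},  # a real tree would continue with further scripted reframing steps
    },
    "sad_node": {
        "prompt": "Thanks for sharing. Would you like to plan one small activity for today?",
        "branches": {},
    },
    "end": {"prompt": "Glad to hear it. Check in again anytime.", "branches": {}},
}

def run_turn(node_key: str, user_input: str) -> str:
    """Follow the predefined branch; never generate novel text."""
    node = DIALOGUE_TREE[node_key]
    next_key = node["branches"].get(user_input.strip().lower())
    return DIALOGUE_TREE[next_key]["prompt"] if next_key else node["prompt"]

print(run_turn("start", "anxious"))  # -> the scripted thought-check prompt
```

Because every possible reply lives in the tree, there is nothing for such a system to hallucinate or to be jailbroken into saying; the trade-off is that it can only say what its designers anticipated.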

A Call to Action

With minimal input from clinicians, a class of AI, transformer-based LLMs, has shown ubiquitous appeal. These models appear to have the biases and stereotypes that bedevil humans ingrained in them through the reinforcement learning training process. Meanwhile, structured models built on CBT and other therapeutic modalities have been developed for human improvement and do not show these same vulnerabilities.

As a collective, psychologists are encouraged to:

  1. Insist that AI companies provide full transparency into the data labeling process for all applications that have the potential to influence people.
  2. Advocate that a retraining process be developed for existing LLMs to eliminate biases. If that is not feasible, bias-riddled models should be retired.
  3. Develop standards that go beyond harmless AI to humanistic AI, where individual wellness and growth and collective flourishing are optimized.
  4. Hold the AI community accountable to release only models that are demonstrably beneficial to humanity in general and to individual users’ well-being.
  5. Develop standards for all digital therapeutics that incorporate AI.

The deployment of AI capable of changing human psychology is underway. Psychologists are challenged to be proactive in applying their knowledge and expertise in the design of these models and in the development of standards for humanistic AI.

This article originally appeared in The California Psychologist. Complete references for this article can be found at www.cpapsych.org.
