How AI could expand and improve access to mental health treatment
AI should complement human healthcare providers rather than replace them. Image: Ümit Bulut/Unsplash
- About 85% of people with mental health issues do not receive treatment, often because of provider shortages.
- Artificial intelligence (AI) could be used to improve mental health care and help alleviate healthcare provider shortages.
- To maximize generative AI's benefits, it should complement human providers rather than replace them.
Mental health is a major global public health concern. While an estimated one billion people suffered from mental health or substance abuse disorders before COVID-19, the pandemic accelerated the problem, resulting in a 25%-27% rise in depression and anxiety, according to the World Health Organization (WHO).
Meanwhile, about half of the world's population is expected to experience a mental health disorder during their lifetime, researchers at Harvard Medical School and the University of Queensland found.
Exacerbating the problem is a shortage of qualified professionals to treat patients. Globally, there are 13 mental health workers per 100,000 people, according to WHO’s Mental Health Atlas.
The disparity is especially stark between high- and low-income economies: high-income countries can have 40 times as many mental health workers per capita as low-income countries.
The workforce shortage significantly limits access to care, particularly in low- and middle-income nations, and contributes to an estimated 85% of people with a mental illness not receiving treatment, according to a study in the International Journal of Mental Health Systems.
The gap between supply and demand highlights an urgent need for alternative solutions. Just as telehealth expanded access to care for many conditions, artificial intelligence (AI) has the potential to reach more mental health patients, many of whom are eager to try it for therapy, according to a survey of 16,000 people in 16 countries conducted for the Oliver Wyman Forum's report, How Generative AI Is Transforming Business And Society.
The applicability of AI tools will vary depending on the severity of a patient's condition — both the underlying diagnosis and the symptoms they experience at a given point in time. But in many cases, AI could increase access to much-needed care. It can also be used to analyze data and help clinicians treat patients in real time with more personalized insights and guidance.
Rise of AI-based mental health therapy
AI is already used in a variety of ways in mental health treatment. It can combine insights from several sources, such as medical textbooks, research papers, electronic health record systems and clinical documentation, to help mental health professionals recommend treatments and predict how patients will respond.
Meanwhile, self-diagnosis apps, chatbots, and other conversational therapy tools that leverage generative AI lower barriers to access for patients experiencing less severe episodes.
Many consumers are willing to try AI to manage their mental health. In fact, 32% of respondents to the Oliver Wyman Forum survey said they would be interested in using AI instead of a person. At the high end, 51% of respondents from India expressed a willingness to use AI-generated therapy, compared to 24% in the US and France.
Interest in receiving support from a human-like therapist to manage mental health is highest in countries with fewer mental health professionals per capita, demonstrating how AI can extend access, especially in developing markets.
Younger people are more willing to leverage the technology for this purpose — 36% of Gen Zers and millennials reported interest in using AI for mental health, compared to 28% of other generations.
This generational difference tracks with prevalence and attitudes toward mental health more generally: Generation Z is around two times more likely to struggle with mental health issues and twice as likely as non-Gen Zers to attend therapy as a result, according to another Oliver Wyman Forum report. Their proactiveness and interest in using AI could kickstart widespread development of AI therapy tools in the future.
Although generative AI has no actual emotions, consumers perceive it as a trusted emotional confidant, perhaps because of its constant availability and its ability to consistently mimic empathy through patterns in its underlying data.
Our research shows that five times as many respondents said generative AI, rather than a human, made them feel they had a reliable confidant for sharing personal thoughts and seeking life advice. And of the four in five people who said they preferred AI in at least one scenario, around 15% believe AI is more emotionally intelligent than humans.
Proceeding with caution on AI-assisted care
There are, however, risks, and a thorough understanding of the technology is required before deploying AI tools to ensure care is safe and effective. Importantly, AI-assisted mental health care should be viewed as part of a multi-pronged approach, rather than a direct substitute for human-to-human interaction.
It's important to recognize that the severity of a patient's condition fluctuates, sometimes within a single episode. Understanding the spectrum of behavioural health conditions is a must before deploying AI tools, and health systems can use that knowledge to provide more personalized care.
Digital health companies must also embed safeguards that direct patients to human support if an app detects that a condition is worsening. The goal is to reduce the risk that a patient receives, and acts on, bad advice, potentially leading to self-harm.
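This kind of safeguard can be sketched as a simple triage rule: continue AI-assisted support for low-risk interactions, and escalate to a human clinician when risk signals appear. The code below is a minimal, hypothetical illustration; the keywords, threshold, and function names are assumptions for the sketch, not any vendor's actual implementation, and a real system would rely on clinically validated risk models rather than keyword checks.

```python
from dataclasses import dataclass

# Illustrative values only; real thresholds would be clinically validated.
CRISIS_KEYWORDS = {"self-harm", "suicide", "hurt myself"}
ESCALATION_SCORE = 0.7

@dataclass
class TriageResult:
    escalate: bool  # hand the conversation off to a human clinician?
    reason: str

def triage(message: str, model_risk_score: float) -> TriageResult:
    """Route a patient message: AI chat for low risk, human support for high risk."""
    text = message.lower()
    # Crisis language always escalates, regardless of the model's score.
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return TriageResult(True, "crisis language detected")
    # A high risk score from an underlying model also triggers a handoff.
    if model_risk_score >= ESCALATION_SCORE:
        return TriageResult(True, "risk score above threshold")
    return TriageResult(False, "continue AI-assisted support")

# A worsening condition triggers a handoff to human support.
print(triage("I have been thinking about self-harm", 0.3).escalate)  # True
print(triage("Work stress is getting to me", 0.2).escalate)          # False
```

The key design choice the article argues for is that the escalation path to a human must be built in from the start, not bolted on later.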
Confidentiality and security are other critical issues. Data can be, and often is, sold for marketing purposes, and regulators around the globe are putting artificial intelligence firms and companies offering AI-enabled technology on notice.
The US Federal Trade Commission has warned that it will pursue cases against companies that loosen data policies to enable greater mining of customer data. Meanwhile, member states of the European Union earlier this year unanimously approved the Artificial Intelligence Act, which sets out strict new standards for regulating AI, as well as significant penalties for violators.
Beyond data mining and privacy concerns, security is a high priority for all healthcare organizations. Healthcare is a top target for cyber criminals, and consumers must understand the risks before downloading apps onto their devices. Awareness and recognition of these drawbacks will help ensure that generative AI is used safely, protecting patients and their right to confidential treatment.
To maximize the benefits of the technology, generative AI should complement human providers rather than replace them. Ultimately, a collaborative approach between AI and human providers will allow mental health professionals to focus even more on empathetic, personalized care, improving patient outcomes, access and efficiency.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.
Fernando J. Gómez and Elia Tziambazis
December 20, 2024