Can AI algorithms help prevent suicide?

Suicide is the second leading cause of death among teenagers in the US, Europe and South-East Asia

P. Murali Doraiswamy
Professor of Psychiatry and Medicine, Duke University Medical Center
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
This article is part of: World Economic Forum Annual Meeting

"I don’t want to live no more :( :( :(". These were the final words posted by a 14-year-old from Miami, a year before she launched a Facebook live stream and hung herself in front of her webcam. Her two-hour live stream was reportedly viewed by thousands of people, including a friend who called the police. Unfortunately, the authorities arrived too late to save her.

While this story is a particularly alarming one, it’s sadly not unique — suicide is the second leading cause of death among teenagers in the US, Europe and South-East Asia. Every day, millions of social media posts, chats, and queries on Facebook, Snapchat, Google, Siri and Alexa relate to mental health. These posts could act as a trail of breadcrumbs toward people most at risk of suicide. But is there a responsible way for technology companies to use this information to intervene? How should Google or Siri respond to someone searching for information about depression or suicide? How should Snapchat react when two teens talk secretly about cutting themselves?

These questions aren’t just hypothetical; some tech companies are already taking action. In the US, any Google search for the symptoms of clinical depression now surfaces a knowledge panel offering a private screening test for depression (the PHQ-9), along with vetted educational and referral resources. Google has stated that it will not link a person’s identity to their test answers, but will collect anonymized data to improve the user experience. As millions of people take such online tests, a vast trove of data accumulates that may eventually be combined with other information about each user to build a digital fingerprint of depression.
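For readers unfamiliar with the PHQ-9, the short sketch below shows how such a screening test is scored. The nine items (each rated 0-3) and the severity cut-offs come from the published instrument; the function itself is an illustrative assumption, not Google’s implementation.

    # Sketch of PHQ-9 scoring. The severity bands below follow the
    # published scale; the code around them is hypothetical.
    PHQ9_SEVERITY_BANDS = [
        (0, 4, "minimal"),
        (5, 9, "mild"),
        (10, 14, "moderate"),
        (15, 19, "moderately severe"),
        (20, 27, "severe"),
    ]

    def score_phq9(answers):
        """Sum nine item scores (each 0-3) and map the total to a severity band."""
        if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
            raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
        total = sum(answers)
        band = next(label for low, high, label in PHQ9_SEVERITY_BANDS
                    if low <= total <= high)
        return total, band

    # Example: a respondent answering mostly "several days"
    print(score_phq9([1, 2, 1, 2, 1, 1, 2, 1, 1]))  # -> (12, 'moderate')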

Image: A pop-up on Facebook appears when images or words that signal self-harm are detected

In response to live-streamed suicides, Facebook launched an artificial intelligence (AI) algorithm that scans people’s posts (in some countries) for images or words that may signal self-harm. When such signals are detected, support resources are shown to the user and an internal “Empathy Team” is alerted. First responders may be notified if those first two steps do not avert the self-harming behaviour. People cannot opt out of this Facebook initiative. Facebook CEO Mark Zuckerberg says the algorithm helped more than 100 people in the first month after its launch.
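To make that tiered response concrete, here is a minimal, hypothetical sketch of such an escalation flow. Facebook has not published its system; the keyword patterns, scores and function names below are invented stand-ins for a trained classifier.

    import re

    # Hypothetical keyword patterns standing in for a trained model.
    SELF_HARM_PATTERNS = [
        r"\bdon'?t want to live\b",
        r"\bkill (?:myself|himself|herself)\b",
        r"\bcut(?:ting)? myself\b",
        r"\bend it all\b",
    ]

    def risk_score(post_text):
        """Toy stand-in for a model score: fraction of patterns matched."""
        text = post_text.lower()
        hits = sum(bool(re.search(p, text)) for p in SELF_HARM_PATTERNS)
        return hits / len(SELF_HARM_PATTERNS)

    def triage(post_text, behaviour_continuing=False):
        """Mirror the tiered response described above."""
        if risk_score(post_text) == 0:
            return ["no action"]
        actions = ["show crisis resources to the user",
                   "alert internal review ('Empathy') team"]
        if behaviour_continuing:  # earlier steps did not avert the behaviour
            actions.append("notify first responders")
        return actions

    print(triage("I don't want to live no more"))
    print(triage("I'll end it all tonight", behaviour_continuing=True))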

These initial efforts, based on our conversations with team leaders at Facebook and Google, appear sincere and well thought out. Kudos to both companies for jump-starting the conversation. But, as on any new frontier, innovation raises questions and challenges:

  • Is separate consent needed for social media companies to monitor our mental health?
  • If companies are monitoring our mental health, do their algorithms need to be regulated and studied to show efficacy?
  • Should Facebook allow a live stream of a suicide to proceed or cut it off when it’s clear what is happening?
  • Can people’s mental health data be used to target advertisements, for example, for antidepressant pills?
  • Will people in countries with weak data privacy rules be more subject to such monitoring? Already, Facebook’s AI is not available in Europe since it does not comply with the continent’s stricter privacy rules.
  • Last, but not least, is the solution to teenage mental health problems more AI-monitored social media or less?

Today, the ability of AI algorithms to analyse the mood of social media posts is still imperfect. There is wide cultural variation in how mental illnesses are expressed, and, as most clinicians are aware, many people who plan suicide deny it, complicating studies. One analysis of 55 million text messages by the text-based mental health service Crisis Text Line found that people considering suicide are more likely to use words like “bridge” or “pills” than the word “suicide”.
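The kind of corpus analysis behind that finding can be illustrated with a toy comparison of how often a term appears in messages from high-risk conversations versus the rest. The messages and numbers below are made up; only the general technique, comparing per-message term rates between two corpora, reflects the analysis described.

    from collections import Counter

    # Invented toy corpora, not Crisis Text Line's data.
    high_risk_msgs = ["i have the pills ready", "standing on the bridge",
                      "took some pills last night", "near the bridge again"]
    other_msgs = ["failed my exam today", "my mom took her pills",
                  "we walked over the bridge downtown",
                  "thinking about suicide awareness week"]

    def term_rates(messages):
        """Per-message rate of each word across a corpus."""
        counts = Counter(word for msg in messages for word in msg.split())
        return {word: n / len(messages) for word, n in counts.items()}

    risk_rates = term_rates(high_risk_msgs)
    base_rates = term_rates(other_msgs)

    for term in ("pills", "bridge", "suicide"):
        # small smoothing constant avoids division by zero for unseen terms
        ratio = (risk_rates.get(term, 0) + 0.01) / (base_rates.get(term, 0) + 0.01)
        print(f"{term!r}: {ratio:.2f}x relative frequency in high-risk messages")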

Despite these challenges, Zuckerberg was right when he foresaw that AI could spot suicidal behaviour online faster than a friend could. Combining AI with trained counsellors who respond to risky posts would surely improve on the status quo. It is time for a multistakeholder, public-private partnership to rapidly scale this innovation to help all of society. To succeed, such a partnership must revolve around responsible research guidelines, make algorithms transparent and ensure the results of studies are open access. This will allow researchers around the world to contribute, and help prevent the erosion of public trust in the technology companies involved.

Advances in brain science and AI have huge potential to enhance human wellbeing and mental health, provided they are set within the right ethical and scientific framework.

Dr Murali Doraiswamy will be discussing the relationship between technology and mental health at the World Economic Forum Annual Meeting 2018, in the Open Forum session “Suffering in Silence: Tackling Depression”, which will take place on Friday 26 January from 9:00 to 10:30am in Davos. The session was co-designed with students of Davos Secondary School and will be live-streamed with simultaneous interpretation in German and English.
