
Eric Schmidt: AI should benefit the many, not the few

This article is published in collaboration with Quartz.

Before Skynet can become self-aware, before the robots can rise up, we need a system in place to safely pursue research into artificial intelligence. Or so argue Eric Schmidt, the chairman of Google’s parent company, Alphabet, and Jared Cohen, the head of its tech-minded think tank, Google Ideas.

Schmidt has long been bullish on the technology’s prospects, backing experimental projects such as Alphabet’s self-driving car program and Google’s DeepMind artificial intelligence lab. He has suggested AI will revolutionize how we work and live, and has even gone as far as to tell us not to fear living in a world full of AI.
But it seems even Schmidt acknowledges that a degree of caution is required in AI research, much as other tech luminaries, such as physicist Stephen Hawking and Tesla CEO Elon Musk, have called for. (Musk has gone as far as to pledge $1 billion, with a group of scientists and technologists calling themselves OpenAI, to promote AI research that has a “positive human impact.”)
In an op-ed in Time magazine, Schmidt and Cohen outlined three principles they believe developers, researchers, and companies should follow when exploring AI:

“First, AI should benefit the many, not the few.”

Life-altering technology, Schmidt and Cohen argue, should benefit everyone, not just businesses. “As a society, we should make use of this potential and ensure that AI always aims for the common good,” they wrote.

AI research “should be open, responsible and socially engaged.”

Both Google and Facebook have recently made overtures to bring greater transparency to their AI research. Facebook recently revealed the designs for the servers it uses for AI research, while Google open-sourced the code behind its AI engine, TensorFlow. Critically, though, neither company gave away the data they use to train, test, and strengthen their AI algorithms, which could be the determining factor in their success.

“Those who design AI should establish best practices to avoid undesirable outcomes.”

Researchers need to ask themselves, while systems are still being developed, whether the data they’re using to train AI systems is appropriate, whether their research has side effects they need to consider, and whether adequate failsafes are built into the system. “There should be verification systems that evaluate whether an AI system is doing what it was built to do,” Schmidt and Cohen wrote.

Artificial intelligence is quickly moving from the realm of science fiction to reality. While, thankfully, we haven’t had to worry about computer systems triggering Armageddon just yet, we do have smart systems that can diagnose cancer, manage our appointments for us, and clean our floors on their own.
If scientists and deep thinkers are to be believed, once we’ve cracked AI systems that can truly think and act with their own agency, it won’t be long before they blow past us in intelligence. To keep that in check, we should be shaping the development of this intelligence to benefit humanity rather than disrupt it.
Publication does not imply endorsement of views by the World Economic Forum.


Author: Mike Murphy is a reporter at Quartz, covering technology.

Image: The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence. REUTERS/Fabrizio Bensch.

