Microsoft’s CEO on AI and limiting ‘unintended consequences’
Microsoft CEO Satya Nadella spoke with World Economic Forum Founder and Executive Chairman Klaus Schwab. Image: World Economic Forum
- Satya Nadella discussed the implications of AI with Forum Founder and Executive Chairman Klaus Schwab at the 2024 Annual Meeting in Davos.
- The CEO advocated a proactive approach to managing the technology’s potential downsides.
- Microsoft became a major force in the proliferation of AI through its investment in OpenAI.
Artificial intelligence has awed a lot of people in the past year or so. It’s also frightened many of them.
Microsoft has done as much as or more than any company to force the issue, by helping to get the technology in front of a broader audience than ever before. Now, its chief executive says there’s no sense in failing to consider the potentially negative repercussions.
“We have to take the unintended consequences of any new technology along with all the benefits, and think about them simultaneously,” Satya Nadella said during a Davos appearance this week, “as opposed to waiting for the unintended consequences to show up and then address them.”
Being proactive about managing risk is simply the right thing to do, the CEO said.
It was also proactive on Microsoft’s part to invest an initial $1 billion in what was then a little-known AI research lab called OpenAI in 2019 – granting it access to OpenAI’s technology, and an ability to shape and spread it that’s endured despite recent internal tumult at the maker of ChatGPT.
“Regulation that allows us to ensure that the broad societal benefits are amplified, and the unintended consequences are dampened, is going to be the way forward,” Nadella said.
The still-private OpenAI may be valued at $100 billion with its next round of funding, a figure on par with more mature household names like Starbucks or Citigroup. As users increasingly log in to prompt OpenAI's technology to do everything from meal planning to software coding, that technology only becomes more capable.
OpenAI and Microsoft are of course not the only companies starting to commercialize AI for a massive, global user base.
Thankfully, according to Nadella, the conversations he’s been having with other experts indicate there’s a “broad consensus” about how best to contain its downsides, while reaping potential benefits.
The CEO and others suggest careful internal scrutiny when it comes to developing the innards of a “large foundation model” like ChatGPT, then applying external regulations to specific applications of that technology – a new medical device, for instance.
If AI is allowed to flourish, Nadella sees more profound use cases on the horizon. “If you can fundamentally accelerate science,” he said, that could mean new cures for diseases, and new ways to help transition away from fossil fuels.
Still, wariness has grown alongside excitement. “Adverse outcomes” from AI rank among the top hazards in the Forum’s recently published Global Risks Report, alongside things like extreme weather events and armed conflict.
Nadella called for taking the long view on AI’s development. “The biggest lesson of history is… not to be so much in awe of some technology that we sort of feel that we cannot control it, we cannot use it for the betterment of our people.”
Those prospects for betterment have come alongside tangible current benefits for shareholders; last week, at least briefly, Microsoft eclipsed Apple to become the most valuable company in the world.
AI’s commercial impact could translate far more broadly, Nadella said, at a time when, on an inflation-adjusted basis, there isn’t much economic growth in the developed world to speak of: “In a world like that, we may need a new input.”
But Nadella is also a proponent of stakeholder capitalism – a way of accounting for not just investors, but also the natural environment and the social fabric.
“Our investors should care about multiple stakeholders,” he said, “because that's the only way they can get long-term returns.”
Proceeding judiciously with AI could be one way to help look after those stakeholders. “I don't think the world will put up anymore with any of us (in the tech industry) coming up with something that has not thought through safety, trust, equity,” Nadella said. “These are big issues.”