What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?
Image caption: Plugging in a new supercomputer in Denmark as part of that country's sovereign AI push. (Image via REUTERS)
- Countries are building domestic ‘AI factories’ to put their own distinct stamp on artificial intelligence development.
- Denmark is the latest country to actively pursue sovereign AI, in a bid to boost domestic research and competitiveness.
- But pursuing divergent development paths could hinder collaboration and undermine security.
“It codifies your culture.”
Denmark unveiled its own artificial intelligence supercomputer last month, funded by the proceeds of wildly popular Danish weight-loss drugs like Ozempic. It’s now one of many sovereign AI initiatives underway, which one CEO believes can “codify” a country’s culture, history, and collective intelligence – and become “the bedrock of modern economies.”
That particular CEO, Jensen Huang, happens to run a company selling the sort of chips needed to pursue sovereign AI – that is, to construct a domestic vintage of the technology, informed by troves of homegrown data and powered by the computing infrastructure necessary to turn that data into a strategic reserve of intellect.
Denmark aims to use its new supercomputer to pursue ambitious pharmaceutical and biotech research, as well as initiatives in just about any field “where AI is a valuable tool.” Initial pilot projects include developing an AI care companion and more accurate weather forecasts.
In Italy, an “AI factory” anchored by a supercomputer was unveiled earlier this year to ensure the evolution of an Italian AI language model for government workers. In Sweden, a supercomputer was recently revamped in part to make that country more attractive to top AI researchers. The UAE has developed its own generative AI model (“Falcon”), and India earmarked $1.2 billion for an effort that includes an AI supercomputer outfitted with tens of thousands of chips.
2023 was the year the world discovered generative AI, as McKinsey puts it. Already, the share of organizations using the technology – a disarmingly easy way to generate code, write boilerplate content, or kickstart research – has roughly doubled.
Flinging open the doors to AI-powered large language models like ChatGPT has broad implications. Do governments necessarily want people taking their cues from chatbots developed in other places that conceivably have contrasting political systems and cultural values?
The more everyone needs to rely on a digital brain built by plunging into the depths of the Internet, the more interest there is in shaping access – and, the thinking goes, no single nation should have a monopoly on that process.
“Codifying” through sovereign AI might also be a way to address fears of losing a specific cultural identity to a globalized world – the kind of fears that feed populist political movements, sometimes with surprising success.
Mastering AI at a national level requires people willing to trust it – which now apparently describes only about one in three people in the UK, France, Australia, and South Korea, and as few as roughly one in five in Japan and Finland.
For a technology so widely distrusted, keeping a big portion of it in-house by building your own version might help allay concerns.
Avoiding ‘sovereignty traps’
If AI didn’t have such an immense potential impact, the US president probably wouldn’t issue a national security memorandum ordering an assessment of the country’s “relative competitive advantage” in the technology – as he did last month.
It’s not surprising that countries are forging expansive plans to put their own stamp on AI. But big-ticket supercomputers and other costly resources aren’t feasible everywhere.
Training a large language model has gotten a lot more expensive lately; the funds required for the necessary hardware, energy, and staff may soon top $1 billion. Meanwhile, geopolitical friction over access to the advanced chips necessary for powerful AI systems could further warp the global playing field.
Even for countries with abundant resources and access, there are “sovereignty traps” to consider. Governments pushing ahead on sovereign AI could risk undermining global cooperation meant to ensure the technology is put to use in transparent and equitable ways. That might make it a lot less safe for everyone.
One example: a place using AI systems trained on a local set of values for its security may more readily flag behaviour out of sync with those values as a threat.
One way to share AI resources across borders would be through what its proponents call a Global AI Compact. The idea is that necessary computing power is like electricity: It is essential for the modern world and shouldn’t be out of reach for anyone.
The kind of collaboration outlined by the compact could help avoid worsening disparities created in the wake of the Industrial Revolution. Nearly 150 years after the first light bulb was illuminated, about 760 million people still don’t have access to electricity.
Electricity is just one item added to a long list of things deemed vital for any society’s economic interests over the centuries. Some originated in more surprising places than others; the American social critic Lewis Mumford wrote about the importance of clocks, for example, initially put to use in monasteries before they “helped give human enterprise the regular collective beat and rhythm of the machine.” Time cards for factory workers followed.
Mumford also believed the moral and political choices made when building a tool are more important than the tool itself. Constructing a supercomputer is one thing; the reasons why are everything.
AI’s rise has coincided with a geopolitical fracturing that has people thinking a lot about how they and their cultures fit into the bigger picture. In the US, a recent presidential election laid bare profound internal division on what its people value, and the role they think the country must play in the world. So, what should its sovereign AI look and sound like?
To codify your culture, you might have to first agree on what it is.
More reading on AI and sovereignty
For more context, here are links to further reading from the World Economic Forum's Strategic Intelligence platform:
- According to this researcher, one country in particular is taking a notably risk-receptive approach to ramping up its AI capabilities that emphasizes speed over flawless execution – and it seems to be working. (The Conversation)
- “In shaping its AI future, Australia must prioritize trust over territorial control.” This piece underlines some of the shortcomings of relying too much on sovereign datasets. (ASPI)
- Ex-European Central Bank President Mario Draghi recently turned heads with his sobering report on the region’s competitiveness, but according to this piece he understated one existential challenge: AI. (Project Syndicate)
- Türkiye’s AI aspirations include developing a “joint large language model with Turkic states,” according to this piece. (RUSI)
- “Who really calls the shots? The oligopolies or the state?” Economic historian Robert Skidelsky ponders the delicate balance between advancing AI and our human essence. (Institute for New Economic Thinking)
- A day in the life of the world’s fastest supercomputer. (Nature)
- Most people think that the next big progression for AI will require building supercomputers at a once unimaginable scale, according to this piece. One proposed way to do that: let chips talk directly to one another using light. (Wired)
On the Strategic Intelligence platform, you can find feeds of expert analysis related to Artificial Intelligence, Geopolitics and hundreds of additional topics. You’ll need to register to view.