3 critical challenges for governing digitalization
This representation of a digital subway shows how AI is becoming part of the backbone of our economies and everyday lives. Image: Siemens.
- We must challenge the irresponsible use of technology undermining our societies and democracies.
- As we move into a new digital industrial age, we have an opportunity to change course and usher in a more transparent, democratic era of technology.
- Global technology governance will not be achieved by political actors alone – it also requires a grassroots movement in the tech community.
The mob attack on the US Capitol on 6 January 2021 was a turning point – not just in political history, but also in technology governance.
Long before the attack, social media platforms and their algorithms had played a central role in amplifying conspiracy theories, false information, hate speech, and demagoguery. With the events in Washington, D.C., however, it became abundantly clear that the way we steer, harness, and regulate modern technology profoundly impacts our personal and economic lives. Tech governance is an existential question for our societies and democracies. It’s a matter of national and global security as well as social peace and progress.
There are three things, in particular, that we must do: scrutinize business models and algorithms that amplify extreme points of view for the sake of generating engagement and revenue; ensure digital platforms are open and foster innovation across the economy; and determine how much control we’re willing to cede to autonomous systems and artificial intelligence (AI) – and that humans, not machines, ultimately bear responsibility and can be held accountable.
1. Scrutinize business models that undermine human values and polarize societies
When we talk about modern technology and digitalization, it’s useful to distinguish between two big waves. The first big wave started in the 1990s. It brought us smartphones and tablets, always-on internet, effortless online shopping, and entertainment. Never before had so many consumers had such ease of access to so much information. Never before had so many humans been able to tap into global opportunities and make their voices heard in the way these technologies allow.
But the hallmarks of this first wave of digitalization have all too often been driven by a problematic business model. Let’s take email, cloud storage, or instant messaging, to name just three examples. Because direct monetary fees aren’t associated with such digital services, consumers are given the impression that these services are “free”. But they’re not. Instead of paying with money, users pay with another valuable resource: their personal data. And in many cases users are unaware that that’s what they’re doing. In the offline world, we’d be outraged if the mailman opened and read our letters before dropping them in our mailboxes. In the online world we seem to have forgotten such fundamental human values.
We need to challenge business models that are seemingly “free”, and ensure algorithms are designed around fundamental human values such as privacy.
On social media platforms, the problem of “free” services grew even bigger: the algorithms determining what we see on social media have been programmed to foster engagement – to promote “likes”, “shares”, “comments” and so on. Why? Because engagement equals traffic, equals data, equals revenue and – most often – big profits.
It turns out that one of the best ways to generate engagement is to amplify extreme voices, such as those found on the fringes of the political spectrum. There is more and more evidence that outrageous, strong, and false claims spread especially fast on these networks because the algorithms are designed to do exactly that.
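As a purely illustrative toy model – the posts, scores, and weights below are invented, not any platform's actual system – a feed ranked solely by predicted engagement can be sketched like this:

```python
# Toy illustration: a feed ranked purely by predicted engagement.
# All posts, scores, and weights are invented for illustration.

posts = [
    {"text": "City council publishes budget report", "likes": 40, "shares": 5, "comments": 8},
    {"text": "Outrageous claim: THEY are hiding the truth!", "likes": 90, "shares": 300, "comments": 450},
    {"text": "New bike lanes open downtown", "likes": 60, "shares": 12, "comments": 20},
]

def engagement_score(post):
    # Shares and comments are weighted more heavily than likes,
    # because they generate more traffic and more data.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# Ranking by engagement alone pushes the most provocative post to the top,
# regardless of its accuracy or social value.
feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["text"])  # the provocative post wins the top slot
```

The point of the sketch is that nothing in the objective function penalizes falsehood or extremity; as long as outrage generates engagement, outrage wins the ranking.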
Admittedly, there were extreme voices long before the rise of social media. But it’s not their existence that’s the problem. The issue is that social media algorithms are massively promoting them, allowing increasingly extreme positions to take centre stage in modern public discourse.
The inception error of “free” services has had extreme consequences for society. Correcting this error is perhaps the foremost question of tech governance today. We need to challenge business models that are seemingly “free”, and ensure algorithms are designed around fundamental human values such as privacy.
To return to the earlier metaphor: the reason I can trust the mailman not to open my letters is that I pay for the stamp. His existence doesn’t depend on learning and selling my personal information. It’s up to us to draw the right conclusions from this simple truth.
2. Create open platforms
The second wave of digitalization and technology innovation hit in the early 2010s and greatly accelerated during the COVID-19 pandemic. This second wave is about the digitalization of the industrial world – about digital manufacturing, data-driven healthcare, intelligent buildings, smart cities, and so on. The stakes are even higher than in the first wave because industrial digitalization affects the very infrastructures that underpin modern life.
This time around, “free” services in exchange for personal data are not so much the issue. Industrial systems, after all, are not so easily persuaded to give away their data. However, there are other key challenges – platform governance is one of them.
Digital platforms are some of the most powerful tools humans have ever built, and they excel at driving down the cost of new technologies. They make the benefits of digitalization available to the masses and on a global scale.
There have been two predominant approaches to tech governance so far: a laissez-faire approach that allows private companies to hyperscale digital platforms; and one where governments control or closely monitor platforms. Ironically, both lead to a similar outcome: closed mega-platforms that grant access to their data only on their own terms and that concentrate power and wealth.
A different approach to platforms – one based on openness, interoperability, decentralization, and sharing data – could lead to much better outcomes and much more innovation. It could encourage decentralized networks of innovation that spread their economic benefits much more evenly across society.
Europe – with its strong industrial tradition and know-how in digital manufacturing – could pioneer such a new approach. Gaia-X, launched by Germany and France, and supported by an increasing number of European partners, is an initiative in this direction.
Gaia-X provides a blueprint for a federated exchange between networked platforms. It sets strict standards for data protection and data security, and uses existing standards for data portability. Its primary focus is to replace the “gatekeeper model” of digital platforms with a “shared-data model” – one where all stakeholders will have equal opportunities to access and exchange data and innovate on top. Such a federated exchange of platforms can create a much more level playing field between companies, and it can encourage innovation at startups and small and medium-sized companies.
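The contrast between the two models can be sketched in a deliberately simplified form. The classes and access rules below are my own hypothetical illustration, not Gaia-X's actual architecture:

```python
# Hypothetical simplification of two platform-governance models.
# Class names and access rules are invented for illustration; this is
# not Gaia-X's actual design.

class GatekeeperPlatform:
    """One central operator decides unilaterally who may read the data."""
    def __init__(self):
        self._data = {}
        self._approved = set()

    def publish(self, key, value):
        self._data[key] = value

    def approve(self, member):
        self._approved.add(member)

    def read(self, member, key):
        if member not in self._approved:
            raise PermissionError("access granted only on the operator's terms")
        return self._data[key]


class FederatedExchange:
    """Each participant keeps its own data and shares it under uniform rules."""
    def __init__(self):
        self._nodes = {}

    def join(self, member):
        self._nodes[member] = {}

    def publish(self, member, key, value):
        self._nodes[member][key] = value

    def read(self, requester, owner, key):
        # Uniform rule: any federation member may read shared data.
        # Data stays with its owner rather than in one central silo.
        if requester not in self._nodes:
            raise PermissionError("only federation members may participate")
        return self._nodes[owner][key]


exchange = FederatedExchange()
exchange.join("startup_a")
exchange.join("manufacturer_b")
exchange.publish("manufacturer_b", "machine_telemetry", [0.93, 0.87, 0.91])
print(exchange.read("startup_a", "manufacturer_b", "machine_telemetry"))
```

The design difference is where the power sits: in the gatekeeper model, access is a discretionary grant from one operator; in the federated model, access follows from membership under rules that apply equally to everyone.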
3. Promote transparency in our use of AI
Another key challenge in the second wave of digitalization is using autonomous systems for humanity’s benefit – and ensuring human oversight of critical systems.
Today, AI is no longer simply powering digital assistants on our smartphones. Rather, electronic minds are being integrated into key aspects of our social and economic lives, such as healthcare, mobility, buildings, and factories. Already today, AI can ensure fewer delays and higher availability in public transportation systems. It can help us to optimize and reduce the energy use of buildings and cities. And it can analyze medical scans of our bodies and support radiologists in reaching the right diagnoses.
As Yuval Noah Harari has argued, AI may in fact supersede humans as the “smartest algorithm on the planet”. It’s a bold statement, but one we shouldn’t turn a blind eye to. Instead, we need to reach a very clear understanding of where we’re willing to use AI and how much control we’re willing to cede to it.
Intelligent algorithms for consumers have been with us for a while. They recommend movies on streaming platforms, for instance, or relevant products when we shop online. So far, we haven’t subjected these algorithms to much scrutiny – although they, too, affect our lives and societies.
However, as AI becomes part of the very backbone of our economies and everyday lives – evaluating mortgage applications, guiding autonomous vehicles, or recommending medical decisions – we have to look much more closely than before. We should know precisely where, when, and how AI is being used in these contexts. Transparency of results is critical here, and so is the possibility for humans to intervene and to take overall responsibility.
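One way to make that oversight concrete is a human-in-the-loop pattern: the model proposes, a person decides, and every step is logged. The sketch below is a hypothetical illustration – the scoring rule, threshold, and log format are all invented, not any lender's real system:

```python
# Hypothetical human-in-the-loop sketch for an AI-assisted mortgage decision.
# The scoring rule, threshold, and log format are invented for illustration.

audit_log = []

def model_score(application):
    # Stand-in for a trained model; here, a trivial affordability ratio.
    return application["income"] / application["loan_amount"]

def decide(application, reviewer):
    score = model_score(application)
    recommendation = "approve" if score >= 0.2 else "decline"
    # The human reviewer sees the score and recommendation, and may override.
    decision = reviewer(application, score, recommendation)
    # Transparency: record what the AI suggested and what the human decided.
    audit_log.append({
        "applicant": application["id"],
        "ai_recommendation": recommendation,
        "final_decision": decision,
        "decided_by": "human",
    })
    return decision

# A reviewer who overrides a borderline AI decline after checking context
# the model doesn't see, such as a long customer relationship.
def reviewer(application, score, recommendation):
    if recommendation == "decline" and application.get("long_term_customer"):
        return "approve"
    return recommendation

result = decide(
    {"id": "A-17", "income": 30_000, "loan_amount": 200_000,
     "long_term_customer": True},
    reviewer,
)
print(result)  # the human overrode the AI's "decline" recommendation
```

The essential properties are the ones the article calls for: the AI's role is visible, a human makes the final call, and the audit trail makes it possible to hold someone accountable afterwards.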
An appeal to the grassroots
Tech governance is a powerful tool for addressing the major challenges and consequences of the first and second waves of digitalization. However, given the breakneck pace of innovation, we won’t always be able to govern fast enough “from the top down”. We will also need governance “from the bottom up” – based on a sound human moral compass.
The Tech for Life movement, which was launched in 2018, is all about this bottom-up approach. It’s a code of honor for the tech community, if you will. It’s about putting values into practice, no matter where in the hierarchy you are – values such as purpose, justice, diversity, social responsibility, openness, and trust. The aspiration is to encourage all stakeholders to make sure that technology is used in a socially responsible way – by walking the talk wherever we work.
This cultural change, in combination with wise tech governance, can make the rise of modern technology a true success for society, for democracy, and for humanity. We have the opportunity to remember 6 January 2021 as the moment when things changed for the better. Let’s seize this chance.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.