The big election year: how to protect democracy in the era of AI
Artificial intelligence (AI) will make misinformation and disinformation easier to produce and more prolific during the 2024 elections. Image: World Economic Forum/Ciaran McCrickard
- The year 2024 will see 4.2 billion people go to the polls but, in the era of artificial intelligence (AI), misinformation and disinformation could mean these elections are not the democratic exercise intended.
- The Global Risks Report 2024 named misinformation and disinformation a top risk, one that could destabilize societies as the legitimacy of election results is called into question.
- The convergence of the 'post-truth era', election processes and the surge in generative AI over the last year means that tech companies, governments and media must consider how they can help protect democracies.
The year 2024 is the biggest for elections worldwide.
While Bangladesh and Taiwan elected their leaders earlier this month, upcoming elections in major democracies, including the United States, India and the United Kingdom, were a focal point of discussion at Davos 2024, the recently concluded Annual Meeting in Switzerland.
As leaders discussed how some results could reflect the strengths and weaknesses of democracy, attention also turned to how artificial intelligence (AI) could help and hinder this electoral process.
Undermining democracy?
Misinformation and disinformation rank first among the top 10 risks identified by the Global Risks Perception Survey. The easy-to-use interfaces of large-scale AI models have already enabled an explosion in falsified information and so-called “synthetic” content – from sophisticated voice cloning to counterfeit websites – as detailed in the Global Risks Report 2024.
The report also warned that disinformation in these elections could destabilize societies by discrediting and questioning the legitimacy of governments. Proponents of AI argue that the same generative AI (GenAI) technology that could undermine democracies could also help combat nefarious forces seeking to do so.
“What is the truth and what is the ground truth in the information environment today?” In the era of deepfakes, that question is the starting point of any discussion, said Alexandra Reeve Givens, chief executive officer of the Centre for Democracy and Technology, in Davos.
Other election threats will come in the form of robocalls and targeted automated text messages carrying false information designed to alter voter behaviour – both made easier by AI. Election officials also face threats of phishing and doxing.
There is also a deeper issue of trust.
“We become civic individuals through nurture, not genetically,” said Ian Bremmer, president of the Eurasia Group, in the session 4.2 Billion People at the Ballot Box. “We become civic beings and we have institutions around us that shape us, that allow us to connect to people around us and if you look in the United States over the last 40 years, those institutions have fragmented and they have lost trust.”
“Nurture is being replaced by an algorithm,” he added, calling out the “real-time experiment that’s being run on us and democracy right now.”
The recent Edelman Trust Barometer also found that most respondents believe that when innovation is mismanaged, technology and society leave them behind, fuelling the belief that the system – capitalism in particular – is biased and may do more harm than good.
In the same session, Rachel Botsman, an author and lecturer on trust, added to Bremmer's point that AI systems essentially distribute trust, disrupting democracy.
“Now, the thing that frightens me is that this creates a vacuum of chaos and what we might call the most untrustworthy individuals, they understand these dynamics. They understand how to take all these fragments and this chaos and create some kind of absolute truth, create some kind of false certainty that is often mistaken for power and that’s where trust goes.”
In an election context, where it’s hard to control information, there is chaotic energy, said Botsman, adding, “Therefore, voices that cut through are often clear and absolute, and they push against something rather than stand for something.”
The question, therefore – one also asked by Foreign Policy's editor-in-chief Ravi Agrawal at the session Protecting Democracy Against Bots and Plots – is: "How can we ensure that AI and technology are forces for good rather than chaos?"
Harness tech for good or stop nefarious AI?
“The good guys have more access to AI,” insisted Matthew Prince, co-founder and chief executive officer of Cloudflare, discussing how machine learning could identify threats and play a protective role.
He added that access to the technology his company offers – and presumably to that of others like it – could at least help less-resourced nations maintain stable government infrastructure that benefits everyone. That’s why, he said, “it’s incumbent on us to protect democracies everywhere.”
During the session Building Trust in Democracy, Transparency International chair François Valérian agreed that big data and AI can help civil society organizations identify the corruption patterns and conflicts of interest that subvert democracies.
“Moving to digital, people can see your own data and if there’s a mistake, you can say, hey, there’s a mistake in it, and the government needs to correct it,” he said.
There is still trust to be built around using tech for good – the UK scandal in which a faulty Post Office computer system falsely placed hundreds of postmasters in the frame for fraud is a case in point.
At the same time, it was argued that the pandemic accustomed people to digitalization, with less fear of data being leaked or hacked at every turn. The usual checks and balances around data are harder to apply to AI, however, where not everything is visible to the user.
Regulatory discussions have moved forward significantly, with tech platforms acknowledging that safety must be central to innovation – innovation that shows no sign of subsiding.
As Satya Nadella, executive chairman and chief executive officer of Microsoft, said, “I don’t think the world will put up any more with any of us coming up with something that has not thought through safety, trust, equity.”
Getting regulation right
“The biggest lesson of history is that not to be so much in awe of some technology that we feel that we cannot control it, we cannot use it for the betterment of our people.
“So in that context, we need our politicians to lean in,” added Nadella at the Davos session A Conversation with Satya Nadella, recognizing that a lack of understanding of technology has hindered regulatory attempts.
Elsewhere, UK chancellor of the exchequer Jeremy Hunt advocated a “light touch” approach from government so as not to “kill the golden goose before it has a chance to grow”: regulators do not yet fully know what the technology is capable of, so a cautious approach is needed. There was also an acknowledgement that AI outputs are a moving target, making regulation difficult, since rules would have to anticipate and outlaw every hypothetical harmful output.
Sam Altman of OpenAI went further, turning to the question of who makes these decisions.
“I think it is on us to figure out a way to get the input from society about how we’re going to make these decisions,” he reasoned. “Not only about what the values of the system are but what the safety thresholds are and what kind of global coordination we need to ensure that stuff that happens in one country does not super negatively impact another.”
To unite industry leaders, governments, academic institutions and civil society organizations around the development of transparent and inclusive AI systems, the World Economic Forum launched the AI Governance Alliance in June 2023. The initiative aims to shape the future of AI governance, foster innovation and ensure that the potential of AI is harnessed for the betterment of society while upholding ethical considerations and inclusivity.
Holding misinformers to account
What regulation and other guardrails against the misuse of AI cannot do is control misinformation from politicians themselves, as political scientist Rasmus Nielsen pointed out in a recent FT op-ed.
Obvious examples emerged during the previous US administration, but in the UK, too, the fact-checking non-profit Full Fact reported that as many as 50 members of parliament made false statements without correcting them.
The consequences of such practices include millions of Americans believing the 2020 election was stolen; the question now is how politicians could use AI to mislead voters in the 2024 elections.
The basic functioning of the media is fundamental to determining truth – a message that came up across sessions discussing the misinformation and disinformation conundrum.
Upholding media freedoms and enabling effective fact-checking – something AI products could themselves support – is therefore essential. That involves legislation, but also commitments from big tech to uphold codes of practice on disinformation, suggested Vera Jourová, Vice-President for Values and Transparency at the European Commission.
Jourová also believed “awareness raising – lowering of the absorption capacity in the society to believe the lies” was an important element in combating mis- and disinformation.
The Forum's Global Coalition for Digital Safety is working to accelerate a whole-of-society approach to combating disinformation and tackling harmful content online. The initiative aims to facilitate the exchange of best practices for new online safety regulation and drive collaboration on programs to enhance digital media literacy. A recent Forum publication also looked at Principles for the Future of Responsible Media in the Era of AI, detailing action-oriented recommendations, including responsible AI development, open innovation and international collaboration.
As this year’s sessions indicated, AI could be a force for good, a nefarious force and a potential multiplier of bad actors. However, some cautioned against blaming technology for the threats we currently see, including Alexander Soros, chair of the Board of Directors at Open Society Foundations:
“You know, I think it is a cop-out to say that technology has been a threat to humanity since the invention of the printing press, which is much more disruptive than anything AI is going to do in regards to human civilization.”