
Protection vs. censorship: how to strike the right balance in curbing online disinformation

Panellists discuss online disinformation at the Truth vs. Myth in Elections session at this year's Annual Meeting in Davos.

Image: World Economic Forum/Faruk Pinjo

  • Panellists at two sessions at this year's Annual Meeting discussed the balance that must be struck in addressing online disinformation.
  • The complexities of the task – stemming disinformation while preserving free expression – make consensus hard to achieve.
  • Five priorities emerged for how to move forward proactively.

The days when conspiracy theories, disinformation and fake news were confined to the internet’s fringes are long gone. They are now part of mainstream global debate, influencing elections, disrupting markets and challenging the boundaries of free speech. As the World Economic Forum’s Global Risks Report 2025 revealed, disinformation remains the top short-term risk for the second consecutive year. It has the power to erode trust, deepen divisions, and weaken governance and global cooperation. With regulation still lagging behind the fast pace of technological change, the risk is not only persistent but growing.

At the World Economic Forum’s Annual Meeting this year, two key sessions – Truth vs. Myth in Elections and To Moderate or Not to Moderate? – explored the growing challenges of disinformation, the dilemmas of content moderation, and the fragile balance between democratic integrity, human rights and technological innovation. While panellists agreed that the current trajectory is unsustainable, the question is whether consensus can be forged amid the complexities faced by digital platforms: balancing their responsibility to curb harmful content without undermining free expression or further polarizing societies.

Disinformation: a threat to democracy

In 2024, half of the global population lived in countries holding elections, with false narratives and misleading information shaping political landscapes worldwide and deepening geopolitical fragmentation.

The case of Moldova is illustrative. Its Prime Minister, Dorin Recean, described how foreign disinformation networks interfered in his country’s elections by disseminating AI-generated deepfakes showing children in military uniforms beside EU flags, falsely claiming that EU integration would lead to war. Disinformation is not just about lies. It is about fear, emotions and division. “They are trying to associate EU with war and say, ‘Look what happens in Ukraine. If you want the EU, the same will happen in Moldova,’” Recean warned, adding that the equivalent of 2.5% of Moldova’s GDP had been spent on disinformation attacks against the country.

A structured approach to online content governance. Image: WEF

Governance and moderation: offline rules, online realities

As governments come to grips with disinformation’s impact, the European Union’s Digital Services Act (DSA) is emerging as the benchmark in discussions on accountability. “What is illegal offline must be illegal online,” declared Jean-Noël Barrot, France’s Minister for Europe and Foreign Affairs. “Transparency in algorithms is non-negotiable,” he argued. “In France, racist or homophobic remarks are against the law and can be judged by a court, so platforms should follow the same rules online. When it comes to misinformation, the DSA requires platforms to take steps to reduce the systemic risks that go with disinformation. This is becoming increasingly essential, especially given how younger generations engage with information.”

Yet Sasha Havlicek of the Institute for Strategic Dialogue cautioned against oversimplification and warned that bad actors exploit openness on digital platforms, using deceptive tactics like botnets and fake accounts to distort public discourse in ways invisible to the average user. “The algorithmic infrastructure amplifies whatever is most likely to engage you. The business model of the attention economy disproportionately amplifies the most sensationalist, the most extreme rather than the moderate,” she said.

Moderating billions of users is no easy task. Content moderation remains technically complex and politically contentious. In 2024 alone, the top platforms moderated 1.6 billion pieces of content, according to Michael McGrath, the European Commissioner for Democracy, Justice and the Rule of Law. “Freedom of expression must be protected, and there is no question about that. It is non-negotiable. But there are guardrails to strike the right balance,” he added.

Moderation at scale comes with pitfalls. Tirana Hassan, former Executive Director of Human Rights Watch, stressed that moderation policies must be grounded in human rights principles to prevent misuse. “Moderation should protect, not silence. Over-regulation risks enabling authoritarian control,” she said. “Investing in good content moderation, both digital and human, can mean that you are meeting legal obligations. It is not in conflict,” she added. Hassan also stressed the need to protect vulnerable groups online, noting that in some cases they were baited and outed on platforms, risking physical harm and even displacement.

Volker Türk, the United Nations High Commissioner for Human Rights, cautioned that “a lot of these products end up in situations of crisis or conflict, where there is a heightened responsibility to ensure they are not used to dehumanize or fuel more violence”.


AI: amplifier or solution?

Meanwhile, tech firms face a paradox. Generative AI has amplified the spread of false and misleading content, but it also holds promise for identifying and filtering it. Some of these challenges stem from how platforms are structured and from algorithms that amplify content in ways with unintended societal consequences.

According to Havlicek, engagement-first algorithms prioritize controversy over substance. “Because of the attention economy, it is not a free speech environment. It is a curated one, where tech giants ultimately decide what you see based on the data they collect and the money they are going to make,” she said.

And it is not just institutions at fault. According to the 2025 Edelman Trust Barometer, 40% of respondents globally approve of hostile activism – including attacking people online or intentionally spreading disinformation – as a legitimate tool to drive change. The sentiment is especially prevalent among respondents aged 18-34, 53% of whom approve of intentionally spreading disinformation for a cause they support. This alarming trend reflects a deepening crisis of trust, with individuals embracing tactics once considered unethical.

Five priorities for what comes next

The two sessions in Davos offered not just an analysis of disinformation’s impact and the dilemmas of content moderation, but also direction. Five priorities emerged for addressing the challenge:

  • Transparency as a tool: Platforms should allow scrutiny of how content is promoted. “Where authoritarian regimes respond with censorship, we must respond through sharp transparency. That means getting data access for independent research, so we can evaluate the impact of these systems on public discourse,” said Havlicek.
  • Media literacy: Education and critical thinking are the best inoculations against disinformation. “Educating young people to distinguish what is reliable from what is not is helpful, and necessary. But it must be done without bias. It is about teaching people how to think for themselves,” said Almar Latour, CEO of The Wall Street Journal.
  • Multistakeholder governance: Governments, tech and civil society should co-create rules that protect both free speech and safety. Moderation should protect, not suppress. But without transparency, we risk trading disinformation for another kind of distortion: one where only a few decide what is visible. “We need open public spaces in which we can have discussions based on facts, based on evidence and based on science to find the policy solutions that the world so desperately needs,” added Türk.
  • Ethical tech design: Platforms should prioritize safety, as nearly 50% of children encounter harmful content accidentally. “We are building a positive corner of the internet, private only for under-16-year-old users, and that was really important to us because we wanted to ensure that our most vulnerable users were safe,” said Wanji Walcott, Pinterest’s Chief Legal and Business Affairs Officer. Iain Drennan of WeProtect Global Alliance echoed the urgency of safeguarding minors, calling for “safety by design for well-designed regulations”.
  • Balanced content moderation: Effective moderation requires a nuanced approach that encourages free expression while preventing harm. “Freedom of expression is not absolute. There are certain restrictions that are required to protect the most vulnerable. Moderation and the responsibility of the digital platforms is also important to ensure that the business model, the policies, the technology are in line with international human rights,” said Hassan.

Truth will not moderate itself

Journalists and civil society organizations need to play a crucial role in countering disinformation through investigative reporting, fact-checking, and public education initiatives. Latour underscored the importance of maintaining editorial independence and transparency to rebuild public trust in the media. “The suggestion that disinformation didn’t play a large role in the past year [vis à vis elections] is grotesque. That is misinformation,” he added.


In an era dominated by algorithmic amplification and synthetic content, countering disinformation demands more than just technological solutions or regulatory frameworks. It requires shared responsibility rooted in critical thinking, transparency and resilience.

Open debate is vital to democracy. Rebuilding trust is not about forcing consensus, and the fight for truth is not about “winning”. It is about fostering environments where diverse perspectives can coexist without hostility.

