Emerging Technologies

EU, US and UK sign landmark AI safety treaty, and other digital technology stories you need to know

US, EU and UK representatives have signed a landmark AI safety treaty

Representatives from around the world met in Vilnius to sign the Framework Convention on artificial intelligence, and human rights, democracy, and the rule of law. Image: Council of Europe

Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum
  • This round-up brings you key digital technology stories from the past fortnight.
  • Top digital technology stories: EU, US, and UK forge historic AI safety agreement; Telegram apologizes to South Korea over deepfake porn scandal; Chatbots could weaken beliefs in conspiracy theories.

1. EU, US and UK reach milestone AI safety pact

The European Union, United States, United Kingdom and several other countries have signed a landmark AI safety treaty – the first legally binding international agreement that intends to align artificial intelligence (AI) systems with democratic values.

The Framework Convention on artificial intelligence and human rights, democracy, and the rule of law was drawn up by the Council of Europe, an international human rights organization, and signed in Vilnius, Lithuania.


Many nations are working on legislation to mitigate the risks of AI technology. In the World Economic Forum's Global Risks Report 2024, 'adverse outcomes of AI technologies' was identified as a top-ten risk over the next decade, as were misinformation and disinformation.

Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel also signed the treaty, which provides a legal framework for AI systems intended to balance technological progress against risks to human rights and democracy.

Council of Europe Secretary General Marija Pejčinović Burić said in a statement: “We must ensure that the rise of AI upholds our standards, rather than undermining them. The Framework Convention is designed to ensure just that.

“It is a strong and balanced text – the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives.”

Any country can join if it adheres to the framework, which was developed with input from Argentina, Australia, Canada, Japan, Mexico and Uruguay. It takes effect three months after ratification by five signatories, including at least three Council of Europe member states.

Adverse outcomes of AI technologies is a top risk in the World Economic Forum Global Risks Report 2024. Image: World Economic Forum Global Risks Report 2024

2. California considers divisive AI regulation bill

California is considering a bill that would regulate AI by holding developers liable for harm caused by the technology. However, it has proved divisive within the technology industry.

Critics of SB 1047 claim it would stifle innovation while proponents say it will make the technology safer.

The bill currently sits with Governor Gavin Newsom, who must either sign or veto it. However, he revealed in mid-September that he believes the bill has flaws. Speaking with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference, he said: “We’ve been working over the last couple years to come up with some rational regulation that supports risk taking, but not recklessness.

“That’s challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open source community.”

Those who have spoken out in favour of the bill include X owner Elon Musk and Landon Klein, director of US policy at the Future of Life Institute. Nancy Pelosi and tech advocacy group Chamber of Progress have urged Newsom to veto it, with the latter calling the bill "fundamentally flawed and mistargeted".


3. News in brief: Digital technology stories from around the world

Google is rolling out the ability to generate images of people in its Gemini AI image creation model. This follows a pause in February after the technology produced inaccurate depictions of historical events.

Chatbots could weaken people’s beliefs in conspiracy theories, Nature reports. A study recently published in Science found that a few minutes of interaction with a chatbot that gave detailed responses led to lasting shifts in participants' thinking.

Growth in data centres means they are likely to produce around 2.5 billion metric tons of carbon dioxide-equivalent emissions worldwide by the end of the decade. Research from Morgan Stanley found that the industry’s greenhouse gas emissions will equate to around 40% of those of the entire US.

The UK government is to classify data centres as critical national infrastructure, granting them additional support during major incidents such as cyber-attacks, IT outages and extreme weather. This classification will align data centres with the emergency services, finance, healthcare, energy and water sectors, with the aim of minimizing disruption.

BlackRock and Microsoft are planning to launch a $30 billion investment fund that aims to support the rising energy demands of AI technology. The power needed by AI innovations far outstrips that of previous technology, placing strains on current infrastructure.

4. More on digital technology on Agenda

While some deepfake videos are easy to spot, others can be much harder to detect. Read this article to discover some of the tell-tale signs that can help you decipher what’s real and what isn’t.

As investors pour money into companies developing or deploying AI, what steps should they take to ensure it's safe and responsible? Radio Davos spoke to the managing director at a billion-dollar investment fund, and co-author of a “playbook for investors”, to find out the questions they should ask.

You can also read the full report here. In it, you can discover the business case for responsible AI, its potential for mitigating risk and how organizations can clear the hurdles that may stand in the way.

© 2024 World Economic Forum