Responsible AI governance can be achieved through multistakeholder collaboration
A multistakeholder approach to AI governance is most appropriate for mitigating the technology's risks and capitalizing on its vast possibilities.
- Artificial Intelligence is advancing rapidly and regulatory regimes are struggling to keep up.
- The solution is a collaborative, multistakeholder approach towards addressing the technology's risks and capitalizing on its opportunities.
- To this end, the World Economic Forum’s AI Governance Alliance is convening private and public sector actors to generate concrete action and guide responsible development.
We gather at a critical point in history. Human progress is increasingly interwoven with the influence of algorithms and networks, shaping our existence. The ascent of Artificial Intelligence (AI) is not just a technological leap, but a seismic shift, replete with challenges and opportunities in equal measure.
Because of this, collective oversight of advanced AI is not merely advantageous — it is imperative. Nations are swiftly moving to regulate AI, aiming to both mitigate risks and harness its potential for societal and economic transformation. The global landscape finds itself at a pivotal juncture, balancing rapid technological strides with the pressing need for governance to steer this evolving technology.
Multistakeholder collaboration to guide AI's development
The last few months have been filled with crucial developments in the international cooperation agenda around AI. The UK held its first AI Safety Summit, which culminated in 29 countries, including the EU, the US and China, signing the Bletchley Declaration and agreeing to work together to ensure AI is designed and deployed responsibly. The EU AI Act is in its final negotiation stages and, when adopted, will establish the most comprehensive framework applicable to the development and use of AI. G7 leaders have agreed on International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. The UN announced its High-Level Advisory Body on Artificial Intelligence. And US President Joe Biden released his Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security.
Yet declarations alone, while necessary to raise awareness, are not sufficient. The real work lies in the action that follows. And it is with this intent that the World Economic Forum launched the AI Governance Alliance.
The purpose of the Alliance, which comprises over 200 influential voices from industry, academia, civil society and government, is not only to bring stakeholders together but to further a mutual commitment to act on some of the most pressing issues in AI governance. That includes finding a shared way forward on guardrails for increasingly powerful frontier AI systems, as well as advancing knowledge at the frontier of applied generative AI across all sectors and industries. Driven by its members, the goal of the Alliance is to devise, co-create and help a broad range of decision-makers around the world enact more adaptive and resilient forms of AI governance.
From discussion to action
Guided by the World Economic Forum's multistakeholder approach, and building on the success of the Responsible AI Leadership Summit, the publication of the Presidio Recommendations and the launch of the AI Governance Alliance, the World Economic Forum's Centre for the Fourth Industrial Revolution (C4IR) is hosting the AI Governance Summit. There, over 180 leaders from the field of AI will gather for an impartial and inclusive summit of knowledge exchange, strategic discussion and, most importantly, the devising of practical, actionable plans.
The Summit programme will offer outlook plenaries on crucial AI topics, global dialogues on the immediate applications of generative AI, strategy sessions to propel initiatives and drive impact, and workshops to further existing workstreams of the Alliance.
The insights and proposals generated in these sessions will be pivotal not only in setting a precedent in critical debates and generating real momentum for responsible AI development, but also in spurring world leaders and organizations to action. These key outcomes will shape the future efforts of the Forum's AI Governance Alliance, significantly advancing the conversation ahead of the 2024 Annual Meeting in Davos, where AI will be a focal point.
The time to act is now
In an ever-evolving AI landscape, the need to drive responsible AI development has never been more pressing. The call to action is clear: the time has come for us, through global collaboration and innovative, practical steps, to collectively guide AI onto a trajectory that fosters ethical and inclusive advances and societal well-being. The Forum and the AI Governance Alliance have heard this call and are committed to working hand-in-hand with the international community to respond to this rapidly advancing technology, mitigating the shared challenges it brings and capitalizing on its opportunities.