
Why protecting an open AI ecosystem should be a cornerstone of technology regulation


Technology regulation should be supported by the development of an open AI ecosystem to ensure accountability by businesses and developers. Image: Getty Images/metamorworks

Alessandro Curioni
Vice President Europe and Africa and Director, IBM Research Europe
  • As artificial intelligence (AI) becomes more embedded in our lives, technology regulation will be key to safeguarding people from various ethical and social risks.
  • Regulating a fast-paced industry like technology can be difficult – innovation can quickly outpace new rules.
  • Creating an open AI ecosystem would harness the innovative talent of the global tech community while also ensuring that experts can audit development and that businesses and developers remain accountable for their innovations.

Artificial intelligence (AI) keeps getting more sophisticated, with the recent popularity of large language models (LLMs) signalling a paradigm shift in its development.

LLMs are a form of generative AI that creates novel responses to instructions in natural language, code or any other type of data. They can also serve as the basis for many smaller, specialized models tuned to specific tasks.
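To make this concrete, the sketch below shows how a single general-purpose language model can be reused for different narrow tasks through prompting alone. It is a minimal illustration only: the library (Hugging Face transformers) and the small open model (gpt2) are assumptions chosen for the example, not tools discussed in this article.

```python
# Minimal sketch: one general-purpose language model reused for
# different specialized tasks purely through prompting.
# The library (Hugging Face transformers) and the model ("gpt2")
# are illustrative assumptions, not part of the original article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

tasks = {
    "summarize": "Summarize in one sentence: Large language models generate text from prompts.",
    "classify": "Label the sentiment (positive or negative): I love this new tool.",
}

for name, prompt in tasks.items():
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(f"{name} -> {result[0]['generated_text']}")
```

In practice, organizations often go further and fine-tune such a base model on task-specific data, which is how one large model becomes the foundation for many smaller, specialized ones.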

But the buzz around LLMs has also unleashed a tide of concerns around the development and deployment of AI. These concerns include a wide variety of ethical and social risks, such as LLMs being abused by bad actors.

The world has been trying to address the risks of AI for some time now – at the company, national and global levels – to ensure that it is secure, safe, trustworthy and fair. This goal underpins the recently launched AI Governance Alliance of the World Economic Forum and the EU’s European AI Alliance, as well as nation-specific initiatives like the Swiss government’s and ETH Zurich’s Pilot Gen AI Redteaming Network.

Now, the EU’s AI Act has been largely finalized. It should soon become the world’s first comprehensive set of AI laws to help protect our safety and fundamental rights as this new technology moves closer to becoming embedded in our everyday lives.

Technology regulation is important, but it’s also not enough. Today’s AI field is evolving so quickly that regulation could soon create limitations. So, we need a holistic approach to regulating and governing AI. This should combine technology regulation with efforts to embrace an open and transparent AI ecosystem. This would help developers and researchers discuss and mitigate AI risks early using dynamic governance.

What is dynamic governance?

LLMs are continuously influenced by new data, and specific user prompts can produce undesirable or unpredictable outcomes. It is therefore imperative for companies to constantly monitor their models, the data being ingested and the outcomes to ensure everything works as intended.

Specific governance toolkits – systems and processes that help companies measure their governance performance, such as IBM's watsonx.governance – can help with this by allowing businesses to monitor and improve outcomes on an ongoing basis. This enables companies to manage risks arising from the quality of the training data, improve transparency around how the AI makes decisions and ensure compliance with technology regulation – even as rules change over time.
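As a rough illustration of what such ongoing monitoring can look like in practice, the sketch below logs every prompt and response to an audit trail and flags outputs that trip a simple policy check. The policy terms, file name and checks are placeholders invented for the example; they do not represent any vendor's actual toolkit.

```python
# Illustrative sketch of dynamic governance as continuous monitoring:
# every prompt/response pair is logged to an audit trail, and responses
# that trip a simple policy check are flagged for human review.
# The policy terms and file path are placeholders for this example.
import datetime
import json

POLICY_TERMS = {"credit card number", "social security number"}

def violates_policy(text: str) -> bool:
    """Return True if the response contains any monitored policy term."""
    lowered = text.lower()
    return any(term in lowered for term in POLICY_TERMS)

def log_interaction(prompt: str, response: str, path: str = "audit_log.jsonl") -> None:
    """Append an auditable record of a model interaction to a JSONL log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "flagged": violates_policy(response),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: wrap every model call so outcomes remain auditable over time.
log_interaction("What is the capital of France?", "The capital of France is Paris.")
```

Reviewing flagged records over time, and retraining or re-prompting when outcomes drift, is what makes this kind of governance dynamic rather than a one-off compliance check.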


Such dynamic governance should be used regardless of whether the AI is developed in a closed, proprietary environment or an open AI ecosystem. Of course, some companies opt to keep their AI technology behind closed doors, but embracing open and transparent AI can benefit society, spur innovation and help ensure AI safety. An open approach to innovation tends to democratize access to the most foundational and broadly applicable advances. Openness will also lead to more scrutiny of new tools by the tech community, enhancing safety and security.

In short, openness and transparency harness the innovative talent of the global community while ensuring accountability. We’ve already seen this with the Linux Foundation, a nonprofit founded in 2007 to foster open-source technology innovation. Openness allows more scrutiny by the tech community, with auditing widely accessible to experts around the world. Researchers and developers have access to the data needed to address bugs and deal with risks, so they can react faster to technical and outcome-related problems.

Creating an open AI community

This is why IBM and Meta recently formed the AI Alliance, a collaboration that aims to foster an open AI community. We should be creating LLMs from the bottom up, in a transparent and responsible way.

While initiatives such as the AI Alliance aim to bring global talent together to democratize the development, deployment and management of AI technology, I'm not saying that all AI should be open. And crucially, open does not mean ungoverned. Governance is equally important, whether we're talking about open or proprietary AI.

We are still in the early days of understanding how AI will transform our society. But we already know that regulating it is crucial. Dynamic governance technology and an open AI ecosystem are vital steps towards regulating it properly.
