AI business transformation is here. Now we need coherent AI governance
The US and the EU are independently working to regulate AI – perhaps they should be working together.
- The rapid evolution of AI technologies has the potential to transform industries and drive meaningful economic change.
- To realize the full benefits, efforts to regulate the technology should be globally coordinated.
- Technological advancements are key to confronting global challenges – this requires innovation and guardrails. Outcomes-based policy approaches have the best potential to allow both.
Over the past year, artificial intelligence (AI) has been recognized as a critical component of the future growth of many businesses, with the potential to transform entire industries and produce socially and economically meaningful technological change. While some have seen the value of AI for decades, a growing number of companies are now looking to leverage new generative AI tools to grow their businesses.
AI use cases are moving beyond just answering customer service questions to include, for example, helping radiologists spot cancer, identifying ways that companies and consumers can reduce greenhouse gas emissions, and providing customer insights to small businesses to help build more effective products. As these new technologies evolve, it is important to implement the kinds of guardrails that keep users safe, while supporting innovation that drives businesses and economies forward.
Any period of change can bring with it both opportunities and challenges. As the first network to use AI to spot fraud and protect payments, Visa has seen the value and impact of these technologies since 1993. With its ability to analyze transaction flows and flag suspicious behaviour, AI is a powerful tool for identifying unusual patterns and preventing fraud in payments. And it is hugely effective; our AI and predictive machine learning capabilities helped prevent an estimated $27 billion in fraud-related losses in 2022 alone.
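The pattern-spotting described above can be illustrated with a deliberately simplified sketch. This is not Visa's method – production fraud models use rich transaction features and learned models at network scale – but a minimal, hypothetical example of the underlying idea: flag activity that deviates sharply from a cardholder's historical pattern, here using a simple z-score on transaction amounts.

```python
# Illustrative only: flag transactions whose amount deviates sharply
# from a customer's historical spending pattern. Real payment-network
# systems use many more features and trained machine-learning models.
from statistics import mean, stdev

def flag_suspicious(history, new_amounts, threshold=3.0):
    """Return the new amounts lying more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if abs(a - mu) / sigma > threshold]

# Eight routine purchases, then two new charges to screen.
history = [12.50, 40.00, 25.75, 33.10, 18.99, 27.40, 22.00, 35.60]
print(flag_suspicious(history, [29.99, 950.00]))  # the $950 charge stands out
```

In practice the "unusual pattern" signal comes from far more than amounts – merchant category, geography, velocity, device – but the principle of scoring each transaction against an expected baseline is the same.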
Of course, fraudsters are savvy at using new technologies to their advantage, too. They have begun leveraging large language models (LLMs) and other types of generative AI to draft phishing emails that are increasingly hard to spot, and using voice cloning tools and deepfakes to gain access to bank accounts and commit fraud. Banks and payments networks will need to continuously innovate to stay two steps ahead.
Achieving AI regulatory alignment
Already, policy-makers in several countries have started to develop their own AI governance frameworks: In the United States, for instance, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework in January 2023, and the White House issued an Executive Order on the safe, secure and trustworthy development and use of AI. The EU is in the process of finalizing the AI Act and also introduced the AI Pact, a voluntary industry commitment in anticipation of the AI Act. The government of Singapore has been working to enhance AI governance and recently launched a testing toolkit known as AI Verify.
While these policy-makers – and others – are focusing on some of the same key policy drivers, such as safety and security, governments have so far developed their frameworks somewhat independently of one another. Given the highly interconnected world we live in, uncoordinated regulatory efforts can result in duplicative or conflicting requirements, and, consequently, suboptimal outcomes for businesses and consumers.
Data localization requirements, for instance, can harm the accuracy and performance of AI systems. The fraud detection tools mentioned earlier rely on free-flowing global data. Without consistent mechanisms to enable secure, trusted data flows across jurisdictions, sharing information and deploying cutting-edge technologies to protect consumers and merchants around the world becomes challenging. In these ways, even data localization initiatives intended to protect consumer data can ultimately result in consumers, companies and governments bearing the cost of decreased productivity, interconnectedness, technological progress and growth.
To stay ahead of the fraudsters currently using generative AI tools and continue to explore the benefits of AI, many of which we have not even uncovered to date, the private sector will need to keep innovating. In the financial services industry, banks and financial technology companies are partnering to unlock the benefits of AI to streamline business processes and deliver superior value to end users. Beyond financial services, AI can also help unlock new capabilities to address other global challenges, such as the effects of climate change. New AI applications are rapidly emerging in areas such as energy distribution, agriculture, and weather and disaster prediction.
We are optimistic about these opportunities. However, regulatory fragmentation that produces conflicting or inconsistent obligations across jurisdictions threatens to stifle the innovation on which we will all depend. To help address this, some international forums and organizations have started to consider these issues. For example, the G7 Hiroshima Process, UK AI Safety Summit, and WEF AI Governance Summit provide opportunities to advance the dialogue on AI towards ensuring that governance frameworks align across borders.
Outcomes vs. processes
As use cases for AI rapidly expand, how do we ensure that consumers and businesses – all of us, really – get the most out of them? By focusing on use-case-specific, outcomes-focused AI standards, risk management frameworks and risk-based policies. Good outcomes, as a guiding principle, should be the benchmark for AI policy approaches and are more likely than prescriptive, process-based frameworks to mitigate potential risks while allowing for innovation and agile development.
Developers and deployers of AI technologies are already subject to regulation across different areas, including consumer protection, data privacy, intellectual property, competition law, product and safety liability and sector-specific regulations. Before policy-makers seek to enact new laws or regulations, a gap analysis of existing laws and regulations should be used to determine what can be applied or adapted. A holistic review can help ensure that new frameworks complement and close gaps in existing laws, rather than create conflicting or duplicative requirements.
Because AI technologies have the potential to touch every industry, every community, every business and every person, policy approaches should be designed to support good outcomes and beneficial AI advancement for interconnected societies and economies. At the end of the day, international cooperation on AI standards and policies can help all of us operate flexibly and efficiently, keeping pace with evolving technology and industry best practices.