This underestimated tool could make AI innovation safer
AI is powering major breakthroughs in industries, including the prediction of protein structures. Image: Wikimedia/AtikaAtikawa
- Artificial intelligence (AI) is powering impressive advances in many industries. Ensuring that AI systems are deployed responsibly is an urgent challenge.
- Certification programmes for AI systems should be a critical part of any regulatory approach.
- They should be developed by subject matter experts, delivered by independent third parties and have auditable trails.
Ensuring that artificial intelligence (AI) systems are deployed responsibly is an urgent challenge. Private investment in AI doubled between 2020 and 2021 and the AI market is expected to expand at a compound annual growth rate of 38.1% until 2030. AI is rapidly powering impressive advances in many industries. For example, DeepMind’s groundbreaking publication of AI predictions of the structures of nearly every known protein is likely to lead to major breakthroughs in drug discovery, pandemic response and agricultural innovation.
However, AI systems can be difficult to interpret and can lead to unpredictable outcomes. AI adoption also has the potential to exacerbate existing power disparities. As the pace of AI adoption increases, lawmakers are working hard to put appropriate safeguards in place. This requires a balance between promoting innovation and reducing harm, an understanding of AI’s effects in countless contexts and a long-term vision to address the “pacing problem” of AI as it advances faster than society’s ability to regulate its impacts.
Certification programmes for AI systems
Certification programmes for AI systems should be a critical part of any regulatory approach to AI as they can help achieve all of these goals. To be authoritative, they should be developed by subject matter experts, delivered by independent third parties and have auditable trails.
Responsible AI deployment means different things in different contexts. A chatbot for health insurance enrolment is very different from a self-driving car. By having an AI system certified, an organization can demonstrate to consumers, business partners and regulators that the system complies with applicable regulations, conforms to appropriate standards and meets relevant responsible AI quality and testing requirements.
In other fields, certification programmes and other “soft law” mechanisms have successfully supplemented legislation and helped improve transnational standards. Fairtrade certification for coffee assures buyers that coffee bean farmers were paid an appropriate price and conformed to certain social and environmental standards in their farming practices. Dolphin-safe certification for tuna signals compliance with laws and practices devised to prevent the unintended killing of dolphins while fishing for tuna. Leadership in Energy and Environmental Design (LEED) certification provides Platinum, Gold, Silver, and Certified ratings as proof of green building design, construction, and operation.
AI certification programmes could similarly supplement legislation and improve transnational standards for many industries, while also addressing the added complexities of AI systems.
By certifying their automated lending systems, financial institutions could signal to consumers and regulators that the systems are reliable, fair, auditable and able to explain their operations and decisions to loan applicants in plain language. Companies purchasing automated hiring systems may choose to buy only certified systems, ensuring ongoing bias monitoring, a reasonable accommodation process, compliance with laws and meaningful avenues of notification and recourse. Organizations developing applications that use smartphone cameras to automatically screen for skin disease may use certification to show consumers that the solutions are fair, reliable and aligned with emerging best practices.
Unlocking the societal benefits of AI certification programmes
Given the scale and pace of AI adoption, many of the AI systems being deployed around the world could prove ineffective, unsafe or biased. Without effective and durable regulatory mechanisms – including soft law mechanisms – people, businesses and regulators will not easily be able to distinguish such systems from trustworthy AI systems.
It is time for civil society organizations, companies and lawmakers to consider the potential of responsible AI certification programmes. Civil society organizations can show leadership by developing certification programmes that consider the dynamic nature of AI and by ensuring that they incorporate the interests of marginalized individuals. Corporate leaders are well positioned to provide expertise, access and resources to further the development of independent certification programmes since they are familiar with implementation gaps and best practices for responsible AI adoption.
Lawmakers in the European Union, US and Canada are poised to enact broad legal requirements for AI systems, drawing upon responsible AI principles articulated by the international community. By incorporating certification programmes and other soft law mechanisms as complements to legal requirements for AI systems, lawmakers can ensure that their legislative aims are reflected in requirements for specific AI use cases in different industries. Lawmakers can fund pilots of soft law AI instruments in different industries, think carefully about developing markets for AI certification, accreditation and auditing, and direct government departments to lead by example by requiring or developing certification programmes for public-sector procurement.
There are signs of progress on the regulatory front. An early draft of the EU's proposed AI Act acknowledges a role for soft law mechanisms: according to the draft, aligning AI systems with standards developed by European standards organizations could help demonstrate compliance with the Act. In December 2021, the UK's Centre for Data Ethics and Innovation published a roadmap for effective assurance and certification markets for AI systems.
AI’s remarkable and rapidly increasing transformation of our society calls for the adoption of a flexible and durable regulatory response. Certification programmes for AI systems should be a vital element of this response.