This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Sara Hooker, VP of Research at Cohere and leader of Cohere For AI: It's very rare to have a technology which overnight is used by millions of people. When that happens, you have both the excitement of how it's being used in ways that are beneficial and unexpected, but also the brittleness of technology that is used everywhere all at once.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. This week, as 2023 draws to a close, what have we learned this year about AI, and how human societies might govern it?
Sabastian Niles, President & Chief Legal Officer, Salesforce: We need sound law and sound public policy to undergird and protect the development of these types of technologies in ways that promote responsible innovation.
Robin Pomeroy: The World Economic Forum held not one but two summits on artificial intelligence this year and launched a multi-stakeholder group to look at how policies can catch up with the tech. We hear from some of the experts.
Andrew Ng, Founder, Coursera and DeepLearning.AI: My biggest fear for AI right now is stifling regulation putting a stop to this wonderful progress that otherwise would make so many people in the world have healthier, longer, more fulfilling lives.
Robin Pomeroy: The Forum’s head of AI looks ahead to January’s Davos meeting where AI will be on everyone’s lips.
Robin Pomeroy: Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts.
I’m Robin Pomeroy at the World Economic Forum, and with this look at AI governance as we head for Davos 2024...
Sabastian Niles: We need to have systems that are even more multilateral, that are even more multi-stakeholder.
Robin Pomeroy: This is Radio Davos.
Robin Pomeroy: Welcome to Radio Davos. And on this episode, which we're recording just ahead of the Annual Meeting 2024 in Davos, we're talking about artificial intelligence, a subject that we've dealt with a lot on Radio Davos over the last year. And I'm joined once again by Cathy Li, who is the World Economic Forum's head of AI, Data and Metaverse. Hi, Cathy. How are you?
Cathy Li: Hi, Robin. Thanks for having me here.
Robin Pomeroy: Thanks for joining us. Now, you and I were at not one but two AI governance summits that the World Economic Forum hosted this year. You're going to tell us today a bit about that and where things are going next. When I was there, I was doing interviews with some of the experts, and during this episode I'm going to play just a flavour of some of those interviews.
Let's hear first, and I really like this one, from Sara Hooker. She leads Cohere for AI, which is a research lab looking at artificial intelligence. She sets out and reminds us why AI is so important and why the governance of AI is so important.
Sara Hooker: It's very rare to have a technology which overnight is used by millions of people. And so when that happens, you have both the excitement of how it's being used in ways that are beneficial and unexpected, but also the brittleness of technology that is used everywhere all at once.
And so forums like this are critical for bringing together key stakeholders to think about how do we safely deploy, how do we make sure our models are used in responsible ways.
And that's particularly important now where we've had so much momentum in the last year. But there's still a lot of ambiguity and a big technical gap in how stakeholders come together and discuss these problems.
I wouldn't minimise anxiety, because I think it's natural. I think with every big technological change we've had anxiety, and some of it has a lot of merit: this technology will change how we work, it will change where we spend time. So I think it's important that we have realistic conversations about how we build educational programmes and figure out support for how users use this technology.
For me, the perhaps more sensational notion of like the existential threat, it's less interesting to me because I think there's actually a brittleness to these models today that we need to work on. There's parts of these models that fail right now that feel like a reasonable place to start.
Robin Pomeroy: Sara Hooker from Cohere for AI.
So Cathy Li, why did the World Economic Forum decide to host a second AI Governance Summit? There was one in the spring and another in November. Why two summits?
Cathy Li: The organisation of the AI Governance Summit is a response to the global alignment of nations and industries, reflecting a commitment to ensuring the ethical and responsible development of artificial intelligence.
This collective effort is evident in recent milestones. In October, the US government issued an Executive Order on AI, directing action to establish new standards for AI safety and security, protect consumers and workers, and promote innovation and competition. Additionally, the G7 produced an international code of conduct for organisations developing advanced AI systems in the same month, setting essential baselines for frontier AI companies. And more recently, the UK government also hosted an AI Safety Summit, leading to the Bletchley Declaration, a broad statement that calls for multi-stakeholder action to harness the benefits of AI while addressing its risks. The declaration, signed by 28 countries, emphasises the importance of collaboration, including with China and developing nations.
We witnessed a gathering of over 200 influential leaders from the Forum's own AI Governance Alliance, in short AIGA, a multi-stakeholder alliance addressing the design, development and deployment of generative AI which we began in June, and from the broader AI ecosystem.
The focus was on exchanging valuable insights and collaboratively establishing concrete action plans to advance the responsible development and deployment of generative AI on a global scale.
As you mentioned, this event marked a significant milestone following the inaugural Responsible AI Leadership Summit in April, which resulted in the publication of the Presidio Recommendations on responsible generative AI and the establishment of the AI Governance Alliance. Participants from various sectors highlighted the vast opportunities of AI integration while emphasising the critical need for responsible development aligned with global ethical standards, addressing topics such as adaptive regulatory frameworks and harmonised standards.
Robin Pomeroy: So you went through a list there of other meetings that have happened, and other moves by governments and regions - the EU, the UK, the US, China and others around the world - to get to grips with this rapidly changing technology. But I guess what the World Economic Forum does, that maybe some of those other places don't do, is bring together these various stakeholders: the regulators, governments, the companies and academia.
Let's hear another clip from an interview I did at the AI Governance Summit in November. This is actually our host for the first day of that two-day summit, which was in the Salesforce Tower in San Francisco. This is Sabastian Niles, who's President and Chief Legal Officer at Salesforce.
Sabastian Niles: I do think that the transformation opportunity that AI brings, really for all of society as well as, of course, for governments, for business, and just for communities and human beings, can only be achieved if we have, one, strong public and private sector collaboration, but then, much more broadly, bring a whole swath of diverse and multi-stakeholder voices into the conversation. And that's what I've been seeing already from the AI Governance Alliance.
If we're able to use AI to essentially raise the floor for kind of 'what's the minimum acceptable level' of whatever sort of product or solution or impact, maybe reducing mistakes that can occur.
Obviously, look at health care. Not everyone always has the best access to sort of X, Y, or Z. But I think AI has this potential: if we do it right, and if we lead with trust and if we lead with inclusion, and think about equality and sustainability and all these items around innovation, and really embrace stakeholder success as we look at AI, I think we can raise the floor and improve business outcomes, human outcomes, societal outcomes, civil society outcomes, but also achieve the really powerful moonshot goals too.
We need sound law and sound public policy to undergird and protect the development of these types of technologies in ways that promote responsible innovation.
We need, I think it's right, new governance frameworks that are agile, that are nimble, right? Just like companies have to have cultures that are deeply innovative and able to learn fast, respond fast, adjust fast, we need to have systems that are even more multilateral, that are even more multi-stakeholder.
Robin Pomeroy: That was Sabastian Niles, who's President and Chief Legal Officer at Salesforce.
Cathy, what were some of the most important themes and key takeaways that emerged from the summit, do you think?
Cathy Li: Robin, you made a really good point. What differentiates the Forum's AI Governance Alliance and the summits we put together has always been, first and foremost, that we're community-based. And second, thanks to our agility, we're able to pursue some of the most urgent and needed actions on the ground in a very timely manner.
And the summit in November was no different. Over the three days, many topics emerged, with crucial themes centring on the importance of adopting a global perspective that extends beyond technologically advanced nations.
One significant focus was on ensuring the benefits of AI development extend inclusively to both developed and developing countries. Bridging the digital divide became a central topic, with participants advocating for increased access to critical infrastructure like data, cloud services and compute, alongside essential foundations for improved training and education.
Key takeaways included the need for clear definitions and thoughtful consideration in the open source and innovation debate, promoting public-private collaboration for global access to digital resources, and advancing AI governance through adaptive regulations, harmonised standards and ongoing international discussions.
Robin Pomeroy: One of the things you mentioned there was the digital divide, the concern that AI could become a rich country's or rich person's tool, and that the digital divide needs addressing. Millions, if not billions, of people don't have access even to the basic internet right now, and that risks getting worse.
And then there are lots of policy debates going on. One of them is highlighted by the next soundbite we'll hear. This is Andrew Ng, who's the founder of Coursera and of DeepLearning.AI. He was making a point very strongly on one of the issues that you raised, which was about open source, and I think he explains what that means. He has a very strong opinion on one side of the debate - there are opinions on the other side too. But let's hear what he has to say. This is Andrew Ng.
Andrew Ng: My biggest fear for AI right now is stifling regulation putting a stop to this wonderful progress that otherwise would make so many people in the world have healthier, longer, more fulfilling lives.
AI technology is very powerful, but it is a general purpose technology, meaning it's not useful for just one thing. Like ChatGPT or Bard: it's helping healthcare systems improve how they process medical records, it's making processing of legal documents more efficient, it's helping customer service operations, and on and on and on.
And so individual applications have risks and should be regulated. If you want to sell a medical device, well, let's make sure that's safe. If you build a self-driving car, that needs to be regulated. If you have an underwriting system to make loans, well, let's make sure we know how to check that it's not biased.
So when you think about AI applications, those have concrete risks, and I think they deserve regulators' scrutiny, and transparency and regulation.
Where the danger is, is regulation of the raw technology, because we know from economics: if you want less of something, then you regulate it or throw up friction. And so if you want there to be less intelligence in the world, then by all means throw up friction to slow down AI's progress. But I think that's a huge mistake. The world would be better off if there was more intelligence in it.
When you think about AI, think about electricity: tons of use cases to be worked out, and, yes, it can electrocute people, it can spark dangerous fires. But today I think none of us would give up heat, refrigeration and lighting for fear of electrocution. And I think so, too, it will be for AI. There are a number of harmful use cases, but we're making it safer every day, and regulating the applications - sound regulations to regulate that - will help us move forward. But flawed regulations to slow down the technology development, that would be a huge mistake.
Robin Pomeroy: Andrew Ng, founder of Coursera and of DeepLearning.AI. So, Cathy, what did this summit achieve, do you think? And what will the next steps be?
Cathy Li: The AI Governance Summit achieved the formulation of practical plans for the accountable and inclusive development of generative AI technology.
The next steps involve consolidating and sharing those plans at the Annual Meeting in Davos in January 2024, through the publication of our first report. And some of the tensions that you alluded to earlier, including the debate between open and closed source models, as well as near-term risks versus long-term risks, will all be debated in Davos too. The expectation is that those initiatives will guide further collaborative efforts and actions within the Governance Alliance and the broader ecosystem to ensure responsible and ethical advancements in the field of artificial intelligence.
Robin Pomeroy: I don't think there's any doubt that at Davos, which as we record is just a few weeks away, AI is what everyone is going to be talking about, in a way that possibly has never happened at other Davoses. I know it's been an issue, it was last time, but I think this time it's really going to be a headline issue. So very interesting to look out for.
Let's have one more clip from one of the interviews I did at that summit. This is, from Dubai, Khalfan Belhoul, who is the Chief Executive Officer at the Dubai Future Foundation, an agency that tries to promote fields such as artificial intelligence. Khalfan Belhoul.
Khalfan Belhoul, Chief Executive Officer, Dubai Future Foundation: Artificial intelligence, by the name, is not something that you can actually govern. You can govern the sub-effects, or the sectors that artificial intelligence can affect. And if you take them on a case-by-case basis, this is the best way to actually create some kind of a policy.
But the biggest challenge is how do you unify those policies and set best practices and standards, and then apply them on a global basis to ensure that everyone can use AI in the best way possible.
The Alliance should focus on, first of all, step one: getting the right voices in the room and coming up with an aggregated plan that has all those views in it, and then converting those into action items.
And maybe many people sometimes criticise those large convenings, that they are all about conversation without action. But when you try to convert those conversations into actions through this Alliance, I would probably say the first action would be some kind of a tangible use case or a pilot project that can be an example for the world, where, upon the success of this project, it can be gradually standardised.
And like I said, with artificial intelligence specifically, you would need to focus on a specific sector. For example, how can I impact the media sector and what kind of content can we use? How will we use that content? Once that's done, then you can gradually jump into different sectors.
Robin Pomeroy: That was Khalfan Belhoul, CEO of the Dubai Future Foundation.
Well, Cathy, thanks for joining us on Radio Davos. I'm sure you're going to be very busy there, but I hope to bump into you in the corridors, and to have you back on to tell us what happened and where we're going next on AI.
Cathy Li: Thanks, Robin, and looking forward to sharing more from Davos on AI.
Robin Pomeroy: Cathy Li, head of AI at the World Economic Forum. Thanks for joining us on Radio Davos.
To find out more about the AI Governance Alliance visit the website: wef.ch/AIGA. And listen back to our mini-series on generative AI from earlier this year - it’s in the Radio Davos feed on your podcast app, or visit wef.ch/podcasts, where you’ll find all our podcasts, including Linda Lacina’s weekly Meet the Leader.
If you like Radio Davos, please take a moment to leave us a rating. And join the conversation on the World Economic Forum Podcast Club on Facebook.
This episode of Radio Davos was written and presented by me, Robin Pomeroy. Studio production was by Taz Kelleher.
We will be back next week, but for now thanks to you for listening and goodbye.