If 2023 was the year we all got familiar with generative AI, is 2024 the year when governments will act on the governance of this powerful technology?
At Davos 2024 we spoke to these experts, from the industry and civil society:
Alexandra Reeve Givens, CEO, Center for Democracy & Technology
Aidan Gomez, Co-founder and CEO of Cohere
Anna Makanju, Vice President of Global Affairs, OpenAI
Catch up on all the action from Davos at wef.ch/wef24 and across social media using the hashtag #WEF24.
World Economic Forum's AI Governance Alliance: wef.ch/AIGA
Check out all our podcasts at wef.ch/podcasts.
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Alexandra Reeve Givens, CEO, Center for Democracy & Technology: We can all agree - easy sentence - AI systems shouldn't be biased. They shouldn't discriminate. What does that look like in practice? This is the year where people are actually going to have to answer those questions in detail.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. This week: if 2023 was the year we all got familiar with generative AI, is 2024 the year when governments will act on the governance of this powerful technology?
Alexandra Reeve Givens: We have to move into the action phase. It will be a key thing to watch in 2024.
Robin Pomeroy: We hear from the head of global affairs at ChatGPT's OpenAI.
Anna Makanju, Vice President of Global Affairs, OpenAI: We want a global regime that includes every country that thinks about catastrophic risk. Sam often talks about it as the IAEA model for AI.
Robin Pomeroy: So what are global leaders saying to Sam Altman?
Anna Makanju: Every world leader wants to understand how to harness this technology for the benefit of their country while putting in place guardrails, and people are on different ends of the spectrum about which one they prioritize.
Robin Pomeroy: AI has an almost limitless number of applications, so how can those guardrails work across such a broad spectrum of activities?
Anna Makanju: What are the very specific guardrails a model should have to prevent it from inflicting certain specific harms?
Alexandra Reeve Givens: I think you should forbid any person from ever saying something about, 'we need to do this for AI', without specifying what use cases they're talking about.
Aidan Gomez, CEO, Cohere: We have to protect the little guy. There have to be new ideas. There has to be a new generation of thinkers building and contributing. That needs to be a top priority for the regulators.
Robin Pomeroy: Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts where you will also find our sister programmes, Meet the Leader and Agenda Dialogues.
I’m Robin Pomeroy at the World Economic Forum, and with views from Davos on how AI can and should be governed…
Anna Makanju: This is an incredibly challenging moment for the world.
Robin Pomeroy: This is Radio Davos
At the World Economic Forum’s Annual Meeting last month artificial intelligence was very much the issue of the day. I managed to grab several interviews with people shaping the future of AI.
Later in the show, Anna Makanju, Vice President of Global Affairs at OpenAI, the company that brought us ChatGPT.
Including her in its Top 100 people in AI, Time magazine said: “There’s a good chance that whatever AI regulations emerge across the world in the next few years, Anna Makanju will have left her fingerprints on them.”
We'll also hear from a young Silicon Valley CEO, Aidan Gomez of Cohere, to get an insider's view of the conversations among the tech companies about regulation.
First, I speak to Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, a nonprofit organization based in Washington and Brussels that advocates, as they put it, “to protect civil rights and civil liberties in the digital age”.
I asked the head of the Center for Democracy & Technology how important the potential impact of technology on democracy was.
Alexandra Reeve Givens: Deeply important. We can think about many different ways that AI is impacting democracy. One of the themes we're hearing a lot about this week in Davos is mis and disinformation, deepfakes and the impact on elections. So that's one big piece of this.
But also we have to think about economic inequality and the role that that plays in a democracy and the survival of democracy in the long term.
For AI, that raises questions not only of job displacement, which is one of the themes we're hearing about, but also how decisions are made about people: who gets access to a loan, who's chosen for a job, whether someone gets approved for public benefits or not.
AI is seeping into all of those systems, and has for the past several years in ways that policymakers and companies alike really have to pay attention to.
Robin Pomeroy: Where do you feel things are going in terms of policymaking and governance? Are there tribal lines, and is it seen as a binary thing? Or are we going to get, through these conversations, some kind of consensus that will give us that governance, those guardrails, which is the cliché word people use, that we need and that will work for everyone?
Alexandra Reeve Givens: So it's my job to be an optimist as a public interest advocate. But one of the things that I'm truly optimistic about is it feels like we're in a moment where the lines have not yet been drawn between one faction or the other.
This is one of the refreshing areas of policy dialogue where companies, governments and civil society alike are calling for action and truly, I think, are engaged in good faith discussion about what that action looks like.
Now, of course, we have to go from high level discussion to actual rubber meets the road laws being written, policies being adopted, new designs being deployed by companies.
So we have to move into the action phase. And that's just starting to happen now, and I think will be a key thing to watch in 2024.
But the good news is it doesn't feel tribalistic just yet. I think the real question is what steps should we be taking and how do we get there quickly?
Robin Pomeroy: That's interesting, because I had an impression, coming very much from the outside to look at this issue, that there were those kinds of lines: that you have very kind of libertarian Silicon Valley types saying, just leave us alone, we're the smart guys, we can deal with this. And you've got a lot of people who are very scared about AI saying 'shut it down'. We had a call this time last year for a six-month moratorium. It seemed a little bit polarized back then.
Alexandra Reeve Givens: Yes. So sure, there's a libertarian manifesto that made the rounds, but I don't think that's reflective of where most corporate leaders are, particularly the ones that take governance seriously and their social responsibility seriously, which I like to think is still the majority, particularly of mainstream companies.
So there I think the real question is how do we go from them saying, we believe in responsible AI, to actually saying, great, how do you operationalize that in practice, and how do we make sure that your definition of responsible AI isn't just about something that escapes human control many years down the line, but is actually about respecting people's rights and freedoms and economic opportunity now?
And that's the real discussion: how do we get concrete on those commitments?
Robin Pomeroy: Is that one of the dividing lines in itself, this idea of short term and long term? The long-term worriers think we're going to get the killer robots or whatever, these super-beings who are smarter than us. Is there a risk of ignoring the short term?
Alexandra Reeve Givens: Yes. That was a big divide in 2023: whether or not we focus on long-term risks or short-term risks. I would like to think we've reached the maturity model now, with people realizing we can and must do both at once, and that actually some of the interventions are exactly the same.
So if you worry about long-term risks of AI either escaping human control or being used for bio or nuclear terrorism, some of the same mechanisms, which are transparency and accountability mechanisms, asking who is designing those products and who they are consulting with as they do it, are actually the same types of interventions that those of us who think about near-term human rights harms are focused on as well.
And so I hope we're seeing a little bit more of a convergence and less of a dichotomy in the field. And instead of it being, oh, do you care about X risk or do you care about current risk, it's actually about what are the interventions that are going to help address those harms, and how do we start making progress on those?
Robin Pomeroy: Okay, well let's talk about those interventions then. How do you go about intervening in a technology like this that's so widespread and changing so rapidly all the time? Maybe we could start with the point you raised about misinformation and disinformation, which indeed came at the very top of the World Economic Forum's Global Risks Report, among the short-term risks, the next one to two years. I don't know where it appeared in previous years. That, for example, is tangible, or is it? I don't know.
Alexandra Reeve Givens: Definitely a current risk.
Robin Pomeroy: How would you go about an intervention in that in a policymaking way?
Alexandra Reeve Givens: Well, it's a good example to focus on, because that illustrates that there's going to be no one silver bullet and it's not one actor.
So there's a role for legislation, but also, in the realm of mis- and disinformation, you run up on free speech pretty quickly. You can't just ban deepfakes or manipulated images. That becomes very, very hard when you think about the expressive purposes for which people might want to, you know, Photoshop an image, for example.
And so instead, what we have to think about is this hybrid solution. So let's look at individual use cases. When a deepfake is being used to extort somebody, to defame somebody, to spread nonconsensual sexual imagery about somebody, to manipulate an election - let's make sure there are legal interventions at that point that directly address that harm and hold that person responsible.
But then also let's move up the chain. So for general purpose AI systems, where obviously there are going to be good uses as well as harmful uses, what are the interventions that they can put in to try and regulate what those downstream bad uses might be? So what are their usage policies? What are their content policies? What triggers a red flag when somebody is using their system for them to then interrogate what that use is, and maybe cancel that person's subscription or their access?
Those are the types of questions we can be asking at the companies higher up in the stack, so that technology is still widely available to people, but we are putting guardrails on that help establish more responsible uses.
Robin Pomeroy: Does that, though, sort of rely on the goodwill of the company? Because we've had social media for a couple of decades, developing over time. Initially, I think, it was again a Wild West, a libertarian thing of, we're just a platform and people can have this open town square. And then the big social media companies have approached moderation of that content in different ways, which is such a live issue now, isn't it, as it was through the pandemic. And you've got, obviously, Elon Musk's X, which he's made more libertarian, and there are legitimate arguments in favour of that. But in the framework that we have at the moment, there seems to be a lot of reliance on that goodwill. You're hoping that the companies will put that in place. And this is a field where you could have maverick companies or just individuals producing mass-scale disinformation. And that's really hard to stop or to guard against, isn't it?
Alexandra Reeve Givens: It is. You can create mechanisms that increase company accountability, though. And I think transparency and risk-management frameworks are one of the clear places to start.
Again, it's not a silver bullet. It's not going to solve everything. But when we're thinking about low hanging fruit, so what are their content policies? What are their usage policies? How are they developing them? How are they enforcing them? Are they actually enforcing them with consistency and looking at whether or not somebody is using their system for mass deception campaigns, for example, if that becomes more common down the road?
There's some low-hanging fruit around what mechanisms of accountability look like that would allow governments to steer clear of legislating, you know, your tool may produce this information, it may not produce that information. Governments get very worried about that for real reasons around free expression. But we can still establish these norms for what responsible processes look like, allowing better governance to ultimately win the day.
Robin Pomeroy: You're immersed in the Washington policy, I was about to say bubble, but that sounds insulting, in that very active policymaking city, where obviously decisions have reverberations around the world. But to what extent is the governance of AI genuinely an international thing? Because you've had the EU producing legislation on this, and you've had China legislating about it, and you've had talks here at the World Economic Forum, but also at the United Nations and elsewhere, very much on a global scale.
Is it going to be country by country, putting this into place? Is it important that it's cross-border? And how do you see that happening?
Alexandra Reeve Givens: Yes. So again, we're going to need an all-of-the-above strategy. My organization, CDT, is based in Washington and also in Brussels. And those are two places where regional or national legislation is going to be hugely important. That's where you can get specific on use cases. You can fold things into existing laws.
So for example, if an AI system is being used to discriminate in the course of hiring somebody, national employment laws should apply, the exact same way they would if a biased person in HR were vetting somebody with an unfair standard.
So there's an essential role for that type of specific, geographically bound integration with existing law.
But also we need international cooperation, and we need it for a couple of different reasons.
One is that these companies, of course, transcend borders. The technology transcends borders. And also a meaningful enforcement regime is going to be one where there's harmonization so that we can begin to say what good looks like and have some of the same language, at least even if on a normative scale we land in slightly different places.
So that's where vehicles like the EU-US Trade and Technology Council come in, and other forums that have been doing a lot of work on standardization: what is the language that we use? What are the metrics for testing and evaluating an AI system? What are the rubrics that we use?
That can be incredibly productive at this early stage so that we get better, clearer benchmarking, and it's easier for the companies to have a more uniform approach that they then can toggle to comply with local and national laws.
Robin Pomeroy: Do we need a global treaty or something? You think about climate change or you think about the aviation industry, which has IATA, or the nuclear energy industry which has certain global regulations. Can that really apply to AI?
Alexandra Reeve Givens: I think you should forbid any person from ever saying something about 'we need to do this for AI' without specifying what use cases they're talking about. Because an international treaty might be hugely important when we think about AI and some of the long term safety risks, when we think about AI and its use for autonomous weapons, for example, there are places where international coordination is going to be essential to make sure that rights and safety are protected.
Then again, there are those more local and applied issues where really we're talking about enforcement of existing civil rights standards or quality standards that do make sense more on the local or national level. And the international work to be done there is more about harmonization, threat detection, information sharing, and less about one binding global norm.
So again, we have to be specific on these use cases and then toggle our different modes of intervention appropriately to respond to each one.
Robin Pomeroy: Democracy and technology. You are working in a democracy or in several democracies, as you are also in the European Union. Does AI pose a risk of enabling authoritarian governments to become more authoritarian, and is there anything that can be done about that?
Alexandra Reeve Givens: Oh my goodness, without question, there are real risks about how AI is going to exacerbate the power of authoritarian regimes and make it even harder to protect people's individual rights.
We can look at this from a surveillance-state perspective. Facial recognition is AI; AI is what powers it and what makes it more coordinated. We're seeing governments now that integrate the provision of public benefits and public services through an AI system.
So for example, there were famous examples in Iran where they were using face recognition to enforce the hijab laws. And at one point a minister even said, if you are in violation of that, we might dock your benefits - through an integrated system.
So those are the types of concerns that one can easily imagine.
And then, of course, there are concerns around access to truth and information. So even what are the norms through which a generative AI system's content policies are being written when OpenAI moves into a country with questionable human rights standards or different approaches to what information can and cannot be surfaced?
So we have to have a real conversation around this. One of the areas that I'm hoping we see progress on in 2024 is much more public accountability for how the AI companies that are based in global democracies are thinking about the human rights consequences when they move into and enter deals with countries that do not have strong human rights records.
There are guiding principles on this, the UN Guiding Principles. There are mechanisms through which these companies must be doing human rights impact assessments, should be talking about that publicly, should be doing it with external accountability and civil society oversight. And so far, that conversation has been completely lacking.
When people think about the governance of AI, they often throw up their hands and say, these are massive, big-picture questions. But there are really tangible things we could be doing right now: saying, what is the human rights impact analysis that you're doing, and how do we hold you accountable to that when you enter into business with a new regime? And that's one of the areas, just as an example, where we could be making progress right away.
Robin Pomeroy: You're meeting companies here in Davos. Do you think they're becoming aware of that and taking it seriously?
Alexandra Reeve Givens: Yes. Because this isn't new. There have been frameworks already for tech companies, for cloud service providers to decide whether or not to do business with a particular regime.
Now, have we fixed that and is it always done completely right? No, but at least there are frameworks and there's a language that we use around what those expectations are.
So again, we're at the maturity phase, where the AI companies can no longer just say, oh, we're new, we're still figuring this out. They are at a level of sophistication. They're on the global stage. They're at places like Davos. They have to be taking that responsibility seriously too.
And I think they understand it. It's just a question of making sure that this is as much of a priority as the next fundraising round or the next innovation, the next series of releases, is going to be.
Robin Pomeroy: I was following a session earlier today about AI and AI governance. Everyone seemed to be repeating this idea that when governments don't understand something, the knee-jerk response is to ban it or restrict it in some way, that that's not a very clever way of doing things, and that governments still need to learn what AI is before they can really do proper governance. Do you think we're at that stage or have we moved beyond that?
Alexandra Reeve Givens: I find that a bit of a false argument, to be honest. It's tempting, of course, for technologists, and I say that both for the companies and for public interest organizations like mine, to say, oh, government is slow, government doesn't understand. That's an easy excuse, to say let's spend the next two years educating policymakers on this rather than having them act on it.
So of course, policymakers have to be educated, but let's give them the credit where credit is due, that many of them are ramping up very quickly. They know who to call, they know who to consult when they're writing a piece of legislation or writing a policy. And so we cannot just say, let's take a breath and educate. We have to say, let's have informed policymaking action. But they can walk and chew gum at the same time.
Robin Pomeroy: What do you think are the next steps this year, in 2024? Are there going to be some things that everyone should be looking out for, if you're a user or a maker of AI?
Alexandra Reeve Givens: 2024 is a big year to move from high level principles into action.
The EU AI Act: they have their agreement, and they're going to be starting to really work out the details of what implementation looks like. And of course, that's where the rubber hits the road.
And then in the U.S. you have, not only conversations around legislating, but really where the action is focused is implementation of President Biden's AI Executive Order, which is agencies across the government, across sectors, all issuing detailed guidance about what responsible use of AI looks like in their sectors, and grappling with hard questions around government's own use of AI too.
All of those things require policymakers and the companies that are going to be impacted to get really specific on what good looks like.
Just to give an example, we can all agree - easy sentence - AI systems shouldn't be biased. They shouldn't discriminate. What does that look like in practice? What are you testing for? How are you doing that testing? And what happens to you if you don't meet the threshold of that test? This is the year where people are actually going to have to answer those questions in detail.
And I hope that means that it's a year of progress, because it's going to be in hashing through those details that we really figure out what is measurable, what is fixable, and what's the right accountability regime to make sure that people are taking that responsibility seriously.
Robin Pomeroy: You started this conversation by saying you're an optimist, and I think you're talking on the governance side of things and being able to control AI rather than having it control us. But I wonder if you're also optimistic on the promise of AI as well. Are there things you think AI can do that excite you that make you think, at least here, this is definitely going to be good for us?
Alexandra Reeve Givens: Without question. And actually, one of the things I worry about is this false binary, that if you're a critic of the risks of AI, it suddenly means that you're a downer on the technology. And I am absolutely not. I think it's going to have a huge power to transform the way that we work, the way that we communicate.
You think about the medical advances and so many other ways in which it really is going to just drive human innovation forward.
I think the watchword for me is responsible, rights-respecting innovation. And to me, there's this massive opportunity for us to try and get that right, to say we're at this cusp, we're at this threshold where, again, the companies are saying the right things about wanting this innovation to be harnessed and used in a productive and responsible way. We just have to fill in the details of what that responsible governance looks like.
And that really is the secret to success: showing that, of course, we can move forward with innovation, but we can do it in a way that lifts up everybody and respects and protects everybody as we do so.
Robin Pomeroy: Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, speaking to me in Davos, where I also met our next guest, Aidan Gomez, co-founder and CEO of the AI company Cohere.
What does the head of a Silicon Valley company like that think about the governance of AI?
Aidan Gomez: I think that governance is needed. We definitely need better policy and regulation. I think the way that we get there is incredibly important.
It's tough to regulate a horizontal technology like language. Language is - it impacts every single vertical, every single industry, wherever you have more than one human, there's language happening. And so it's the definition of horizontal in general.
I don't think we should be regulating it in a horizontal layer. I think we should be regulating it in a vertical layer and helping the existing policymakers, the existing regulators, get smart on generative AI and its impact on their domain of expertise, and help them, empower them, to mitigate the risks in their context.
I'm really quite optimistic about the policy discussions that are going on right now. I was nervous last year that we would get fear of Terminators and these sorts of sci-fi narratives around AI. I think that's cooled. People are actually interacting with the technology themselves. They see its limitations. They realize this isn't some sort of sci-fi novel. And so they're becoming much more practical, and it's turning into a conversation of: okay, what use cases are we okay with? What do we need to protect against? And how do we do that? How do we build a framework to do that?
So I'm optimistic we're headed in a good direction. And there seems to be really multilateral collaboration. Everyone's invested in this technology going well for humanity.
Robin Pomeroy: Within the community of computer scientists and engineers and people who are building and the companies that are investing so much money and creating these things, with those people who you must meet all the time, is there a consensus on that kind of thing, do you think, or are there clear dividing lines of: some people say, leave us alone, we don't want any regulation. Some people say, let's look at the killer robots and make sure we've got something there. I've heard some computer scientists talking, not exactly that kind of language. Are there different camps in your field?
Aidan Gomez: Absolutely. And lots of, you know, bitter academic debates about what we should be doing.
I think there's loads of different takes. Everyone has the same end goal, which is that this goes well. How you do that is hotly contested. Although I feel like we're finding alignment. I feel like we're finding some sort of middle ground between the extremes of 'we should not work on this technology', you know, 'shut it all down', and then the other direction, which is 'we need no regulation, do whatever you want with the technology, and just throw it out there'.
I think we're finding the right middle ground.
There are things to be nervous about. For instance, we need healthy, competitive markets. And this technology requires a lot of resources to build. And so that lends itself to the incumbents being able to use regulation, plus their channels, their moats, their access to capital, to block out a new generation of thinkers and builders. And we need to make sure that regulation doesn't aid that. Regulation should actually fight back against that and keep markets dynamic and allow innovation.
Robin Pomeroy: How would you achieve that? What kind of policies would achieve that?
Aidan Gomez: So you don't want to be too onerous on the little guys, guys that are just coming up trying to build new experiences, trying to innovate, otherwise you're going to entrench value in the companies that have 100-person legal teams, which can meet whatever regulation you come up with.
So you need to make sure that regulation takes into account who you're regulating and their scale, so that it doesn't block out the little guy.
There are lots of stories where that's failed, where regulation has genuinely cut small companies out of being able to compete in an environment.
And some regulation is good, it doesn't do that. It actually empowers the little guys to, you know, get ahead and work with the regulator more closely than the big ones, and get to a place where they're self-sustaining and able to play on the field.
Within AI, it's such a potent, powerful technology, I think that we have to protect the little guy. There have to be new ideas. There has to be a new generation of thinkers building and contributing. And so that needs to be a top priority for the regulators.
Robin Pomeroy: What's your feeling about global governance? Some people say the airline industry has treaties and organizations. I'm expecting you'll say, because you don't think it should be horizontal, that anything global maybe has to be in those kinds of verticals, in the actual applications or in certain fields. Do we need no international...?
Aidan Gomez: We need international coordination. We need to come to a consensus on what we want to do globally.
Whether there needs to be an actual international regulator, I'm kind of skeptical of that, because, to begin with, AI is such a general platform. It's such a general, horizontal technology that it's very, very difficult to regulate in those terms. It's actually kind of ill-defined to begin with. You don't know, like, what is a regulation on AI? It's so abstract and so general that there's not enough scoping.
But if you ask, how about AI applied to health care, then you can start to get concrete. You know what? We can't take doctors out of the loop. We need human oversight. We'll build a regulation which says any output from any AI system needs to be reviewed by a human doctor before it actually impacts a patient. You can actually get concrete there.
So I think it might be a fool's errand to pursue the horizontal regulation of a hyper-general technology like AI.
Robin Pomeroy: Aidan Gomez is CEO of Cohere. You can hear more from that interview in our episode about the pioneers of AI, that's on your Radio Davos feed.
One company that everyone wanted to meet in Davos was OpenAI - they brought you ChatGPT not so long ago. You can hear from its CEO, Sam Altman, speaking on a panel discussion at Davos, on our sister podcast Agenda Dialogues.
To get into the weeds on governance and policy, I caught up in Davos with OpenAI's Vice President of Global Affairs.
Anna Makanju: Hi, I'm Anna Makanju, and I lead the global affairs team at OpenAI.
Robin Pomeroy: So Anna, you've worked, correct me if I'm wrong, in the White House, NATO, the United Nations, as well as the corporate sector. Just wondering, how does working in a company like OpenAI compare to any of those?
Anna Makanju: What's remarkable is how similar a lot of the work has been, because this issue has become so central to many questions, including geopolitical questions. So there is a great deal of similarity that I didn't necessarily anticipate.
Robin Pomeroy: You've been traveling the world. As you mentioned before the interview, the headline says 'with Sam Altman', because he's the man of the moment globally, particularly here in Davos. But when you've been having discussions with heads of state and leaders of various stripes, what are the conversations people want to have with OpenAI, with you?
Anna Makanju: A lot of people want to know what the future will look like, which we have a glimpse of, about five months ahead, because we sort of know where the research is leading, what kinds of things AI will be able to do that we can't do currently. But at the same time, we are not able to anticipate the exact way that this technology is going to be incorporated, because now that it is out in the world, people are doing things with it that we didn't anticipate, that are often really incredible and creative. And also it's just becoming more and more integrated into society.
But basically every world leader wants to understand how to harness this technology for the benefit of their country, while putting in place guardrails. And people are on different ends of the spectrum about which one they prioritize and put more weight on. But this is kind of the same theme in all of these conversations.
Robin Pomeroy: The guardrails. It's such a complex issue, but is there, you must have an elevator pitch on: this should be the approach to guardrails and to governance. Do you have that?
Anna Makanju: A lot of people think that we'd have an elevator pitch. I think the one thing that we're very confident in is that we want a global regime that includes every country that thinks about catastrophic risk. And, you know, Sam often talks about it as the IAEA model for AI.
Robin Pomeroy: That's the [International] Atomic Energy Agency.
Anna Makanju: Yes. And we feel relatively optimistic that no one wants an AI system that's going to harm a huge number of people or be uncontrollable. So we do think that it's possible to have such an agreement.
But in general, I think it is quite difficult to have an overarching regime that has thought about every single way that the system could impact every single sector.
So I think for the most part, we believe that something like what the Executive Order in the United States does, where it tasks every agency to think about, how does that agency implement this, how does that agency have guardrails for what they do, makes sense.
Robin Pomeroy: So those agencies are looking at certain sectors, industries, applications if you like.
But globally, you talk about an IAEA. Any idea how that would work? The IAEA covers an industry which is potentially very dangerous, nuclear power, but it's fairly clear what it's for and what its misuse or accidents could be. It's not clear yet, is it really, what the catastrophic risks [of AI] could be? How would you see that organization or agency working? What would you see it doing?
Anna Makanju: We actually are doing quite a bit of research to understand and make more specific what risks there are, because I agree, to date this conversation has been fairly theoretical.
But we have a preparedness team that looks at this very question, what are some of the most serious risks that might arise.
And, you know, nuclear power is also incredibly beneficial. So similarly, I can imagine this agency both thinking about what the very specific guardrails are that a model should have to prevent it from inflicting certain specific harms, but also about how you distribute the benefits of this technology, whether that means having access to compute power for every nation that is part of this regime, or having tailored models. I think there is a range of ways you can think about what benefits people would be entitled to if they are also implementing the guardrails.
Robin Pomeroy: We're at a moment of disharmony when it comes to geopolitics. How easy is it to discuss a global agreement, with some kind of global consensus, on an issue that is new to most of us and is complex, at a time when, well, I don't need to spell out some of the geopolitical fault lines. How are those conversations going, bearing that in mind?
Anna Makanju: I can't say that we're very advanced in this discussion, but I feel relatively optimistic. In the Cold War, we were still able to have these kinds of discussions, and we were able to put in place regimes of inspection for nuclear sites. So we know from precedent that, for something where everyone is concerned about the impacts of a technology, because it is in each nation's own interest, this can be done.
Robin Pomeroy: Do you go on the record and say what those catastrophic events might be? I mean, could you give us an idea of what Sam Altman, what keeps him up at night?
Anna Makanju: It is unfortunate, because I do think we spend so much time only focusing on catastrophic risks and downsides. And I should first say that I think we are building this technology because we believe in the tremendous benefit and upside that it can have for people everywhere. So this was only in terms of where I think things might go with regulatory intervention. It's not necessarily the entirety or even the primary focus that we have. It's just something that we think is important to think about.
And I completely agree with you. There's still a lot of work to be done. But, you know, some of it is can a rogue system go off and set off nuclear weapons? These are the kinds of things that you can imagine.
Robin Pomeroy: What about this kind of self-regulation model? Because you formed this thing called the Frontier Model Forum. Could you tell us what that is and what it aims to do?
Anna Makanju: The idea there was: right now, every company, of course, thinks about what it means to release a model safely. I believe virtually every company now does red teaming, which is something that we've done with our models to make sure we can understand what some of the immediate risks of the model can be. How can it be misused? How can we mitigate those risks?
And there are all kinds of things we do. We constantly talk in the industry about responsible deployment, safe models. No one knows what that means practically. And so the idea here was that each company does something different. We don't even have the same vocabulary. How can we align as an industry on what the actual best practices on safety are?
And although it is a self-regulatory model, this type of approach is something we've heard a lot of governments ask for because they are talking to companies individually, and in the end, they're not actually sure what any of us are doing.
Robin Pomeroy: So it was created less than a year ago.
Anna Makanju: It's been just a few months, actually.
Robin Pomeroy: Okay. And remind us, the companies involved in it.
Anna Makanju: Right now it's Microsoft, OpenAI, Anthropic and Google. But this year I'm sure there will be new members as well.
Robin Pomeroy: I, as a journalist, have been covering industries in various different sectors over the years, and you often get sectors or companies saying, 'we know what's best for us. We'll self-regulate and we'll deliver. And you don't need to worry about regulation or interference'.
And there has been some resistance from some parts of Silicon Valley saying, no, leave us alone, we know how to do this best. Do you get criticism that you're banding together to self-regulate in some way to avoid regulation?
Anna Makanju: So the FMF is really not about regulation. It is about identifying a common set of practices. And I see it not as avoiding regulation, but actually feeding into regulation, because regulators need to understand where the thresholds should be set.
For example, in the Executive Order, they were really trying to find a place, you know, we need some reporting requirements on training runs, but where's the threshold? Because for a smaller company or a startup it may be very challenging to comply. So we want to set a threshold so it doesn't burden companies who are not resourced enough to do this. And because their products are not actually dangerous enough to require these disclosures.
So the FMF is really meant to feed into regulation where there is an information asymmetry. At the end of the day, the companies that are building these models know the most about them. They have the most granular understanding of all the different aspects.
And I don't think anyone should trust companies to self-regulate. But I do believe that it's necessary to have this dialogue, to have regulation that's actually going to be robust.
Robin Pomeroy: I think you just answered what was going to be my next question. There are concerns from startups, from smaller companies, that you've now got some of these very big names with huge budgets for research and development, and that they could get excluded. Are they right to be concerned, or do you think, the way things are going, there will be, there has to be, doesn't there, fertile ground for startups, for innovation?
Anna Makanju: There are two things. One, actually, we have seen an incredible explosion of startups and small companies doing this work and finding market share. So I don't know why that would be a concern because the actual evidence has been to the contrary.
But also, I know that there has often been this regulatory capture narrative. I mean, obviously, it's funny because I think it's like a lose-lose: if you're not arguing for regulation, then you're trying to avoid consequences; if you're arguing for regulation, it means you're trying to pull up the ladder behind you.
But that's why we've actually been very clear that we think the regulatory burdens should accrue on companies at that scale, because right now, in order to build a model at the next generation, you need to have tens, if not hundreds, of millions of dollars in chips, data centers, the talent. I mean, it is an incredibly resource- and dollar-intensive endeavor. And so if you have that, then I don't really think that we should be concerned about the burden of also meeting some regulatory requirements. And this shouldn't impact small businesses or lower-resourced companies.
Robin Pomeroy: Where do you see the year going in AI? Can you predict? Are people already saying, by the summer we're going to have this. And, I don't necessarily mean with your own company, but are there things you're looking forward to in the year, but also probably closer to your own heart in the regulatory or the governance process, are there things that people who are following this should be looking out for and ready for in 2024?
Anna Makanju: It's quite likely we will see more capable models that can do more coming out this year. But that's not even what will transform the world the most. It's the fact that... I think I saw some statistic that ChatGPT, even though it's on the front page of every paper on earth every single day, the number of people actually using it is quite low, people who have engaged with it or tried it.
And I think that's going to change this year. A lot more people are going to be using AI tools. AI is going to be integrated into a lot more workflows at every company. There are going to be AI tools that interact with each other. So I think just the proliferation, the novel use and the increase in use is going to lead to some dramatic changes.
And in terms of governance, as much as people talk about the AI Act agreement, we still don't really know what's in the details. And for people building these tools, we're really waiting to understand what we are actually going to be implementing.
Robin Pomeroy: This is the European legislation.
Anna Makanju: Yes, the European AI Act, which will be, I think, the only really comprehensive AI law of 2024. It doesn't seem like anyone else is close to something. I mean, the US may do something, but we'll see. I think attention is really shifting now to elections.
Robin Pomeroy: It's funny, that, isn't it? It's mostly American companies developing these things and releasing them, but it's the legislators in Europe who are taking that side of things on. Any reflection on why that's happening, and whether that's a good or a bad thing?
Anna Makanju: To be honest, to someone who appreciates democracy, I think it is a remarkable thing that you can have people from so many different countries come together and agree on a piece of incredibly controversial and complex legislation. Regardless of how it will impact us, it is in many ways encouraging. So I think in that sense it is a good thing. But again, I don't really know what the details will look like. So let me talk to you in a couple of months about the law itself.
Robin Pomeroy: Beyond the US and the EU, there's the rest of the world, which includes China particularly. And you've talked in this interview about some kind of global governance structure. How do you see China and everywhere else in the world coming together at some point in the future?
Anna Makanju: So it's interesting. You know, you may have seen this, but China's been incredibly active in the regulatory space domestically. And they've also been very engaged at the UN, because the UN is running a process now where they're really trying to look at what a governance structure through the United Nations might look like.
And so this is a question that China's incredibly interested in. And I know that they are likely interested in more than just the catastrophic risk piece. But they've done a great deal of thinking about this issue.
So I do think that there are venues and there is a reason to be somewhat optimistic that there will be some movement towards a global agreement.
Robin Pomeroy: Can I ask you, just on a personal level, do you use AI and what do you use it for?
Anna Makanju: I use it quite a bit, although it's funny, you know, maybe not as much as one would anticipate.
One of my absolute favorite uses is that you can toss a PDF into ChatGPT and just ask it questions. So whenever I get, you know, a 600-page draft piece of legislation, that's the first thing I do: just say, summarize the main ideas. Tell me, is this covered? Is this issue in there? And it saves me so much time.
Robin Pomeroy: So in the past you might have done a keyword search in a text document.
Anna Makanju: But this is much better, because even if the word is not in there, the AI understands if the concept is discussed.
This was actually even possible with GPT-3, even if it wasn't as effective. But I remember doing this demo on the America COMPETES Act, which is an enormously long piece of legislation. And so, you know, we asked it, is there anything in here about a game played with sticks on grass? And it said, well, yes, it does talk about golf clubs needing to be a certain diameter.
Robin Pomeroy: Is there a book you would recommend? Doesn't need to be about AI. It could be anything.
Anna Makanju: Well, I'll tell you two. I very sadly rarely have time for books now, but I just finished Chip War. Excellent, and I think...
Robin Pomeroy: Whose author was just on Radio Davos a few months ago.
Anna Makanju: Oh, okay.
Robin Pomeroy: Well, if you've read the book you probably don't need to listen.
Anna Makanju: Nonetheless, I may do so. And then I recently reread The Left Hand of Darkness by Ursula Le Guin, one of the first, if not the first, female winner of the Hugo Award, which is the main science fiction award. It's aged amazingly well. It's just such a remarkably wonderful book if you want fiction.
Robin Pomeroy: Anna Makanju, Vice President of Global Affairs at OpenAI. Before her you heard Aidan Gomez of Cohere and Alexandra Reeve Givens of the Center for Democracy & Technology.
The World Economic Forum is working to bring industry leaders, governments, academic institutions, and civil society organizations together to work for responsible global design and release of transparent and inclusive AI systems. Find out more on the website, search for the AI Governance Alliance.
We have lots of episodes on AI, all available on the Radio Davos feed wherever you are listening to this. And all our podcasts are at wef.ch/podcasts. And join us on the World Economic Forum Podcast Club -- that's on Facebook.
This episode of Radio Davos was written and presented by me, Robin Pomeroy, sound engineering in Davos was by Juan Toran. Studio production was by Taz Kelleher.
We will be back next week, but for now thanks to you for listening and goodbye.