There are many competing perspectives on how AI should be governed. How can organizations build a discerning process to prevent blind spots when it comes to governing these models?
Khalfan Belhoul, Chief Executive Officer, Dubai Future Foundation
Alexandra Reeve Givens, Chief Executive Officer, Center for Democracy and Technology
David Robinson, Head of Policy Planning, OpenAI
Xue Lan, Professor; Dean, Schwarzman College, Tsinghua University
Ian Bremmer, President, Eurasia Group (moderator)
This is the full audio from a session at the World Economic Forum's AI Governance Summit 2023, on 15 November 2023.
Find out about the AI Governance Alliance: https://initiatives.weforum.org/ai-governance-alliance/home
Check out all our podcasts on wef.ch/podcasts
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Ian Bremmer, President, Eurasia Group: Hello, everybody. I'm Ian Bremmer, president of Eurasia Group. And we have a fantastic panel and a very important topic for you today: the New Age of Governance of Gen AI.
And of course, when you're talking about the World Economic Forum, you really want perspectives from everywhere. We're certainly giving you that today.
From my left, Khalfan Belhoul, who is the Chief Executive Officer of the Dubai Future Foundation in the Emirates. Alexandra Reeve Givens, the CEO of the Center for Democracy and Technology in Washington, DC. Xue Lan, professor and Dean at Schwarzman College at Tsinghua University in mainland China. And David Robinson, who is Head of Policy Planning at OpenAI, basically down the street. So, there you have it.
This is a fascinating topic. It is the most fast-moving topic in terms of governance and geopolitics that I have experienced in my professional career and our panel is going to try to help us navigate where it's going.
Usually I think about opportunities, then I think about risks, and then I think about governance, sort of in that order. But I want to start with just a couple of moments from David, at the end, because it is moving so fast and you are one of the foundational players in the space. Can you tell us where AI is right now and where it's going really soon? Not three months ago, not six months ago - right now and in the near future, what are the things coming down the pike that you think are going to matter, that we need to pay attention to, and that will play into how governance needs to respond?
David Robinson, Head of Policy Planning, OpenAI: Well, it's a good question, Ian, and thank you and thank everyone for coming together to have this important conversation.
First of all, from OpenAI's point of view, there are all these governance conversations, as you say, about risks. But the place we always want to begin is with the benefits that motivate us to build this stuff in the first place. Right?
So, for us, our mission is artificial general intelligence that benefits all of humanity, in a safe way. We think about safety, but we also think about getting it out there into the world. That's why we originally launched ChatGPT: we said, here is something that is powerful. We're working to make it safe, but we also want to see what it's doing in the world.
And that's something that we've continued. So if you think about, for example, the announcements we just made, I guess now last week, at our developer conference, just by way of illustration of the direction of travel: things are getting easier to use. There's less picking from a menu. You have different modalities getting combined, so you can take a picture and show it and it'll analyze the picture. You can now talk to ChatGPT in your pocket. And there is a coming together of different kinds of experiences where, underneath all of that, there is one engine, which is intelligence.
And I think, when we think about how OpenAI's work gets into the world, everybody knows ChatGPT, right, which is this app that we can each use personally. But if anything, the equal or larger opportunity is to provide that kind of intelligence, that kind of capability, as a service that others can build upon.
So you'll hear the letters API. That means that a developer who's working in some setting, like a hospital or a business, and wants to do something specific can harness that engine. And as that happens, you're going to see not just a chat app, but intelligence infusing all kinds of experiences.
So even though we've had the technology, I think what we're on now is an adoption curve where there's a bit of a lag time for people to build the ways in which this engine is going to be connected to our lives.
Ian Bremmer: I remember there was a big question in the previous presidential debate about like, are you prepared for the call at 2 a.m. when there is suddenly a serious crisis? If you were to get a call at 2 a.m. that really were to worry you, what would it be about? What do you think that call is in the AI space?
David Robinson: You know, I'll tell you what we just launched a preparedness team to think about, which basically you could construe as an answer to that question, which is-
Ian Bremmer: Being very generous, apparently.
David Robinson: Well, okay, fair enough.
Look, the worst risks keep us up at night, right? That's what that red phone analogy is about. And, you know, we have for a long time been thinking hard about what are called CBRN risks, which are chemical, biological, radiological, nuclear types of risks where, you know, could this be helpful to someone who wanted to do something terrible?
The answer is not yet. And we're working hard to make sure that that does not happen. But it is certainly something that we think hard about.
And then - you wanted present and future, not the past - the other piece of the future that we are working on very, very actively, and in fact have devoted 20% of our compute to, is what's known as the alignment problem, which means making sure that these very, very smart technologies, which in our view are foreseeably going to become smarter than we are, remain under our control. That's what our alignment team is for.
Ian Bremmer: Thank you. That was actually a pretty darn good answer to that question. So, going with that: the proliferation issue and risk, and the alignment issue and risk, both of which are things China is thinking a lot about. So, Lan, how do you think those two should be addressed, can be addressed? And is China leading the way? Does it intend to lead the way?
Xue Lan, Professor; Dean, Schwarzman College, Tsinghua University: Well, I think if you look at China's development process, China actually started an AI plan in 2017.
I think basically China is really trying to work both, you know, top down and bottom up.
Top down: within that AI development plan there were actually already concerns about potential risks. And so, if you look at the so-called measures to support the development and governance of the AI plan, the first thing it says is that we're going to develop a set of regulations and legislation to ensure the safe use of the technology.
But at the same time, if you look at the process of how Chinese governance of AI has actually evolved, it really has been a very adaptive process, with various regulations based on more specific, domain-specific areas.
For example, data governance, algorithm governance, and also the application domains, for example medical or whatever. So at the top level there are AI ethical principles, but also, bottom up, there are more specific regulations. So those two are kind of merging, from top down and bottom up.
I think that's sort of the Chinese approach in addressing some of the concerns that people have talked about.
Ian Bremmer: A question I wrestle with, and I'm wondering how you would respond to it: as the Chinese engage in AI, again at the public policy level, do you see more of an opportunity that the roll-out of AI will allow a centrally planned economy to become really efficient? Or do you see more opportunity that top-down data surveillance and metrics will allow for greater political stability of the present system? Which is the bigger opportunity for a country like China?
Xue Lan: Well, I think for China, China sees AI as no different from other technologies that really provide huge opportunities for generating benefits for people and therefore for society.
So, China in many ways is really pushing for innovation and for the development and deployment of AI technology. AI has already been used very widely in Chinese society and has generated huge benefits.
But at the same time, of course, people do have concerns about the potential risks, about the invasion of privacy and about many other things. So, I think the government has to respond.
So it goes through that kind of cycle: you use it, but when you have some problems, the government has regulations to address them. And I think it's this kind of interaction and adaptation that really pushes the technology towards - I like the words adoption and diffusion. And so that's what we see: AI has already diffused into many different areas.
Ian Bremmer: But what I'm really getting at: China is a global leader in AI. A lot of resources, a lot of capable people, a lot of government focus, a lot of corporates. I'm asking, do you see AI as providing more support and strength in the future for the Chinese economic model or the Chinese political model? Where do you think it's actually going to have more impact?
Xue Lan: I think on both. AI is a tool that indeed generates huge economic numbers. But it also helps governance and also, of course, presents a governance challenge. So I think you have to wrestle with both.
Ian Bremmer: You clearly have not gotten yourself in trouble with that answer. So okay, we're good. We're good.
Let me now turn to Khalfan, because from your perspective, right, it's a new country. You've got all sorts of people, you just go into space - you weren't really thinking about that ten years ago. Crypto everywhere, I'm seeing - I mean, you're committed to it. And now AI. It's kind of a venture capital approach, right? So how much time are you really spending on governance, and how much of it is just: hey, everybody, we're open for business, come on in?
Khalfan Belhoul, Chief Executive Officer, Dubai Future Foundation: Okay. First of all, good to be here. I mean, great to be with you. I'm honoured to be a panellist. I can't really see the audience because of the lights, but great to be here with you all.
Ian Bremmer: They love you. Honestly.
Khalfan Belhoul: I love them back. I love them back. I can't see you.
You've summarised it well. When it comes to Dubai and the UAE, we as government officials literally feel like we're working with leaders that are entrepreneurs, that are wearing a venture capitalist kind of hat.
What I mean by this is there's so much delegation of risk-taking, there's so much acceptance of failure - we've gone through such a fast journey in such a short period of time. The country is barely 50 years old; we were heavily dependent on pearl trading in the sixties. Then, of course, came the oil discovery. Fast forward: we're sitting here now in San Francisco discussing AI, discussing space, and we were discussing blockchain six, seven years ago.
So, I think the idea - and this is a segue into what the Dubai Future Foundation is all about, which is also a segue into our discussion today. So if you can just give me maybe a few moments to-
Ian Bremmer: Go crazy.
Khalfan Belhoul: I won't, I'll try not to, but. DFF, the Dubai Future Foundation, simply gives you an idea of how the DNA of the country functions.
It's a partnership we have with the World Economic Forum. Six years ago there was a gathering in the UAE, which you have attended, Ian: the World Government Summit. It's a global convening of government leaders and executives from all over the world. And there was a small immersive experience within this summit, probably the size of this room, on future-relevant topics. And back then the ideas were the future of food, applying robotics, exploring space. And it only took one visit by our leadership to actually walk through that small immersive experience where there were leaders from all over the world discussing those important topics.
I remember His Highness Sheikh Mohammed walking in from one side, coming out from the other door and saying: “Hold on, this conversation cannot be confined to the delegates of the summit. This process of thinking about the future should be institutionalized. We need to have a process for this.” So, he comes out of the other side of this immersive experience, announces Dubai Future Foundation, appoints a team chaired by the Crown Prince of Dubai. The board has ministers and leaders across all sectors and I'll come back to that. And the beautiful thing is then once the CEO was appointed, he had to figure out how this all functions. Right? And that's the beauty of our leaders: they come up with the seed of a vision and then we need to come up with a way on how this works.
What I'm trying to get at is: you have the engine that's accepting risk, you create a creative platform to test new ideas, and you have access to government agencies.
Now, when it comes to AI, it's the same thing. We just connect the innovators, we connect the government leaders, we connect the funding mechanisms and we try to work out those solutions.
And whoever tells me that AI - and we have the experts here, of course - can be governed in general, I think is mistaken. I think AI is an enabler across all sectors, and the only way to actually understand how to govern it, Ian, is to go through specific use cases where it really applies and in what sectors, and to get in the experts across those sectors, with the involvement of regulators, investors, entrepreneurs, like I said, and the right financing mechanism and the right speed. You get those on board and then you can figure out solutions.
Easier said than done though.
Ian Bremmer: Thank you for that. We're definitely going to get into governance. But before we do, I want to give Alex a chance to talk specifically about AI and society, AI and democracy.
And we talked a little bit before the panel, and you said you were most interested in - I really want to give you a chance to do this - talking about specific examples, because so often I am in rooms where we are really at 100,000 feet, and people want to know: how is it affecting my life? What are the opportunities, what are the dangers as we roll this out at breakneck speed? Everyone's adopting it. It's not going to have global governance tomorrow. The technology is going to move faster than the institutions. So, what does that mean concretely for the work you're doing?
Alexandra Reeve Givens, Chief Executive Officer, Center for Democracy and Technology: Sure. So I think it's helpful to get specific, right, because it's nice to think about countries being a testbed for innovation, but also governments have an obligation to think about the rights of the people living within their borders and around the world.
And there are really concrete harms that we need to think about in a serious way. So I'm going to interpret this as your version of the 'what's the 2 a.m. phone call' question: what keeps me up at night leading a democracy organization?
First of all, in terms of level setting, you should never let anybody have a conversation about AI without asking: what are you talking about and what do you mean? So I'm going to try and do that by saying I'm not speaking just about generative AI. This is not just about what OpenAI is doing, but about other AI uses as well, particularly where AI is being used to make decisions today that impact people's rights and their lives.
So I think about kind of three big buckets that we need to really focus on.
One is how AI is impacting people's access to economic opportunity and potentially deepening socio-economic divides. When you have a technology that functions by learning from existing datasets, identifying patterns and then making decisions based upon the patterns that it sees in those existing datasets, that is a recipe for replicating existing social inequality.
We're seeing that when AI is used in decisions about who gets a job - in hiring recommendations, for example. If you don't design that well and you train it just on a dataset of who is currently at the company, you're going to replicate existing social harms.
But we can also think about this in terms of healthcare systems: as they are built on existing training sets, how do we make sure that they work for the communities and the people that are not well represented in those datasets, and that this is embedded from the very beginning?
A second set of concerns is around people's individual freedoms and their rights. And this is particularly an issue where governments around the world, including here in the US, use AI as part of their surveillance, policing and law enforcement capacities. We can think about this in the realm of face recognition technology, again used in systems around the world. We can think about it in terms of predictive policing and where resources are going. We can think about it in terms of people's social media communications or their online browsing habits feeding into government interventions and government surveillance. That is all powered and enabled, and will be increasingly enabled, by growing AI capabilities. We need governments to be accountable when they're using the tech in this way.
The third and final bucket that I'll touch on is informational harms. And this is a big one because we live in a connected society and there's so much benefit that comes from this. But at the same time we have to think about how AI recommendation systems, this is where generative AI comes in as well, can impact the way in which we access information around the world and the way in which we communicate with one another.
So you can think about this in terms of representational harms. When you ask a generative AI tool to write a story or to create an image for you, what story is it telling and how is that story showing up? Is it able to tell a story about a same-sex couple? Is it able to generate an image of somebody who is a CEO and have diversity in how that CEO is represented?
When you think about mis- and disinformation and the growing risk of deepfakes, we already know that access to reliable information in our connected age is a challenge. We've seen that play out in the United States at home as well as in countries around the world.
Now, it's not just easy to create a deepfake - you can do it at scale, right? So we can easily make misleading representations about a political figure or a news event. And it doesn't require sophisticated computer skills; it's really easy to do at the click of a button. And it is easy not only to do it once, but to tie it into a coordinated campaign where it can look like different actors are generating similar images. So it creates even stronger apparent indicators of truth.
Now, the solution isn't to ban the technology, right? There are plenty of good reasons and good uses that this tech can be put to, but it tells you why governance is so important, and that's governance at the developer level, the companies creating these tools, at the deployer level, the social media platforms and others that are allowing this information out into the ecosystem, and for governments as well to step up and act. How are they boosting trusted sources of information? How are they showing up in this confusing moment to help people pierce through and get the information that they need?
So there are many more things we can talk about. But I think about those three buckets because it kind of crystallizes it.
And what's interesting to me - David and I know each other, we've worked together for a long time - there's a really important conversation happening around long-term safety risks, around alignment, and that needs to happen.
But the harms that I described right now - every single one of them is happening today. These aren't future harms. We don't need big safety research institutes to address these harms. We need companies and governments to act now.
So, as you talk about AI, this is why I say you need to push people on what are the harms they're thinking about and what version of AI are they talking about? There's some low hanging fruit we could be going after right now to try and address some of these.
Ian Bremmer: I'm really glad you brought it up that way, because I have very little interest in talking about AGI (Artificial General Intelligence) on this panel. I'm very interested in talking about artificial intelligence right now and like in the next year, because we see the impact.
I see that - you have vaccines and you test them, even in a pandemic, before people can actually use them. You have genetically modified foods and you're going to test them before you roll them out. Algorithms, when we talk about social media, were rolled out and we're experimenting in real time on populations.
Now, with the executive order, and with the voluntary commitments we saw before that from the White House, we're looking at what we need to do to make sure that we're testing these models.
But there are lots of ways to test the models, right? You can test them in terms of whether they can be abused and misused beyond their original intention by bad users or even by other AI bots. What are the implications they have as they're used on children, as they're used on populations?
I'm wondering, as you're at the cutting edge of this technology, where do you think we need to go? Because there are harms that are happening right now. We're already rolling this out, right? The horse has left the stable. What needs to happen? What are your priorities for what we can do to ensure that these algorithms are not causing public harm?
David Robinson: So it's a great question, Ian, and I actually would love to start where Alex left off, which is to say that we think both these more advanced, AGI-oriented concerns and the things happening today are essential to address.
And if you look at the voluntary commitments that we made - look, we recognize we're building this. There's expertise inside labs like ours that is not inside governments today. And so it's part of our responsibility, because again, our mission is safety and benefit together. And just a momentary sidebar to say that, unlike a typical corporate structure where maximizing profit is the goal, we're actually owned by a public charity, as you may know, and we have a fiduciary duty to the mission I've described, even at a cost to our profits. That is baked in, too, at the staff level, into how people think about what we're trying to do.
And so if you look at the voluntary commitments that we made: we promise that every time we do a major new model release, we're going to have it red-teamed - we and the other firms that have made these commitments. We're going to organise red teaming, which means having experts kick the tires, and we're going to do what's called a transparency report, a system card, where we say: look, here are the worries we worried about, here are the mitigations that we made, here are the problems that still remain to be solved. And we did this most recently with our image model, Dall-E 3. You can give it some words and it'll draw you a picture. And part of what we describe there is how we mitigated some of the very bias concerns that Alex just mentioned. So, for example, demographic diversity in the kinds of people that we depict when we're asked to depict people in various situations, including in leadership roles.
Those are the kinds of things that we take very seriously. And then some of this is about really educating people. We have the base training, the big supercomputer piece, where it's patterns from lots of data, and then we have what we call post-training, which is where you fine-tune it: you teach it to follow instructions, you teach it what kind of an answer you think is a good answer, right? And you sort of steer this intelligence toward the outcome that you want.
And part of what we do when we put out a system card is to educate people about, look, this is how the building works. These are the intervention points. And we hope and expect that there will be, you know, democratic input into that.
And I guess one last piece is just to say that our belief about how to get things right is by actually interacting with the technology. We don't think you can theorize it all in advance. And so we believe in deploying gradually and then learning as we go. And one of our research initiatives is to get more people - not just experts in San Francisco, but people around the world - doing that and giving us more input into what it should do.
Ian Bremmer: So Khalfan, when I hear this: one of the things that has allowed the Emirates to be successful is building a culture of trust, international trust, that when you do business in Dubai, contracts are actually going to be stood up. When you think about AI, how do you build trust in the context of both an environment where data is going to be controlled from on high, but also where people need to understand that they're going to be able to behave in ways that are acceptable to them long term?
Khalfan Belhoul: Yeah, that's a great question. And I mean, I totally agree. And I've also enjoyed the previous session where there was a lot of focus on trust as well.
And I think there are two major sides to it, and there's no other way going forward. I think, first of all, I'd start with the point that you mentioned about collaborating and understanding and working jointly.
I think the world, Ian, has thrown us so many signs - across all the issues around the world, whether it's economic, geopolitical, pandemics or the opportunity from the digital world - that going forward there's no other way but to actually unite and solve things together. So that's inevitable.
And the only way forward to solve this is to actually work together. When it comes to trust, that’s also something that we will have to pay much more attention to.
And the best example is maybe something I shared with you offline, Ian, and with my fellow panellists. We went through the pandemic and there were obviously major challenges hitting major economic drivers for the country. But it was the trust factor once we opened up - because clearly, locking down for such a long period of time isn't sustainable. In the beginning, health and safety was our top priority. But after that phase of really raising awareness of the pandemic, we had to open up, and we opened up, and you saw how much trust was handed over: people were abiding by the rules 95% of the time, which was followed, obviously, by the vaccine rollout at that time. And now you look at the numbers, we're even better than 2019.
But that's a small sign. When it comes to AI, obviously, again, much easier said than done, but there's no way forward other than creating a trust mechanism where people can feel responsible and liable whenever they share information, or whenever they share the wrong information. This is the only way forward, so they can benefit from AI and the systems can work properly.
Now to another point that has been mentioned, the best way to achieve that is to really have constant conversations. And we of course enjoy this with the World Economic Forum through different partnerships through the Centre for the Fourth Industrial Revolution and different events that we have.
But at Dubai Future Foundation, we also have an annual convening called the Dubai Future Forum. We invite futurists from all over the world. We have panels and topics and we had a specific assembly for generative AI. It was called the GenAI Assembly. That happened three weeks ago. And it's just more conversations, more toolkits and pilot projects and involving everyone.
And the point you mentioned was extremely important: access to the right infrastructure, the right technology. If we leverage the data in the right way, we will realize that not everyone is fortunate enough to get access to the value of artificial intelligence. But if you look at it in a positive way, AI, if deployed in the right way, can actually solve for this: we can actually figure out where the gaps in the world are, where the needs are, and where the world really has to pay more attention.
Ian Bremmer: So Lan, when we talk about trust: the United States and China finally have their summit meeting on Friday, and these are two governments that have very little trust for each other. And yet the announcement of a track 1.5 on artificial intelligence seems to be one of the positive breakthroughs that we're going to see between these two leaders.
What needs to happen? Where, specifically, are the areas of AI conversation where the Americans and Chinese might be able to build some trust?
Xue Lan: Well, first of all, let's go back before 2018. I think there was a lot of trust between the US and China. If you look at the academic collaborations, US scholars and Chinese scholars published more joint papers together than any other pairing of countries. You know that.
And also, if you look at China's AI business development, there was a lot of venture capital, from US sources or other sources, that went into Chinese AI development. So there were a lot of collaborations and a lot of trust.
But since 2018, the US sanctions on the Chinese tech sector began to block those kinds of collaborations.
So now, with this kind of summit meeting, hopefully that begins to show the willingness to collaborate again.
And certainly in many areas there are common interests to work together. One example I see is how to prevent military competition - arms competition - in AI. There it is certainly hugely in both sides' interest to come together. And of course there are also many other issues related to business development, and how there could be collaborations to unleash the huge potential that might exist.
So I think there could be multiple ways that the US and China can work together.
Ian Bremmer: With the present export control regime that the United States has on semiconductors and the related ecosystem - if that persists, is it still possible to build the cooperation that you're talking about?
Xue Lan: Well, first of all, if that persists, it will not only harm Chinese development, but it will also harm US AI development. With the semiconductor industry, if they develop all those chips but couldn't sell them to the Chinese market, they will suffer as well.
So I think that the whole global semiconductor industry will also suffer. So, I think that's probably the first thing.
The second thing is that, if that's the case, it will certainly force Chinese AI developers to find their own ways to develop these capabilities themselves. So I think that would certainly happen over time.
The US and other countries already have this kind of regime in the so-called Wassenaar Arrangement, which blocks tech transfer from Western countries to China and other countries. So let's just leave that regime alone; we don't touch that. But certainly in the commercial areas there is huge potential for collaboration, rather than having these kinds of sanctions.
One thing I wanted to clarify: I think there seems to be a misconception that China is in competition with the US in trying to achieve AI supremacy in the world. If you go to the Chinese market and the Chinese industry, people don't worry so much about the competition with the US. People are really concerned about how we can develop the best technology to be used in various areas: in medical services, in agriculture, in the environment. I think that's what people are actually concerned about. I don't think companies are so much interested in competing with the US on that.
Ian Bremmer: The fact that this is going to be a track 1.5 and not a track 1 should certainly be helpful in addressing that point. Whether it's successful is another question.
So, Alex, we've had a bunch of different perspectives here. I want to open the aperture a little. We've got the AI Act in the EU. We've got a high level panel from the UN. We now have an Executive Order from the United States. Arguably, those are three of the most significant kind of directional orientations we have in Western AI governance. And then, of course, you have what the Chinese are doing domestically.
Talk to me a little bit about - I know they're different, they have different cultural orientations, different priorities, different focus - who you think at this early stage is getting it most right, most wrong, and why?
Alexandra Reeve Givens: Yes, it's a great question. And this ties to the issue of trust, right? Because the most meaningful way to have trust is actually to bake in rules of the road and protections that people can know and rely on.
The Europeans are moving forward with the AI Act. Of course, legislation is going to be the most comprehensive and the most fully baked; it can regulate private sector behaviour.
In the US, so far, the Biden administration is cabined to the powers of the executive branch, so they can issue guidance to enforce existing laws, but they're not adding on new legal obligations; that's going to have to come through Congress acting.
But all of those conversations are hugely important.
When you think about it from a trust perspective, what is it that got us comfortable driving on the roads at high speed? It doesn't work just for one car manufacturer to say: we have best practices, here's what we do. You need all of them to have rules of the road, to have basic protections that you can rely on, and for there to be a surrounding ecosystem of traffic lanes and stoplights that we all know and understand, so that the ecosystem can function well together.
So as I view this, we're going to have to have a combination of legislation that protects people's rights and bakes in some of the fundamentals. And then, because legislation moves slowly and when you write it, it has to be evergreen, you have some vagueness that needs to be filled in. So we also need companies rising to the moment to fill in the gaps through multi-stakeholder agreements. If you'll indulge me, I'll talk for just a minute about what that can look like.
Crucially on the regulation front, there are some basic rules of the road that would go a long way to helping make sure that these tools are deployed responsibly.
We can think about data privacy rules, right? What are the inputs that are being gathered? How do we make sure that these tools are responsibly gathering and processing information? We can think about - in the US we use the language of civil rights protections; globally we might use human rights as the language - what are the basic rules around how and when these tools can be used? And what is the access to remedy for people who are unfairly harmed by these tools?
We can think about basic transparency norms. David talked about the really important work that OpenAI has been doing with system cards, pioneering what it is to be transparent. That shouldn't just be a voluntary commitment by a company trying to do the right thing. That should be table stakes for every company, and we should have agreed-upon norms around what transparency meaningfully looks like, so that people know, and so there's some type of common language being used by different companies that also works in different jurisdictions around the world.
Then, of course, we can talk about sector-specific regulation for, you know, the long-term safety risks, nuclear capability. There are different things that different verticals might want to address as well.
So that's one key area where we can have meaningful progress, and the European AI Act is starting that; in the US, legislative conversations are carrying it forward. Government use, which I was alluding to before, is another one.
But let me quickly, before you take the mic back, talk about what that private sector involvement has to look like, too. And I think that's particularly important in a space like the World Economic Forum, where we have people thinking about what types of commitments we can make as a multi-stakeholder body.
So, we had the voluntary commitments that a number of companies made to the White House. We've had similar efforts in Europe to think about what that might look like. The G7 has a code of conduct that they've put out, and the UN now has this advisory body.
So a lot of different places where people are trying to define what good looks like. That is a really meaningful breakthrough. It gives me hope about this year. It gives me hope about the AI conversation.
But there is a fundamental flaw in how this is working right now. Right now, companies are meeting with governments. They are thinking through what is the suite of commitments that we can pledge to undertake. Then they are writing them together and they're releasing them onto the world. It's a really good first step. That's not how you achieve meaningful accountability in the long term.
Multi-stakeholder bodies are multi-stakeholder for a reason. You need to have civil society and external third parties in those conversations as well, helping to build out the scaffolding of what responsible development and deployment looks like.
You need deadlines and timelines. You need accountability measures for how those companies are going to report their progress on what it is that they're promising to do.
And it works much better when there are outside groups that can help participate in that conversation. I think a key thing to know is that this isn't our first rodeo, right? This isn't the first time that the economy has thought about how to deal with breakthrough technology. And we can learn a lot from the social media wars and the scholarship that has emerged around what the field of trust and safety looks like and what meaningful, multi-stakeholder governance looks like, too.
And there we have bodies that have sprung up - they should be more empowered, but they have sprung up - to say: okay, if your company is going to do this, what is the policy? How did you develop it? Did you develop it with civil society and outside impacted communities at the table? How are you enforcing it, and are you transparent in how you enforce it? Do people have visibility into what you're doing, to help make that accountable? You also have things like the commitment to do human rights due diligence before you move into a new region.
There are bodies like the Global Network Initiative, where it's companies and civil society together helping the companies stay accountable to their promises and then being audited from the outside.
So I surface this again just because the legislative conversations are really important, but we know that legislating is hard and sometimes very slow - speaking as an American civil society advocate, we know legislation can be slow. These multi-stakeholder efforts are really important too, and we have to think about how to weave these together to make them meaningful in protecting people.
Ian Bremmer: I think I agree that we need to look at where we've done this before. Of course, when I think about governance around social media, I'm not enormously hopeful about what that's going to look like for AI.
Now, as a political scientist, I'll take my little narrow lens for a second. I see the disinformation issue getting worse. I see it getting worse, driven by AI. I feel it around the Middle East war and I certainly see it in terms of the coming US election for 2024.
And of all of the things that the US executive order addresses, that is not near-term. So given that, what do you think can be done, both broadly speaking and then specifically - applications like watermarks, for example. Is it okay if everyone has a different one, or do we need an actual single standard for how that works? I'm interested in those sorts of things.
David Robinson: So this is something we're all - from the top down, from Sam on down - thinking a lot about: elections. Actually, we just had a full-time person begin to build a team and a programme around that, a year out from the US elections that are upcoming. And of course there are many elections around the world that are upcoming.
And we know that bad actors will use the least constrained tools that are available. So no matter what we put in our usage policies - and of course, we also open source some of our things - open-source tools that don't have usage policies are going to make a lot of powerful capability available to disinformation actors. That's clearly part of what's happening.
I think there's been a really interesting shift in the watermarking conversation. So this is like encryption, to know that this image came, for example, from OpenAI. And actually, in the voluntary commitments, the wording on this was very careful: we said the thing we need is for people to end up knowing when they're looking at an AI output versus something that's from some more traditional source, like a camera.
And there are different ways that you can do that. We can mark our stuff; we're looking at ways of doing that. We can have classifiers, so you give it a copy of something and it says: well, did this come from us or not? We're doing that too.
But one big shift in the EO was that it talked about provenance and authenticity not just for the generated stuff but for the real stuff. So, for example, what can the BBC or other news outlets do to sign a photograph and say: "We are vouching that this is a real photograph"? And I think - personal forecast here - marking the real is going to be the key that unlocks this. Because we will mark all of our stuff, or we will have classifiers; we will have provenance controls around all of OpenAI's audio-visual outputs, we've committed to that. But we know there are also going to be lots of other models and lots of other generated content, and not all of it is going to be marked.
And so I think, in the end, what we really need is a way of knowing what a human is vouching for.
Ian Bremmer: Is there a single standard for that, do you think?
David Robinson: Not necessarily. I think there can be different contexts where the vouching happens.
But it's also not just a matter of the standard for how the stuff gets marked or organized. As you said, the social media piece is the distribution. And so if I'm browsing on Facebook or Twitter, or whatever I'm supposed to call it now, they need to be paying attention to these signals and they need to create a user experience where the end user doesn't have to be a crypto nerd in order to know, okay, what's got the right stamp on it.
And so we see this not as something where AI companies or news agencies are going to solve it on their own. It's a multi-stakeholder problem.
Alexandra Reeve Givens: What David's saying is so important and really right.
OpenAI has been very thoughtful in terms of their usage policies on this, on all of these rules. But to David's point, what we really need is to boost the trusted information in the online environment. And that is not a new problem. That is something that many advocates in this space have been saying for a long time.
So just to give one very specific example, because we promised to try and be tangible on this panel: my organization did a survey of election officials across the United States and found that only one in four election officials uses a .gov web domain. A lot of them are using things like, you know, Springfieldvotes.com, and that's where all their election information goes - really easy to spoof. And that is just basic web hygiene, right? What website are you bothering to create?
So there are simple ways - content authenticity signalling is another one - in which we can boost the trusted voices putting out that important public information. And it's why we need an ecosystem-wide approach.
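As an illustrative aside: here is a minimal sketch of the "vouching" idea discussed above - a publisher signs an original image with its private key, and a platform or reader later verifies that exact content against the publisher's public key. This is only an illustration of the underlying public-key signing concept, assuming the third-party Python cryptography package; it is not the C2PA/Content Credentials standard, OpenAI's provenance tooling, or anything the panellists described in detail, and the names below are hypothetical.

```python
# Illustrative sketch only: a news outlet signs the hash of an original photograph;
# anyone holding the outlet's public key can verify that exact content later.
# Real provenance standards embed richer, standardized metadata - this just shows
# the core cryptographic idea. Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair once and publish the public key.
publisher_private_key = Ed25519PrivateKey.generate()
publisher_public_key = publisher_private_key.public_key()


def sign_image(image_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the image content with the publisher's key."""
    digest = hashlib.sha256(image_bytes).digest()
    return publisher_private_key.sign(digest)


def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the signature matches this exact image content."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        publisher_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


photo = b"...raw bytes of the original photograph..."  # placeholder content
signature = sign_image(photo)

print(verify_image(photo, signature))                # True: untouched original
print(verify_image(photo + b"tampered", signature))  # False: content was altered
```

The design point this illustrates is the one made in the discussion: signing "the real" lets downstream platforms surface a trust signal to ordinary users, independent of whether every AI-generated item is watermarked.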
Ian Bremmer: Before we close, one big question, for Lan at least, but maybe more broadly as well if we have time: in ten years' time, as AI continues to explode, do you think human beings on the planet will principally be interacting in one global digital space together, in two separate, fragmented global digital spaces, or in many, many spaces that are not particularly overlapping? What do you think is most likely?
Xue Lan: Well, most likely it is one fragmented system.
Think of the complexity of AI governance. In international global governance, we have something we call a regime complex, meaning that there are many different kinds of regimes governing the same issue. But unfortunately, these regimes don't have any hierarchical relationship; they all have some relevance to some pieces of it. And that's the situation we are in. In AI governance, we have many different institutions, many organizations and many regimes that are all trying to play together.
If the US and China can find a way to compromise, to come together and work with other institutions and with the UN, then I think we might address that problem.
Ian Bremmer: That is a key question and we're out of time, but a really good one to end on. Please join me in thanking an excellent panel.