The rise of generative artificial intelligence raises a lot of philosophical questions. So can philosophy help us make AI that serves humanity for the good?
On this episode we hear from 'applied ethicist' Cansu Canca, AI Ethics Lead at the Institute for Experiential AI, Northeastern University, USA; and from Sara Hooker, head of Cohere For AI, a research lab that seeks to solve complex machine learning problems.
The World Economic Forum's Centre for the Fourth Industrial Revolution: https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Sara Hooker, Head of Cohere For AI: As an researcher, it's so exciting to have so many people connect with the work that you've been doing for a long time and to feel excited and feel like they understand it. Because a lot of what's changed with this technology is people feel like they're actually interacting with an algorithm.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos and our special series on generative AI. And if you are excited - or daunted - by our new ability to use artificial intelligence tools - imagine what it’s like for people who have been developing the technology for years to see their work suddenly the centre of attention.
Sara Hooker: Our models are really powerful, particularly when you get to a certain scale. We have to understand how these models are being used once they're out in the open. And that's very hard right now.
Robin Pomeroy: On this series, we’re looking at the potential for good and for bad of AI and on this episode we hear from, yes, a computer scientist, but also from a philosopher.
Cansu Canca, AI Ethics Lead, Institute for Experiential AI, Northeastern University: In order to talk about how to optimise for fairness or how to have fair algorithms, we have to be able to define what we mean by fair. The definition of fairness, the understanding of in which context which theory of fairness is relevant, comes from the discipline of philosophy and moral and political philosophy.
Robin Pomeroy: Can we make AI ‘ethical’, or does it all depend on how we humans use these powerful tools?
Cansu Canca: Unfortunately, ethics and philosophy has been really missing in most of these discussions.
Robin Pomeroy: Radio Davos is the podcast that that looks at the biggest challenges and how we might solve them. Subscribe wherever you get your podcasts to get this special series on generative AI, or visit wef.ch/podcasts.
I’m Robin Pomeroy, podcasts editor at the World Economic Forum, and with this look at why ethics a key issue in the development, use and regulation of AI...
Cansu Canca: The stakes are really high and the questions are really interesting.
Robin Pomeroy: This is Radio Davos
Robin Pomeroy: Welcome to regular Davos and our special series on generative artificial intelligence. And I'm joined today by my colleague, Connie Kuang. Connie, how are you?
Connie Kuang, lead on metaverse and AI value creation, World Economic Forum: I'm good. How are you?
Robin Pomeroy: I'm very well, thank you. Why don't you tell us what you do at the World Economic Forum?
Connie Kuang: Sure. So I am currently the lead on our metaverse and AI value creation initiatives sitting within our Centre for the Fourth Industrial Revolution. But what that actually means is I'm investigating the economic and social implications of developing some of our frontier technologies. So unpacking the new value chains, business models or changes that might come to organisational behaviours and structures as we're adapting and implementing a new mix of technology into our day to day lives.
Robin Pomeroy: So the value proposition of AI, how it can be used, how companies can use it, how people can make products that people will want to use. That's really what you're interested in, right?
Connie Kuang: I mean, that's definitely one part of the equation for us. But as much as we look at all the opportunities, we also have to balance it with research and just having a common awareness around challenges and trade-offs and, you know, the unintended consequences that might come with the adoption of our technologies.
Robin Pomeroy: Right. And that's exactly what we're looking at in this series on generative AI. And it's based, if you've heard the previous two episodes, these are interviews I did with some of the people attending the World Economic Forum's Responsible AI Leadership Summit a few weeks ago in San Francisco.
So we have two interviews in this episode. The first one maybe goes a bit leftfield because it's not a computer scientist. It's not someone who's working at one of the big software companies. It is a philosopher. Her name is Cansu Canca. She is a AI ethics lead of the Institute for Experimental AI at Northeastern University in the USA. A philosopher, Connie. Why do you think we want a philosopher talking about this?
Connie Kuang:, It's really interesting because I think in the interview you'll hear that she actually flips the question around and says, You know, it's not so much about why should philosophers be discussing AI, but AI needs philosophers because, you know, philosophy is ultimately a debate around ethics and around fairness. And when we consider our systems, which are now starting to make decisions for us, the question of what is actually fair and what is biased is quite fascinating.
Robin Pomeroy: Yes, I mean, there's so many deep philosophical questions raised by AI. Some people will be, some people already are, talking to AI products as if they're real people. It just kind of raises some questions. We know the AI isn't conscious or isn't human, but it can fill perhaps some functions that a human could fill. That makes me feel a bit uncomfortable right from the outset. So there's a philosophical debate. There's lots of philosophical debates going on.
Connie Kuang: For sure. I think there's different streams of it. You can go with the existentialist kind of take that you've described just now. And I think Cancu's work really looks at the ethical side of it and, you know, what is the right decision that can be really, you know, a moral discussion.
Technology debates in general are not always wholly just about the mechanics of the technology itself, but it's really about us as a society working out our issues of, you know, how do these things change or continue the power structures and the bias and things that are baked into the system as we have them today? And technology is an opportunity to change that, but it's also an opportunity to exacerbate or amplify what's already there.
Robin Pomeroy: And she's a really interesting person because she's worked advising Interpol, the international police service, on ethics and also, I believe, the World Health Organisation. So she's an applied ethicist. It's really interesting, so taking it out of the classroom or the lecture theatre and into the real world here, and she must have a lot of a lot of interesting work on her hands, I think, looking at artificial intelligence.
Connie Kuang: Yes, I really enjoyed this interview.
Robin Pomeroy: Before we hear that, let's talk about the second interview of the day. Another very interesting person who is a computer scientist. Her name is Sara Hooker. She leads Cohere For AI, which is a non-profit research lab that contributes fundamental machine learning research that, in its own words, "explores the unknown". I asked her a bit about that, but she was very good at helping me in my continuing quest to do some jargon busting, starting with large language models, which we've already talked about in previous episodes. ChatGPT, the thing that most people have been experiencing recently, is a large language model, and she defines it by saying, well, it's like a small language model but bigger, which I thought was a not a nice way to start. So that kind of pitches the interview for us.
Connie Kuang: For me, I like how she breaks down these concepts into their building blocks and really helps us to understand that where we are today is an evolution point from research and work that's been going for quite some time.
Robin Pomeroy: Something very interesting for me being out there at this World Economic Forum conference was that these were people who've been working in it for years and it's something I've come to this really as a civilian like so many people.
And in fact, I noticed today in Axios, they announced this six months. I don't think it's six months to the day as we record this, but this is on Axios today, Six months in ChatGPT still mesmerizes and dismays, that's the headline in Axios.
And they're talking about the fact that it was only released to the public on November the 30th last year. They have an interesting statistic in there. While a majority of U.S. adults have heard of ChatGPT, only 14% have tried it. This is a survey by Pew conducted in March. That's still only a small fraction of the population, it says, but it means ChatGPT has been embraced more quickly in six months than either the iPhone or the web browser.
Connie Kuang: Yes, it's interesting. I think you're always going to get those that are the early adopters and the ones that are going to be more inclined to experiment and get into it. I think it's more understanding it in context to these other technologies, which I think is interesting. So the fact that, you know, more have used it that the initial iPhone, the initial internet, I think is what you mentioned. I think it speaks to the ease of access we have now to these things. Just having it so accessible and on the existing web, I think is what is quite interesting. And this is just the tip of the iceberg.
Robin Pomeroy: I think you and I are both fairly new to it, unlike most of people I interview who've been using ChatGPT or a version of something like that in development over a period of few years. What's been your experience using it?
Connie Kuang: I think a lot of what we try to understand in our work at the Forum is really about not so much the technology itself, but our relationship to it. And I think myself playing around with chat is a good example of, it's one thing to have the technology there, it's another to know what to prompt and what to use it for. And the other thing I was curious about was whether it could find me and what kind of bio it would have and describe me with. And what was really interesting was that it did not find much, which just from a simple Google search, you would see some things about myself or other individuals with my same name. I thought. That was kind of interesting.
Robin Pomeroy: And that's something Sara Hooker talks about in her interview as well as. As you'll hear in the interview, she asked ChatGPT to look for a famous athlete called Sara Hooker that doesn't exist, but it quite happily gives her a full biography of someone of that name. But there's no evidence. I don't think there is a Sara Hooker who is a famous athlete. She's just fooled it. And it's come back with a very confident... So it must've been confident, Connie, when it said, Oh, well, this is Connie and she's X, Y and Z. It must have sounded like it knew what it was talking about.
Connie Kuang: So Sara's interview actually inspired me to go into ChatGPT and ask it about myself and to give a bit of a biography. I basically asked it, Who is Connie Kuang? It said, you know, it's possible that I'm a private individual or a relatively unknown person, which in the grand scheme of things I am, and, or that there may be multiple individuals with my name, which is also very true. So I think in that sense, you know, maybe she actually gave a pretty balanced and neutral answer.
Robin Pomeroy: That's interesting. So it's starting to say, okay, maybe I don't know exactly who this person is, and there are these options. Because my experience using it has been it's been almost arrogant in a way. It's been, 'yeah, this is it'.
I've asked it to find what song are these lyrics from? And I knew the answer and asked again and again and again and gave me lists of songs. And none of them were right, but very confidently. And it could have said, 'I'm not sure, but it sounds a bit like the lyrics in this song', you know, that kind of utter confidence. Quite annoying, actually. But maybe, maybe as the weeks and months have gone on, maybe something's been changed in the algorithm or the fact that so many people are using it now.
Connie Kuang: Maybe a new stage of self-awareness, perhaps, that ChatGPT has now? Certainly a thought I had was I wasn't part of the algorithm before, but now I've inadvertently put myself and my name into its machine and its learning. So who knows what's going to happen next.
Robin Pomeroy: So what's going to happen next is we're going to listen to these two interviews. So the second one will be Sara Hooker from Cohere For AI. But first, that philosopher Cansu Canca.
Cansu Canca: I am Cansu Canca. I am the Ethics Lead of the Institute for Experiential AI at Northeastern University and I'm a research associate professor in philosophy at Northeastern University.
Robin Pomeroy: You came into this from a philosophical side rather than the technical side. Most people here at this event are engineers, or they work with engineers. You're a philosopher. What brought you into AI ethics then?
Cansu Canca: I would love to turn the question around because I think if you are doing an event on AI ethics, which is now sort of rebranded as responsible AI, for reasonable reasons, it cannot happen without ethics. And ethics is a part of philosophy. And I think, unfortunately, ethics and philosophy has been really missing in most of these discussions.
And the importance of that is that we need to, in order to talk about how to optimise for fairness or how to have fair algorithms, we have to be able to define what we mean by fair. And in the definition of fairness, the understanding of in which context which theory of fairness is relevant comes from the discipline of philosophy and moral and political philosophy.
So I don't think I have to explain why I'm here. It's more like why not more of my colleagues are not here is my question.
Robin Pomeroy: But is this something that drew you into AI because you could have studied and researched all kinds of areas of human activity. Is there something about AI that really grabs you? Is it because it's such a new field really and it's kind of a new human experience? Or was there something else.
Cansu Canca: Well, so I am an applied ethicist. I'm trained as an applied ethicist. I worked in ethics and health for over 15 years. I worked with the World Health Organisation, with schools of public health, medical schools. What I am very interested in is to figure out what is the right thing to do, right policy to implement when in circumstances where things really have high stakes, where things matter.
And the way that I transitioned into AI was about in 2016 through the health technologies, because I was a faculty member and in my university we had a lot of, in the medical school, we had a lot of health technologies that used AI. And as bioethicists, we were looking at the patient perspective, the physician perspective. We were not looking at what's going on inside the technology. And there are value judgements that are embedded in the technology. And if you don't understand them, you are not actually giving a comprehensive account. So from there I slowly moved more and more into AI in general because, going back to my main motivation, the stakes are really high and the questions are really interesting.
Robin Pomeroy: So what are the main ethical questions of AI? What are the kind of the headline ethical conundrums?
Cansu Canca: I think we have been discussing certain types of questions very often already and for good reasons, because they matter. So questions related to privacy are super important because of the systems becoming more and more closer and in tracking us in all of our interactions, for better or for worse. Questions related to fairness is huge, mainly because the question of fairness itself is huge. And when you think about why, for example, the question of fairness matters is because we are, by embedding AI systems in our daily lives into our society, what we are really doing is we are creating structures upon which we will live. And if the underlying structures are unfair, there is no hope for us to be able to create a fair society.
So privacy, we've talked about it a lot. Fairness, we are talking about a lot and we still don't have like a great structure, a framework to deal with it. We are working on it.
And the other thing I think is agency. How do we continue to have human agency, meaningful human agency, while we are interacting with AI systems that are either clearly explicitly there or they are sort of embedded all around us into the space around us.
Robin Pomeroy: Let me take the second one of those, then the fairness. People talk about bias being embedded into some of these systems. Could you explain how that happens and what could be the bad outcomes of the fact that there's bias in some of these systems?
Cansu Canca: So bias is in our systems because let's face it, as humans, we make terrible decisions. We are not necessarily morally great. So all of our existing data has our existing social discriminations and biases incorporated in it. I mean, that's what we mean when we talk about the problem of bias. There's also the mathematical way of talking about bias, which is not problematic necessarily.
So that's that is the big question, because on the one hand, this is how the data is. This is how we made decisions. So how do you get rid of certain correlations that we think are unjustified but are there? And do we want to get rid of all of them? Because sometimes they are also useful to understand.
So let me give an example. For example, if you were to think about the bias in law enforcement data. There are racial biases because of practices that have been racially discriminating. So if you have this data to decide the risk level of a given defendant, that is likely to cause a racially discriminatory result.
But on the other hand, you could also imagine the same data set to be very useful to understand what kind of help should be provided or what kind of resources should be provided to which communities. The existing biases, existing information within the data is also useful for different purposes. So cleaning it up and getting rid of it is both very difficult and also very related to the purpose of which you are using the data for.
Robin Pomeroy: And you've done that work with law enforcement. You've worked advising the United Nations and Interpol on ethics. And how do you find, when you arrive at something like that, or you come somewhere here, are people open? Are they eager to hear your advice or are they a bit wary of, Oh, there's a philosopher in the room and we are law enforcement officials or we are computer engineers. How are you received and how is your advice received?
Cansu Canca: It really depends on how the room is set up and how you enter the room. It it helps a lot. So when I engage with practitioners directly, it helps a lot if someone from the practice comes in and explains their own experience and how they had a problem with an a AI system having unethical outcomes. That really sets the stage for the practitioners to connect what comes next.
And I think that's completely understandable because we philosophers tend to be abstract and we have ourselves to blame for people to think of practitioners, to think of us as kind of irrelevant, kind of like, Oh, when I have time, I'll think about these issues.
But what really is, is that a lot of the decisions that go into policymaking, that go into day to day decision making while we are developing AI systems, while we are using AI systems, have ethical decisions embedded in them, whether we do them implicitly or explicitly.
If it is implicit, if it is not thought through, chances are it's a bad decision or we don't know what that decision was so we cannot be consistent in our decision making. Whereas our job as philosophers is to really analyse these decisions and make sure that they are done well as well as consistent and coherently. And we have to show to the practitioners that we can do this in a way that is efficient, that does not create a roadblock for their innovation or for their deployment of the technology. Of course, that doesn't mean that it's going to be absolutely zero cost, but there is a way that we can collaborate and make sure that we are going in the same direction in a bit with the expected speed, I would say.
Robin Pomeroy: And how can generative AI become ethical because it's not conscious. It doesn't know whether it's good or bad, but it can do good or bad things. You've spoken to a lot of people here, and I'm sure before in your work, who are experts, they know how, in theory, they know how this thing works. Have you come to some kind of conclusion of where in the system can you inject the ethics? How can you make something that's not human and that's not programmed to do a very simple single task? How can you make it ethical? It seems almost impossible.
Cansu Canca: Yes, I think that's the right answer. I think. No, I don't have an answer. I don't think we came in all the breakout sessions that I was in. I don't think we could manage to find a great solution, because I think that the truth is that it's just way too complex to figure out how to do this.
And to your question, can generative AI or AI be ethical? There is this like conceptual, the conceptual issue within that question as well that you pointed out, if it is not acting with intention, if it is not an agent, can you even say that it is ethical? Like is this even the right word to use?
At the very beginning I said that, well, it's AI ethics, but we are sort of rebranding it as responsible AI for good reasons. Well, that's one of the reasons, because I think ethical AI gives the impression as is if an AI system has the agency and has the ability to act ethically, whereas at least now we are not there yet, maybe at some point there will be an AI system that has that moral agency. But current AI systems do not have moral agency so they are not agents that can act ethically or unethically. And they also don't have more status. They are not subject to unethical behaviour themselves.
But that doesn't mean that there is still a lot to be done and understood to make sure that: one, how do we use these systems? Whatever is the system, how do we use them so that the way that we use them, the way that we put them into the world, results and minimises unethical outcomes? And the other thing is how can we make sure that the system itself is acting in ways and it has properties in such a way that it is not geared towards unethical outcomes, harmful outcomes.
And both of these are extremely difficult with generative AI because of the complexity of the AI system as well as its amazing huge ways of usability.
Robin Pomeroy: But what about the kind of policy angle? Is it possible, do you think, to set rules, from a government or some kind of regulatory agency? Or is it more a matter of each company needs to work on how it approaches those things itself?
Cansu Canca: Both. Because so it's definitely possible to set rules to create some sort of boundaries. And I think of this sort of like, not think of generative as a wholesome but like sort of like a project with a divide and conquer type of understanding. So what in the development aspect, what capabilities do we definitely not want the system to have? So that's one set of questions that we need to answer, both from the regulatory perspective and from the ethics perspective.
The other aspect of it, how should professional agencies, institutions like law enforcement, DoD, or the medical community should be using these systems? Under what conditions? In which ways? You can have rules and guidelines around that as well.
And then the directly customer facing approach. So when companies put these AI systems into the world for customers to directly engage with them, what should be their structure? What kind of disclaimers should be there? There's also all the user interface aspect because we are not engaging with AI directly, we are engaging with the user interface. So how should we structure that, design that, so that the individuals can engage, we can engage with it in a reasonable, rational manner, understanding its limitations, understanding its opportunities.
In all of these, there is room for regulation, and as we know from all the other aspects of regulation and in life, there is always, there will always be areas that are grey because regulation cannot be micromanaging and all those questions will have to happen, will have to be resolved within companies, within the developing and deploying organisations with the help of ethical decision making.
Again, you can think of this, since I come from medicine, you can think of this as we have all the medical law, we have a health policy, but we also have medical ethics because day to day decisions you don't have the answers in medical law. You can using the ethics, you can make the best case possible and you can stay within the boundaries that strong in the medical law and health law, for example.
Robin Pomeroy: Has anything happened that surprised you when you've seen the outcomes as generative is becoming more used? Have you seen or heard of some things that are that is a breach of ethics, that an awful thing that's happened as anything I think about, because a lot of journalists have tried to break the system. They said, be evil, pretend you're the devil or, you know, tell me how to murder someone. I mean, they've done that presumably as a joke or they're testing the thing. But has any of that surprised you? Or is that just run of the mill? That's exactly what you'd expect people to do.
Cansu Canca: It did not surprise me. What surprised me was - and I think it is a good way, I think it's great that people are testing it and trying all these different ways - what surprised me is the reaction that they had when the system acted in ways that are not great, because that's what we want to see, right? You want to test the system, understand where it fails, but you should expect that it will fail.
I mean, it's interesting to see that they are surprised because which seems to suggest that they have expected a perfect structure, which well, I'm not even sure, because there are so many ways that it's failed. Saying that I'm sentient. I'm not sure why it's surprising. I mean, the system can say that and it's interesting to discover that. But like, is this newsworthy?
Robin Pomeroy: Do you have any advice to normal people in terms of ethics? Do you think we need some kind of ethical guidelines when we're using this technology?
Cansu Canca: That's a good question. I wouldn't say ethical guidelines. I would say I think mo re like understanding what... as a consumer, I think you're just being bombarded by all these different ways that is characterised. It's difficult for someone who is not in the area to really understand what is the capabilities, what are the risks, what are the limitations.
So I would say it would be great if we could tell the public more clearly what is it that we are dealing with. And the risks are, to be clear, huge, but they are usually not the risks that the public is imagining or the newspapers are making them believe. We are not, it's not a Terminator situation, but the fairness question kills people. You know, like if you are not able to get healthcare because your risk is judged as much less than another person, wrongfully, that kills you.
So the fairness question is not just an abstract question. Same with, you know, this again within the fairness question, what your chances of going to jail or not being able to get out of jail if the criminal justice system keeps using faulty risk assessment tools. These are serious questions, very serious questions, but they are not Terminator and RoboCop stories. So I think we need to understand where the risk really lies so that the public can demand the right regulations, demand more from the companies.
One question that always comes is like, how do you motivate the companies to be ethical? And there's of course, that's that's a hard question because even as humans, it's very difficult to motivate humans to be ethical, forget about the whole organisation. And we rely on public reaction. We rely on public's understanding of what is not ok. So it's important for those in the area of engaging with the public to portray the risks clearly and not creating these doomsday scenarios which really have nothing to do in most cases with the real risks that we are dealing with.
Robin Pomeroy: So the risk might not be as dramatic as killer robots, but it is potentially as dangerous, but in a more boring way.
Cansu Canca: Exactly, exactly what you said. It's not the humanoid that's coming after you that's going to kill you. But it is it a very mundane risk assessment system that's just going to kill you if you don't fix this.
Robin Pomeroy: You are listening to Radio Davos and our special series on generative artificial intelligence. That was Cansu Canca, AI Ethics Lead at the Institute for Experiential AI, Northeastern University. Our next guest is computer scientist Sara Hooker.
Sara Hooker: Sara Hooker. I lead Cohere For AI. So it's a non-profit research lab that contributes fundamental machine learning research that explores the unknown. We have a full time research staff, but we also have an open science initiative, which is cross institutional and pairs a lot of researchers all over the world with compute and access to resources to develop the next generation of life language models.
Robin Pomeroy: I'm glad you've mentioned large language models. I wonder if you're able, in a nutshell, just to explain simply what is a large language model.
Sara Hooker: A large language model is a scaled small language model. So maybe we can start there.
I think that what we've seen over the last ten years and the breakthrough in language modelling has been two things. One is you pre-train in an unsupervised way on a large amount of data, typically the internet. And then what you do is you fine tune on your specific downstream tasks. So a large language model is just a surprisingly simple formula that we as researchers are sometimes very grumpy about, which is it turns out if you add a lot more parameters or you increase the size of the model, it tends to do a lot better. This is something which is almost painfully too simple. So a lot of what we're doing now is figuring out how do we make these smaller. How do we make these models much more efficient and accessible to people who want to use them.
Robin Pomeroy: And an example of that would ChatGPT. It's a large language model. For someone who is not involved in it seems unfeasible that just by loading up with examples of language, with words and sentences on a massive scale, it can then talk with you and generate original content, just because you've loaded that, you've trained it, with all this data. I mean, how is that possible? Did that ever surprise anyone? Or does that, from your point of view, look totally normal?
Sara Hooker: So it's interesting because what you see now and what you're excited about and I sense engaging with is actually the culmination of a few different separate steps. So to researchers it has been kind of a slow build, but it's connected very viscerally with with people have connected with chat technology in general.
So whether that's ChatGPT or other chat technology that is available and it feels very fluid and I think what you're describing is it feels very conversational style.
What has happened is that if you just did the pre-training, so if you just train on the internet in this unsupervised way, you wouldn't get those type of characteristics. What has been interesting about this shift is people care more about the data that you fine tune with. So that's the data that you layer on at the very end and is typically structured as a question and answer. And spending a lot of time in structuring those type of what we call annotations in the research field, that's what's giving it the magic that you're feeling when you engage with it.
And actually what's most interesting is it doesn't take much data, it just takes a lot of care. So for the last ten years, I've been very grumpy that my research colleagues don't care enough about data, and now everyone cares about data again, because so much the focus is in how do we get good examples of the type of behaviour that you're really enjoying as you engage in the model and how do we make sure the model learns that? That's called alignment, or often it's called reinforcement learning human feedback. So that's a technique that's being used there.
Robin Pomeroy: What do you mean by annotation? Could you define that.
Sara Hooker: Yes. So an annotation would be, let's say that you have an example. Like for example, 'what is the biography of Sara Hooker?' An annotation would be before you train the model, you actually write down the biography, and that's an example that you give to the model of how it should respond to that question. So that's a good example of an annotation.
Robin Pomeroy: I looked up my biography and it was largely wrong. I've got enough stuff on the internet, I've been a journalist for years, but still got a lot of stuff wrong and it said it so confidently. Why does that happen?
Sara Hooker: Wow. What did it say, out of curiosity? Who are you?
Robin Pomeroy: Mine wasn't too bad. I have a colleague who is not married and doesn't have children. And why would she ever put that out online anyway? She's also a journalist. And it said she was married for ten years and had 10 kids.
Sara Hooker: Ooh, the alternative universe!
Robin Pomeroy: Exactly. Sliding Doors.
Sara Hooker: Yes, well, this happens a lot. So for example, you can look up, try looking up, what is the bio of Sara Hooker, the ice skater. And you will get a very illustrious, perhaps biography of me and, you know, my various championships, even though I do not ice skate. I'm not you know, I'm fairly unathletic.
But what's happening, it's what we call hallucinations. So if you ask a model, these models don't have the ability to abstain from response when they don't know. So this is an active open research question: is how do we tell models when they're not sure to not answer, but also how do we factually ground?
So remember as well that there's almost two problems. So while there might be information about you online, what if you change jobs? What if tomorrow you become a documentary filmmaker? Our modela and kind of trained on a snapshot of time. So they may be trained two years ago. How do we make sure that it reflects current knowledge? And so this is another really interesting active research problem that a lot of people are currently focused on because it quizzes a lot of what I would say sharp clefts in model performance where the behaviour is highly brittle.
Robin Pomeroy: Is there an ice skater called Sara Hooker?
Sara Hooker: No. So this was truly just ordained with various trophies. I was at the championship the worldwide championship of ice skating. I did apparently win. So yeah, there's a perhaps an alternate career for me in the future.
Robin Pomeroy: Tell us a little about your work. You lead Cohere For AI, and I'm reading from your website here. 'We support fundamental machine learning research that explores the unknown'. What do you mean by the unknown?
Sara Hooker: Well, research is fundamentally about pushing forward the frontier of ideas. So often it's about spotting connections between ideas and combining things in different ways. A lot of what we work on day to day is not this generation models, but it's what comes next. So how do we develop the next generation of how to model and represent the world?
Robin Pomeroy: Where do you see opinions on how to regulate or govern AI? Where do you see it going? What needs to happen, do you think?
Sara Hooker: I think there's two sides to this. I think regulation needs to happen. I actually think I'm more of a realist. I do think, as someone who's worked on this technology for a long time, there's no denying that it's a stepwise shift in the power of the models that we have.
Actually, as a researcher, I worry a lot about this because in some ways it's so exciting to have so many people connect with the work that you've been doing for a long time and to feel excited and feel like they understand it. Because I think a lot of what's changed with this technology is people feel like they're actually interaction with an algorithm. But it's also in some, it causes concern I think for a lot of researchers that your ideas, which typically are still research ideas, are being adopted by millions of people around the world and are being used in very different ways.
Our models are really powerful, particularly when you get to a certain scale.
So when you talk about large language models, we have to understand how these models are being used once they're out in the open. And that's very hard right now. We don't have traceability. So there's things like that.
I also think a lot about access to resources, who builds the technology. So a lot of what Cohere For AI does is we try and pair industry lab resources like compute with researchers around the world who can't otherwise audit or participate in the technology.
This is a huge issue right now because remember, when we go large, we need a lot more compute. So that is both costly and prohibitive for many researchers to actually engage, and that prevents things like transparency and auditing.
So I think this is also another important area. You know, the work that we do providing compute is really a Band-Aid. We need national policies around how do national policies do five, ten years plans around compute as well as international cooperation around this.
Robin Pomeroy: Can I fire a couple of technical terms at you, get you to define them? One of them you mentioned just now. Auditing. What does that mean in this sense?
Sara Hooker: So auditing is giving transparency into your model behaviour and it's verifying model behaviour both expected and perhaps understanding if it's unexpected. But I think perhaps more interesting is what are the challenges around auditing. So one is that a lot of auditing of training data, the scale is now massive. So how do you have easy scalable techniques for surfacing parts of the data distribution you might be concerned about?
The other I mentioned is the computational access. So how do we equip auditors with the compute that they need to actually audit these models?
And so it's actually a very fascinating time, but it's a core benchmark of how do we deploy safely. We need ways to actually verify that the behaviour is what we expect and that these models able to perform in a robust way when they encounter new data in the real world.
A lot of what we, an interesting term that maybe I'll tack on there is that we now have this term 'red teaming', which is very particular to generative models. And it's where you ask groups of citizens, users, technologists, engineers, to really try and maximise the undesirable behaviour. And you're trying to see how the model performs under this type of stress testing. And that's also a really important stage now to model development.
Robin Pomeroy: How about responsible release?
Sara Hooker: Responsible release is actually a term that's up for definition. I think we're going through this interesting stage where really that notion of responsible release - there's no common standard. I think this is an area we need more work on.
Typically, the idea of responsible release is that, frankly, I think we should both have an understanding of model behaviour, but a way to also figure out when we release it is how is it being used downstream?
Because remember, the large language model is only the first part. You can use large language models, people can train it on additional data. And this is the conundrum. Can we, should we be releasing under certain licenses that try and prevent the worst types of behaviour? Should we in fact have held out auditing before we release?
So I think this is important because we don't have good traceability for these models right now. So once they're in the open, they can be used in a very variety of ways and there's not a good way to trace back what model was used. And so I would say it's very much in flux what do we mean by that.
Robin Pomeroy: So it's about where the company, when developers release a product into the market, how they do in such a way that it's beneficial and not harmful.
Sara Hooker: That's the end goal. But the mechanism, I would say, has to change because the technology has changed.
So the end goal is that we want to release this in a responsible way and we want to make sure that our models are being used... because models are tools, right? And so a lot of it is with responsible release we want to make sure that the tool is being used as expected. What's changing is that now we have new technology we have to revisit how do we ensure that.
Robin Pomeroy: My next one is something so what you just said, which was 'tracing' and also, if it's part of the same thing, 'watermarking'. What what do you mean by either of those things?
Sara Hooker: Yes. So watermarking is very specific. Watermarking is actually this, I would call it frankly an open research problem. It's like how do you detect whether generated text is from a large language model or it's from a human writing text.
Why I call it an open research problem is that watermarking in some ways is very good when the text isn't altered at all but there's already been ways to show that once you vary the text that you get from the large language model, it's harder to tell.
In general, this is a very important research problem because really what it's asking is can we trace what is generated by a model versus a human? And why that's important is that it's good for verification of trust. And it's also important in terms of understanding where models may be misused.
Traceability is this wider effort of we might not just want to trace what is the output of the models, but the models themselves. Like, if we see that model weights have been used in unexpected way, can we trace them back to the original model? Is there a unique signature and both are important for safety and for making sure models are being used as expected.
Robin Pomeroy: Thank you for those definitions.
Sara Hooker: Happy to!
Robin Pomeroy: Going back to your panel yesterday, someone asked the question, is it the models that need to be regulated or the users? Do you understand that question and that concept?
Sara Hooker: Someone brought that up yesterday. It's an interesting perspective. I believe the way it was framed, the intent of the statement, was really this question of where is the ease of verify misuse? So if you do it at the user, it's almost asking at the final port of call. So it's almost like if you observe misinformation on Twitter, is it easier to detect it there than to trace it through the whole chain of what led to that misinformation?
Frankly, from the policy perspective, I think it is easier from an implementation perspective because then you're almost asking at the last step did this cause harm.
To attribute back, it's challenging because attribution in general for this type of technology is is challenging. And then you would have to decide how what fraction did this particular step in the process contribute.
So I suspect at least the first framework is going to be centred on that, which is really using tools that already are deployed for safety in terms of how people interact with certain sites and figuring out is there something that is an abuse of the terms of service.
And part of what what I suspect will happen is just our tools need to catch up and make sure that where we're catching things that really mirror this evolution in technology .
Robin Pomeroy: Maybe someone would say that's just too little, too late. If you're developing technology that can teach itself, that has so many unexpected outcomes from, you know, the black box outcomes, it might be too late by the time you get to that end point and that actually you need to regulate the foundational model that is that is allowing that to happen. If you're kind of opening Pandora's box and unleashing things that you're not quite sure what the result of that will be.
Sara Hooker: I definitely think there is a question of how, the threshold for use of models. So right now this goes back to that debate of, you know, you have these two extremes. So you have completely open models, the weights are on the internet, you can go download them today. You have APIs. So you have to use an API. You have to give over some information, so you can kind of throttle bad use even at the API term.
There's this question of what's in the middle. So if you do want to open source for goals of accessibility, how do you make sure that that's not, that someone doesn't download the weights tomorrow and do something unintentional?
And I think that really gets back to this question of licenses. But also, will licenses be respected? Because we've already had some interesting recent, frankly, kind of use of licenses within the research community that suggests that research-only licenses are not really respected, that there was automatic torrenting of certain models and kind of a release. So I think this is an active question. I think it gets back to this idea of traceability.
Doesn't have to be all or nothing. There can be certain protocols at each stage and thinking about what the framework should be. And I personally as a researcher, I'm very much in favour of just us having richer, more precise conversations about this because some of this almost amounts to what is feasible, what can we actually standardise as best practices, as well as making sure that there's researchers in the room, as well as policymakers, as well as users, and thinking about the implications for each of those groups.
Robin Pomeroy: Sara Hooker, the head of Cohere For AI. You also heard in this episode Cansu Canca, a philosophy professor from Northeastern University in the USA,
Please subscribe to Radio Davos wherever you get your podcasts and please leave us a rating or review. And join the conversation on the World Economic Forum Podcast club -- look for that on Facebook.
This episode of Radio Davos was presented by me, Robin Pomeroy, with Connie Kuang . Studio production was by Gareth Nolan.
We will be back with more on generative AI next week, but for now thanks to you for listening and goodbye.
Filipe Beato and Jamie Saunders
November 21, 2024