Emerging Technologies

What is AI? Top computer scientist Stuart Russell explains in this video interview

Anna Bruce-Lockhart
Editorial Lead, World Economic Forum


  • Stuart Russell, Professor of Computer Science at University of California, Berkeley, and author of Human Compatible, is one of the world’s most respected experts on artificial intelligence.
  • In this video interview, he assesses the current state of AI, its perceived threat to our lives and the power of algorithms on social media and beyond.
  • The video interview is part of the World Economic Forum’s newly launched Experts Explain series, in which leading voices from economics, science and social psychology share their biggest ideas.

“If technology could make a twin of every person on Earth and the twin was more cheerful and less hungover and willing to work for nothing – how many of us would still have our jobs?”

These words from Berkeley professor and computer scientist Stuart Russell cut to the heart of our apprehension around artificial intelligence. Just how much of what we do can AI learn to do better than us? And how safe will we humans feel once it does?

In this in-depth video interview, Russell addresses our collective unease around a technological dystopia, exploring the function and scope of ‘general purpose’ AI, from the humble domestic thermostat to the attention-hungry bleeps and pings that keep us glued to our smartphones.

“If you nudge somebody hundreds of times a day for days on end, you can move them a long way in terms of their beliefs, their preferences, their opinions,” says Russell.

“Algorithms are having a massive effect on billions of people in the world. I think we've given them a free pass for far too long.”

Watch a short summary of the interview here:

Prefer audio? In addition to the video, you can also enjoy a podcast discussion between Stuart Russell, our podcast editor Robin Pomeroy and Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum.


Here's the full transcript of the interview.

What is AI?

Stuart Russell: It's actually surprisingly difficult to draw a hard and fast line and say, well, this piece of software is AI and that piece of software isn't AI.

Within the field, the object that we discuss is something we call an agent, which means something that acts on the basis of whatever it has perceived. The perceptions could be through a camera or through a keyboard. The actions could be displaying things on a screen, or turning the steering wheel of a self-driving car, or firing a shell from a tank, or whatever it might be.

And the goal of AI is to make sure that the actions that come out are actually the right ones, meaning the ones that will actually achieve the objectives that we've set for the agent. This maps onto a concept that's been around for a long time in economics and philosophy, called the rational agent: the agent whose actions can be expected to achieve its objectives. And so that's what we try to do.

And agents can be very, very simple. A thermostat is an agent. It has perception: it just measures the temperature. It has action: switching the heater on or off. And it has two very, very simple rules: if it's too hot, turn it off; if it's too cold, turn it on.
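To make the example concrete, here is a minimal sketch of such a thermostat agent in Python. The class and the specific temperature thresholds are our own illustration; Russell describes only the two rules.

```python
class ThermostatAgent:
    """A minimal agent: it perceives the temperature and acts on the heater."""

    def __init__(self, too_cold=18.0, too_hot=22.0):
        # Illustrative thresholds in degrees Celsius (our assumption).
        self.too_cold = too_cold
        self.too_hot = too_hot

    def act(self, perceived_temp):
        """Map a perception to an action using the two rules Russell describes."""
        if perceived_temp > self.too_hot:
            return "heater_off"   # too hot: turn it off
        if perceived_temp < self.too_cold:
            return "heater_on"    # too cold: turn it on
        return "no_change"        # in between: leave the heater alone


agent = ThermostatAgent()
print(agent.act(15.0))  # -> heater_on
print(agent.act(25.0))  # -> heater_off
```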

And you know, is that AI? Well, actually, it doesn't really matter whether you want to call it AI or not, because there's no hard and fast dividing line. Saying that if you've got 17 rules it's AI, but if you've only got 16 it's not, wouldn't make sense. So we just think of it as a continuum, from extremely simple agents to extremely complex agents like humans.

What is ‘general purpose’ AI?

This has always been the goal: what I call general purpose AI. There are other names for it: human-level AI, superintelligent AI, artificial general intelligence. But I settled on general purpose AI because it's a little bit less threatening than superintelligent AI.

And as you say, it means AI systems that, for any task that human beings can do with their intellects, will be able to do it, or if not, very quickly learn how to do it, and do it as well as or better than humans.

I think most experts say that by the end of the century we're very, very likely to have general purpose AI. The median estimate is something around 2045, and so that's not so long, you know; it's less than 30 years from now. I'm a little more on the conservative side. I think the problem is harder than we think.

“If technology could make a twin of every person on Earth and the twin was more cheerful and less hungover and willing to work for nothing – how many of us would still have our jobs?”

Stuart Russell

Will AI take our jobs?

This is a very old point. Amazingly, Aristotle actually has a passage where he says: look, if we had fully automated weaving machines, and a fully automated plectrum that could pluck the lyre and produce music without any humans, then we wouldn't need any workers. It's a pretty amazing thing to find in 350 BC. That idea, which I think it was Keynes who called technological unemployment in 1930, is very obvious to people. They think: yeah, of course, if the machine does the work, then I'm going to be unemployed. And the Luddites worried about exactly that.

And for a long time, economists actually thought that they had a mathematical proof that technological unemployment was impossible. But, you know, if you think about it, right…

If technology could make a twin of every person on Earth, and the twin was more cheerful and less hungover and willing to work for nothing, how many of us would still have our jobs? I think the answer is zero.

So there's something wrong with the economists' mathematical theorem. And over the last decade or so, I think opinion in economics has really shifted. It was, in fact, at the first Davos meeting I ever went to, in 2015: there was a dinner supposedly to discuss the new digital economy, but the economists who got up, and there were several Nobel Prize winners there, among other very distinguished economists, said one by one: you know, actually, I don't want to talk about the digital economy. I want to talk about AI and technological unemployment. This is the biggest problem we face in the world, at least from the economic point of view.

Will AI mean total unemployment?

I think there's still a view among many economists that there are compensating effects, and that it's not as simple as saying that if the machine does job X, then the person isn't doing job X and so the person is unemployed.

So if the machine is doing something more cheaply and more efficiently, more productively, then that increases total wealth, which then increases demand for all the other jobs in the economy. And so you get this sort of recycling of labor from areas that are becoming automated to areas that are still not automated.

But if you automate everything, then this is the argument about the twins, right? It's like making a twin of everyone who's willing to work for nothing.

And so you have to think: well, what are the areas where we aren't going to be automating, either because we don't want to or because humans are just intrinsically better? This is one optimistic view, and I think you could argue that Keynes had it. He called it perfecting the art of life: we'll be faced with man's permanent problem, which is how to live wisely and agreeably and well, and those people who cultivate better the art of life will be much more successful in this future. Cultivating the art of life is something that humans understand. We understand what life is, and we can do that for each other because we are so similar.

So there's this intrinsic advantage that we have of knowing what it's like: knowing what it's like to be jilted by the love of your life, knowing what it's like to lose a parent, knowing what it's like to come bottom of your class at school, and so on. We have this extra comparative advantage over machines. That means those kinds of professions, the interpersonal professions, are likely to be the ones where humans retain a real advantage, and I think more and more people will be moving into that area.

Can we ask AI to solve our hardest problems?

There's a big difference between asking a human to do something and giving that same thing as the objective to an AI system.

When you ask a human to fetch you a cup of coffee, you don't mean this should be their life's mission and nothing else in the universe matters, so that even if they have to kill everybody else in Starbucks to get you the coffee before it closes, they should do that. No, that's not what you mean.

You mean, of course, that all the other things we mutually care about should factor into their behaviour as well. And the problem with the way we build AI systems now is that we give them a fixed objective: the algorithms require us to specify everything in the objective.

And if you say, you know, can we fix the acidification of the oceans? Yes, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would cause us to die fairly slowly and unpleasantly over the course of several hours. So how do we avoid this problem? You might say, OK, well, just be more careful about specifying the objective: don't forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. OK, well, I meant don't kill the fish either. And then, well, what about the seaweed? OK, don't do anything that's going to cause all the seaweed to die. And on and on and on.

And in my book Human Compatible, which Kay mentioned, the main point is that if we build systems that know that they don't know what the objective is, then they start to exhibit these behaviours, like asking permission before getting rid of all the oxygen in the atmosphere. They do that because that's a change to the world, and the algorithm may not know whether it's something we prefer or disprefer. So it has an incentive to ask, because it wants to avoid doing anything that's dispreferred. You get much more robust, controllable behaviour. And in the extreme case, if we want to switch the machine off, it actually wants to be switched off, because it wants to avoid doing whatever it is that's upsetting us. It doesn't know which thing it's doing is upsetting us, but it wants to avoid that. So it wants us to switch it off if that's what we want.

So in all these senses, control over the AI system comes from the machine's uncertainty about what the true objective is. It's when you build machines that believe with certainty that they have the objective that you get this sort of psychopathic behaviour, and I think we see the same thing in humans.
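Russell's point can be put in simple expected-utility terms. The toy calculation below is our own sketch with made-up numbers, not a formulation from Human Compatible; it shows why an agent that is uncertain whether a side effect is acceptable prefers to ask first, since even a small chance that the side effect is catastrophic makes acting immediately a bad gamble.

```python
# Toy sketch (ours, for illustration): an agent weighs acting immediately
# against asking the human whether an uncertain side effect is acceptable.

def eu_act(benefit, p_ok, side_effect_cost):
    # Act now: collect the benefit, but pay the cost if the side effect
    # turns out to be something the human disprefers.
    return benefit - (1 - p_ok) * side_effect_cost

def eu_ask(benefit, p_ok, asking_cost):
    # Ask first: pay a small cost for asking, then act only if the human
    # approves, so the side-effect cost is never incurred.
    return p_ok * benefit - asking_cost

benefit = 10.0           # value of fixing ocean acidification (arbitrary units)
side_effect_cost = 1e6   # cost if losing a quarter of the oxygen is dispreferred
asking_cost = 0.1        # minor delay from checking with humans first

for p_ok in (0.999, 0.5):
    act, ask = eu_act(benefit, p_ok, side_effect_cost), eu_ask(benefit, p_ok, asking_cost)
    print(f"P(side effect is fine)={p_ok}: act={act:,.1f}, ask={ask:.2f}",
          "-> ask first" if ask > act else "-> act")
```

Even at 99.9% confidence that the side effect is fine, asking wins; only certainty about the objective, or an objective that ignores the side effect entirely, makes acting immediately look rational.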

How worried should we be about AI?

You know, AI is a technology; it isn't intrinsically good or evil. That decision is up to us: we can use it well or we can misuse it.

There are risks from poorly designed AI systems, particularly ones pursuing wrongly specified objectives. And I actually think we've given algorithms in general, not just AI systems, a free pass for far too long.

And if you think back, there was a time when we gave pharmaceuticals a free pass. There was no FDA or other agency regulating medicines, and hundreds of thousands of people were killed and injured by poorly formulated medicines, by fake medicines, you name it. Eventually, over about a century, we developed a regulatory system for medicines. It's expensive, but most people think it's a good thing that we have it. And we are nowhere close to having anything like that for algorithms, even though, perhaps to a greater extent than medicines, these algorithms are having a massive effect on billions of people in the world. I don't think it's reasonable to assume that it's necessarily going to be a good effect. And I think governments are now waking up to this and really struggling to figure out how to regulate without actually making a mess of things.

Are AI algorithms destroying society?

The problem with answering your question is that we actually don't know the answer, because the facts are hidden away in the vaults of the social media companies. And those facts are basically trillions of events per week. Trillions, because we have billions of people engaging with social media hundreds of times a day, and every one of those engagements is a click, a swipe, a dismissal, a like, a dislike, a thumbs up, a thumbs down, you name it. All of that data is inaccessible.

However, if you think about what the algorithms are trying to do, it's basically to maximize click-through: they want you to click on things and engage with content, or to spend time on the platform, which is a slightly different metric but basically the same thing.

And you might say, well, OK, the only way to get people to click on things is to send them things they're interested in, so what's wrong with that? But that's not the way you maximize click-through. The way you maximize click-through is actually to send people a chain of content that turns them into somebody else, somebody who is more susceptible to clicking on whatever content you're going to be able to send them in future. So the algorithms have, at least according to the mathematical models that we've built, learned to manipulate people, to change them, so that in future they're more susceptible and can be monetized at a higher rate.
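The models Russell refers to are not published here, but the incentive he describes can be reproduced in a toy simulation. Everything in the sketch below, the one-dimensional opinion scale, the click model and the nudge step, is our own illustrative assumption: a policy that drags the user toward more-clickable extreme content ends up earning more clicks than one that simply shows the user what they already like.

```python
import random

# Toy model (ours): a user's opinion is a number in [-1, 1]. Clicks are more
# likely when content is close to the opinion, and extreme content is assumed
# to be intrinsically more clickable once the user has been moved toward it.

def click_prob(opinion, content):
    closeness = 1 - abs(opinion - content) / 2   # in [0, 1]
    extremity_bonus = 0.3 * abs(content)         # extreme content pays more
    return max(0.0, min(1.0, 0.5 * closeness + extremity_bonus))

def nudge(opinion, content, step=0.02):
    # Each item shown moves the user's opinion slightly toward the content.
    return max(-1.0, min(1.0, opinion + step * (content - opinion)))

def simulate(policy, steps=500, seed=0):
    rng = random.Random(seed)
    opinion, clicks = 0.0, 0
    for _ in range(steps):
        content = policy(opinion)
        if rng.random() < click_prob(opinion, content):
            clicks += 1
        opinion = nudge(opinion, content)
    return clicks, round(opinion, 2)

myopic = lambda o: o                                  # show what they like now
shifting = lambda o: o + 0.3 if o >= 0 else o - 0.3   # always pull outward

print("myopic  :", simulate(myopic))    # fewer clicks; opinion stays at 0.0
print("shifting:", simulate(shifting))  # more clicks; opinion drifts to 1.0
```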

Now, at the same time, of course, there's a massive human-driven industry that has sprung up to feed this whole process: the clickbait industry, the disinformation industry. People have hijacked the ability of the algorithms to change people very rapidly, because it's hundreds of interactions a day, and every one is a little nudge.

But if you nudge somebody hundreds of times a day for days on end, you can move them a long way in terms of their beliefs, their preferences, their opinions.

The algorithms don't care what opinions you have; they just care that you're susceptible to the stuff they send. But of course, people do care, and they've hijacked the process to take advantage of it and create the polarization that suits their purposes. I think it's essential that we actually get more visibility. AI researchers want it because we want to understand this and see if we can actually fix it. Governments want it because they're really afraid that their whole social structure is disintegrating, or that they're being undermined by other countries who don't have their best interests at heart.

Is it possible to fix social media’s AI problem?

With social media, this is probably the hardest problem, because it's not just that it's doing things we don't like; it's actually changing our preferences. That's a failure mode, if you like, of any AI system that's trying to satisfy human preferences, which sounds like a very reasonable thing to do. One way to satisfy them is to change them so that they're already satisfied. Politicians are pretty good at doing this, and we don't want AI systems doing it.

But it's a sort of wicked problem, because it's not as if all the users of social media hate what they've become. They're not sitting there saying: how dare you turn me into this raving neo-fascist? They believe that their newfound neo-fascism is actually the right thing, and that they were just deluded beforehand. And so it gets to some of the most difficult current problems in moral philosophy. How do you act on behalf of someone whose preferences are changing over time? Do you act on behalf of the present person or the future person? Which one? There isn't a good answer to that question, and I think it points to actual gaps in our understanding of moral philosophy. So in that sense, what's happening in social media is really difficult to unravel.

But one of the things I would recommend is simply a change in mindset in the social media platforms. Rather than thinking, OK, how can we generate revenue, think: what do users care about? What do they want the future to be like? What do they want themselves to be like? And if we don't know, and I think the answer is that we don't know…

I mean, we've got billions of users, all a little different, and they will have different preferences; we don't know what those are. So think about ways of having systems that are initially very uncertain about the true preferences of the user and try to learn more about them, while respecting them. The most difficult part is that you can't just say: don't touch the user's preferences, under no circumstances are you allowed to change them. Because merely reading the Financial Times changes your preferences: you become more informed, you learn about all sorts of different points of view, and then you're a different person.

And we want people to be different people over time. We don't want to remain newborn babies forever, but we don't have a good way of saying, well, this process of changing a person into a new person is good, right?

We think of university education as good, or global travel as good; those usually make people better people. Whereas brainwashing is bad, and what cults do to people is bad, and so on. But what's going on in social media is right at the place where we don't know how to answer these questions. So we really need some help from moral philosophers and other thinkers.


Which principles do we need to stick to in order to build a better future with general purpose AI?

There are three principles. The first one is that the only objective for all machines is the satisfaction of human preferences. Preferences is actually a term from economics: it doesn't just mean, well, what kind of pizza do you like, or who did you vote for? It really means your ranking over all possible futures, for everything that matters.

So it's a very, very big, complicated, abstract thing, most of which you would never be able to explicate even if you tried, and some of which you literally don't know. I literally don't know whether I'm going to like durian fruit if I eat it. Some people absolutely love it, and some people find it absolutely disgusting. I don't know which kind of person I am, so I literally can't tell you whether I'd like the future where I'm eating durian every day. So that's the first principle: we want the machines to be satisfying human preferences.

The second principle is that the machine does not know what those preferences are; it has initial uncertainty about human preferences. We've already talked about the fact that this sort of humility is what enables us to retain control. It makes the machines, in some sense, deferential to human beings.

The third principle really just grounds what we mean by preferences in the first two principles, and it says that human behaviour is the source of evidence for human preferences. That can be unpacked a bit, but basically the model is that humans have these preferences about the future, and those preferences are what cause us to make the choices that we make. And behaviour means everything we do and everything we don't do: speaking, not speaking, sitting, reading your email while you're watching this interview.
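One standard way to read that third principle is as Bayesian inference: the machine maintains a belief over what the human wants and updates it from observed choices. The sketch below is our illustration of that idea; the two hypotheses, their utilities, and the "noisily rational" choice model are assumptions made for the example, not content from the interview.

```python
import math

# Two hypotheses about what the human prefers, each assigning a utility
# to two observable actions.
hypotheses = {
    "prefers_quiet":  {"read": 1.0, "party": 0.0},
    "prefers_social": {"read": 0.0, "party": 1.0},
}

def likelihood(action, utilities, beta=2.0):
    # Noisily rational ("Boltzmann") choice model: the human usually, but
    # not always, picks the higher-utility action.
    exps = {a: math.exp(beta * u) for a, u in utilities.items()}
    return exps[action] / sum(exps.values())

def update(belief, action):
    # Bayes' rule: reweight each hypothesis by how well it explains the action.
    posterior = {h: belief[h] * likelihood(action, hypotheses[h]) for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = {"prefers_quiet": 0.5, "prefers_social": 0.5}   # initial uncertainty
for observed in ["read", "read", "party"]:               # observed behaviour
    belief = update(belief, observed)
    print(observed, {h: round(p, 3) for h, p in belief.items()})
```

The machine never becomes certain; each observed choice merely shifts the belief, which is exactly the humility the second principle calls for.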

And so the upside potential for AI is enormous. Going back to Keynes: yes, it really could enable us to live wisely and agreeably and well, freed from the struggle for existence that has characterized the whole of human history.

You know, up to now, we haven't had a choice, you know, we have to get out of bed, you know, otherwise we'll die of starvation. And in the future, we will have a choice. I hope that we don't just choose to stay in bed, but we will have other reasons to get out of bed so that we can actually live rich, interesting, fulfilling lives.

And that was something that Keynes thought about and predicted and looked forward to, but it isn't going to happen automatically.

There are all kinds of possible dystopian outcomes, even when this golden age comes. But whatever the movies tell you, machines becoming conscious, deciding that they hate humans and wanting to kill us is not really on the cards.

Watch the full series of Experts Explain here.
