By 2027, businesses predict that almost half (44%) of workers’ core skills will be disrupted. From AI tutors to lifelong learning schemes, what approaches and opportunities hold the greatest potential to close gaps and prepare people for tomorrow’s economy?
This session has been developed in collaboration with CNBC.
This is the full audio from a session at the Annual Meeting of the Global Future Councils 2024 in Dubai on 17 Oct, 2024. Watch it here: https://www.weforum.org/meetings/annual-meeting-of-the-global-future-councils-2024/sessions/skills-in-the-age-of-ai/
Nela Richardson, Chief Economist and Environmental, Social and Governance (ESG) Officer, ADP
Dan Murphy, Anchor and Correspondent, CNBC
Stuart Russell, Professor of Computer Science, University of California, Berkeley
Jo O'Driscoll-Kearney, Chief Learning Officer, Majid Al Futtaim Holding
Abdallah Abu Sheikh, Founder and Chief Executive Officer, Astra Tech
Check out all our podcasts on wef.ch/podcasts:
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Dan Murphy: Well, ladies and gentlemen, a very warm welcome. First, I wanted to say welcome to all of you in the room. Thank you so much for being here today.
I'm Dan Murphy from CNBC, and I also wanted to say thank you so much to all of you joining us here on the livestream as well.
So, your next conversation is called "Skills in the Age of AI." And we're going to dive right into this with an expert panel joining me up the front.
I'm with Abdallah Abu Sheikh. He's founder and chief executive officer of Astra Tech. Abdallah, welcome, great to have you.
Jo O'Driscoll-Kearney is chief learning officer of Majid Al Futtaim here in the UAE. Jo, thank you for being here as well.
Dr Nela Richardson is chief economist and ESG officer at Automatic Data Processing. That's ADP. And Stuart Russell is professor of computer science at the University of California, Berkeley.
So, we really do have a deep bench of talent here to take a deep dive into this really critical topic that, no doubt, all of you have also been absorbing and working with through the course of this year and will no doubt be working with for many, many years to come.
So, the intersection of artificial intelligence and human skill is perhaps more critical than ever. As we see these AI technologies that no doubt you've all been working with, starting to reshape industries and redefine job roles, we're going to talk about what skills are essential in this new era.
So, Skills in the Age of AI brings together thought leaders to explore just that. We're going to take a look at the implications of AI on employment, for example, in education and training and beyond.
So, let's kick it off with an opening round question for perhaps all of you. And I'd like each of you to take a go at this question first of all. How do you see AI impacting job creation and disruption, maybe even destruction in the next decade? Abdallah, did you want to kick us off?
Abdallah Abu Sheikh: Sure. Thank you for having us. I think the most common question we get asked now is: am I going to lose my job? Is a robot going to be doing what I do tomorrow morning? And there are a few angles from which to consider this. This is not new, right? This innovation curve has happened before, multiple, multiple times.
And I always like to use the example of Oxford University professors protesting against calculators when calculators first came out and they thought mathematicians were going to be irrelevant.
But at every industrial shift, or when we see a new technology like this come out, what happens is it divides the workforce into two: operators and people who fall beyond the operations line. Which means that when email was invented, mailmen did not just disappear; we still have mailmen.
They just operate at a lower level in terms of income and compensation, and so on and so forth. While other people escalated and became mail operators or technology operators, and that sort of, you know, creates that divide.
And I think AI is going to do very much the same. In its first phase, AI is mostly going to take over the repetitive jobs that are more labour-oriented (AI is very good at the things we do through repetition), and that will leave people with a lot more time to do other things, where I think people are going to have to create another class of jobs, and so on and so forth.
So this is my 30 seconds of thoughts on it.
Dan Murphy: Yeah, really fascinating. Jo, what's your take? How do you see AI disrupting, destroying and creating the jobs of the future?
Jo O'Driscoll-Kearney: Yeah, I love this question. I think there's a narrative that's perpetuated that AI is here to take all our jobs. And I think the way we see it at Majid Al Futtaim anyway, is it's really here to elevate the human.
And that it's not AI that's going to take our jobs. It's going to take the jobs of people who don't know about AI. And I think it's very easy to get caught up in the hype cycle of AI. But the reality is we have to have an epistemic humility about this. We don't know exactly where it's going.
And as a result, I think job creation will really focus on those people who are able to adapt, change and pivot, who can keep up with the pace AI is moving at, and who have that humility, as I mentioned, to know that we simply don't know where some of this might go, and to be ready for where it brings us.
Dan Murphy: Very interesting. Nela, what's your take?
Nela Richardson: First of all, thanks to the audience; I can tell the interest. And I agree with these comments about AI's ability to change how we work. But I think where AI shows up the earliest is with companies.
Most companies are trying to figure out how to manage their human capital. It's their most important asset. It's also their most expensive asset. And I think what AI does is change the way companies view human capital management.
And maybe even if we get it right, they move from management to development. At ADP, we operate in about 140 countries and we're really keen on how companies manage their workforce.
And looking at 53 million workers over a span of four years, we found something. I want you to just think in your head: how many of them were actually upskilled by their companies?
Okay, picture that number. Ask yourself if it's double digits or not. And then I'll give you the answer: less than 4%. Less than 4% of these 53 million workers were upskilled by their company in two years.
So there's an enormous opportunity, even in the absence of AI, to lead to a skilled workforce and a workforce that has continuous learning. And so I think what AI does, first and foremost, is change how companies view their talent and hopefully helps them invest in that talent.
Dan Murphy: Okay. Hold that thought. I wanted to come straight back to that. But Stuart, first, what's your view?
Stuart Russell: So I think the answer depends on whether you're looking at AI as it exists now or as it will exist over the next decade. And there are plenty of experts who are predicting that by the end of this decade, six years from now, we will have AI systems that exceed human capabilities in every dimension.
So that literally means that there isn't a job that the AI system won't be able to do cheaper and better.
And interestingly, for a long time, economists would give you a theorem with Greek letters saying we can prove that there's no such thing as technological unemployment.
But then you point out, OK, well, suppose I can make a twin of every person and that twin is better at their job and never gets hung over and is willing to work for free. Right. How many of you would still have your job?
And then the economists say, "Ah, I see what you mean." Yes, there might be more employment; it just wouldn't be employment of humans. And so if you're making policy, it's really tough, because I would say, you know, roughly half of experts think it's going to happen in the next decade, that we will have this all-powerful technology.
And another half, including myself, think it's going to take quite a bit longer than that. And in fact, we're overestimating the capabilities of the technologies that exist at present. And the consequences of scaling those technologies are not as profound as many of the proponents think.
But we are already seeing significant impact. Surprisingly, I think Abdallah is right: a lot of the repetitive jobs, the jobs where you hire people by the hundreds or the thousands as interchangeable robots, those jobs are going to be done by real robots.
But we're also seeing impact on creative industries. So graphic artists, freelance writers: there are clear measurements we can make on the online marketplaces, the sort of exchanges where they acquire work, and you can see the prices going down on those marketplaces because people are able to use AI to do those jobs in a tenth of the time.
And you know, when I give talks to the general public, for parents it's not so much "are they going to take my job?" It's "what are my kids going to do? What should I teach them? What courses should they take?"
And in the near term, there will be demand for AI engineers and robot engineers. But in the long term, it's got to be interpersonal skills. It's going to be a very different kind of economy when the production of the basic wherewithal of life is turned over to machines.
And so think about what that means in terms of almost everyone being self-employed, the kinds of education you need to be good at interpersonal roles, and how we succeed in delivering high value in those roles, given that we have, you know, 200 years of not doing scientific research on how to live a good life or how to help someone else live a good life. So there's a lot of work to do indeed.
Dan Murphy: And maybe we can talk about this as a panel as well. But perhaps you could also suggest that we are coming into a future where there is going to be a premium on human creativity and on emotional intelligence as well. But Stuart paints this really interesting picture of a more automated future, maybe a more automated workforce moving forward, to pick up on your point.
What are the skills that are going to be mission critical for workers to acquire and for businesses to develop in order to combat that increasingly automated future?
Nela Richardson: First of all, I'm going to be very candid. It's uncertain. No one can predict the next ten years of AI advancement. In some ways, AI is like driving a car. My car has tons of AI in it. I have no idea what it does, really. I barely use it. I still use the same driver's license I've been using since I was 16.
And for many workers, that's how they'll experience AI. It will just be part of their normal tasks. But there is this sense, right, that soft skills are going to matter a lot. If we think about generative AI specifically, about content creation, about doing more creative work, more collaborative work, more digital work, work that is more boundless in terms of geographies,
then cultural awareness is going to matter. Sensitivity to humans is going to matter. Being able to train the AI, because the best AI has a person at the centre of it, a human who is giving it feedback; whether it's the human's data or the human's interaction, that feedback mechanism is important for AI's development. Even AI needs a human hand, right?
So the people who have broad skills, deep skills and expertise that are transferable, those are the ones that are going to persist.
So rather than thinking about us all becoming robotics engineers (thankfully, that's not going to have to be the case), it's more: how do we go from occupation orientation to task orientation, regardless of the occupation? And how do we have the agility to move across and up, as opposed to just up in one particular expertise?
And how do we communicate and collaborate so we can actually capture the benefits of AI and mass extend it across our different business operations?
And this is not just an easy solution. We know that AI's been around for a really long time and these advancements are long in the making. And yet globally, we've seen productivity slump way before the pandemic. So it's not guaranteed that the match between technology and workers will actually benefit the global economy in a way that raises the standard of living.
What guarantees that is that we make sure that that match happens. And I think that's why companies are going to be so very focused on matching their tech with their people.
Dan Murphy: So, Jo, maybe give us a Majid Al Futtaim perspective on this as well. Obviously, a very important domestic business here in the UAE, a large local employer as well.
How do you update your training programmes as chief learning officer to adapt to this AI-enabled future? And what about when it comes to lifelong learning? People would like to join Majid Al Futtaim and work there for a very long time. How do you enable lifelong learning within the company?
Jo O'Driscoll-Kearney: Yeah, it's such an interesting question, because we recently deployed AI Academy and one of the modules was out of date within a week. That's how quickly this thing is moving.
So, we've really had to think about very fresh renegade recusant ways of deploying learning. And we're really trying with AI now not to just meet the learner where they are but meet them where they can be.
So we're deploying tools like, I don't know if you've heard of this, a company called Arist (I promise I don't work for them). But they were actually, you know, again, a crisis innovation, born out of crisis. They were born out of trying to get students to access learning during the Yemeni conflict, and now they're a Silicon Valley start-up.
But they nudge out text message-based learning. So that's a wonderful opportunity for us, from frontliners all the way up to CEOs, where we can really meet people where they can be.
We know that people can then learn in the flow of work, because we can issue messages or updates via Teams, via WhatsApp, via text message.
So that's one way that we're really doubling down on trying to use AI to enable learners in a much more kind of user friendly way.
And the reason we're doing things like this is we're seeing more and more, especially with the younger generations, that they're coming into the workplace with what I call liquid expectations, meaning that the way in which they want to deal with the LMS [learning management system] and deal with the learning function is exactly how they want to be treated when they're at Apple or Amazon.
They expect that seamless experience: two clicks, 10 seconds. That's all the attention economy is allowing for now.
So we simply are having to pivot away from these three-day classroom-based events. So what we're doing is trying to get people who are ready and ripe for this before they even come in the door.
And we're doing things like assessing people's learning ability, because, you know, you asked this great question about what skills we need to focus on. I really think the key critical one is: how willing is someone to learn?
How well can someone learn? Do they know and build in the principles of learning science within their everyday work (spaced repetition, interleaving, etc.)? And using AI, you can assess for this now, so it's done in a very unbiased way.
Dan Murphy: So fascinating. Abdallah, can you speak to that as well? Give us a tech perspective on this, too, particularly when it comes to the things we're discussing: skills required, but also what you look for when you're hiring someone as well.
Abdallah Abu Sheikh: I might get into a bit of trouble for this, but I have a very, you know, extremely concrete view on this, because I write code every day. This is what we do. And maybe one thing I like to give people as a disclaimer: we do this in our business every day.
We tell them it's nobody's job to keep your job, especially not your company's. Companies want to do things faster and cheaper, and they want people who write code who don't get sick and take days off. And that's going to remain the fact of the matter for a very long time from now.
So there is this sense of entitlement, maybe, that I see with a lot of people: it's my company's job to keep me employed, even though AI is going to be better than me. No. As humans, and this is why I like to say things are repetitive, our most important skill historically has been survival. Right? We survive.
That's why we all exist in the shapes and forms that we do today and within the AI age, it's still survival and it's still going to be survival of the fittest. Now, the learned have not been the best survivors and they're not the most successful. And you can see this still in today's world.
The people who fared much better, say financially or in the business world, are not the most learned people, but they're the best survivors. They're the people who managed to actually adapt and work around, you know, the changing conditions much faster than everybody else.
I tend to disagree that there is ambiguity about where AI is going to take us. At least in my line of work, it's extremely clear where we are going to be 10 years from now, if AI delivers on its promise. We still have a big question mark on when that is going to be: in five years? In 10 years? Are we overestimating the power of compute? Are we underestimating it? But it's coming.
We know it's coming, and it's going to come in a way where there is going to be a big eradication of a certain layer of employment, a certain layer of work that people are just going to be very inefficient at doing. And AI is going to be just so much more efficient at doing it that there will be no reason for a human in the loop to exist anymore.
So, my thinking is about the best skill, or what people need to learn. And this is another thing I always say that's very troublesome: the better technology gets, the worse formal education becomes, because as technology gets better and better, the formal education institution just lags further and further behind.
I can ask everybody here in the room how much of their formal education they've used in their daily jobs, and it's not much for most people who are not super, super technical in their jobs.
So with AI, this is not the kind of education that you're going to be getting in university. And this is not a four-year programme that you're going to go into and come out of ready for the AI age. As was just mentioned, a module took a week to become outdated.
Imagine spending four years in university studying something; by the time you graduate, it's history. It's ancient science.
So I believe the formal education institution is going to have to reconfigure itself. As for how it impacts us personally at Astra: the hiring age has dropped significantly. That's something I have found very interesting.
So we really are starting to look at people in their very early 20s, maybe, you know, 19, 18, joining and becoming the most efficient, the fastest-producing developers that we have. And this is because they're all self-taught.
They haven't followed any set of instructions that promised them: just finish this four-year course at Harvard and you'll be the smartest guy in the room. None of that.
So I think this is a very significant change that is happening, and I think it's going to have an impact all the way across the board. So if there is one thing I can speak to with full confidence, it's that the focus now needs to be not on how to engineer the robotics, but on really understanding what this robot, what this AI capability, is going to be able to do.
And just very quickly, to finish off with an example: I was asked by one of the most prominent bank CEOs of our region, what are banks going to be?
You know, what is the AI bank of the future going to look like? And I'm like, I don't think the AI bank or the bank of the future is going to be selling bread; it's still going to be doing banking services. But it's going to be able to write you a loan in 30 seconds instead of three months, and it's going to use one person instead of 300 people.
That's what the bank of the future will look like. And you can apply that across the board to almost every other industry.
Dan Murphy: OK. I might get Stuart to respond to that, first of all. And so as an extension to what Abdallah has just said, are academic institutions getting it right?
Because we're talking about the length of time it takes to acquire maybe a more traditional degree. Are institutions responding fast enough to the pace of change that we're seeing in this technology?
Stuart Russell: Absolutely not. And I think this is one of the reasons why we should have started planning for this 20 years ago when we could start to see on the horizon that these kinds of changes were coming down the pipe because academia is notoriously slow to change.
So Oxford University, where I did my undergraduate degree, first discussed having a geography major in 1851, I think, and it took them 125 years to get through the processes and finally agree that they should have a geography degree, by which time satellites had made most of traditional geography completely irrelevant.
And so think about the future. I agree with Abdallah: it is going to come down the pipe. We don't know when, but we will have general-purpose AI at some point, and almost certainly we will have it within the time frame for academic change, which we might say is 20 to 30 years to make serious change in what kinds of degrees we offer, how we teach them and where we get the content.
If we're going to be training people for all kinds of interpersonal professions, then we need the learning science, right? The science of human psychology. And no offence to any psychologists or learning scientists in the audience but it's a more dismal science even than economics.
We just really don't know very much about what makes people tick, why some people respond to some kinds of education well and others require totally different kinds of teaching.
And that view, that the human is not a commodity but is, you know, 8 billion individuals: in some sense, the science needs to know how you map from the characteristics of the individual to how they learn, and what the best way is to help them learn.
It's going to take decades to develop that science. And again, we should have started 20 years ago. Instead, we devoted hundreds of billions of dollars to the cell phone and other kinds of innovation, which has been a mixed blessing.
So, in the university right now, we're not even able to cope with the impact of large language models doing people's homework. And this is even worse in high school. One of my colleagues at Berkeley has a rule that says: OK, you have to use ChatGPT or some equivalent engine to write your essay.
But if you turn that in, you get zero. You're graded on your ability to improve on the output of the AI system. And that sounds like a pretty interesting and enlightened approach. But if you think about applying that in high school, 90% of kids in high school cannot improve on the output of a large language model.
And then you think about the implications of that, right, for motivation. The calculator analogy is often used but calculators automate precisely the part of mathematics that is brainless. Right.
How many of you actually understand what's going on in every step of a long division calculation that you do by hand? Right. You remember those things. You put this big like bar thing and then you put arrows and you move digits around.
I didn't understand what the heck was going on. And you make a mistake. You don't even realize because you don't understand it. Right. It's a brainless recipe and the calculator automated that.
But understanding a question and formulating an answer, an essay: that's the essence of learning to think. So if you automate that, you're just cutting out exactly what we want human beings to learn how to do.
Dan Murphy: And I mean, you could also ask AI to think critically for you, though, couldn't you?
Stuart Russell: You could. I mean, here's what I would really like, actually. Even if we fail to deliver the superhuman AI that a lot of people are promising, I think the technology we have, adapted in the right way, could be the killer app: it could deliver personalized education, at least through the end of high school, where it acts as a tutor. Not as something that does your homework for you, but something that says, OK, let's think about that.
You know, what do you think about this idea? Where could we find answers? What kind of research could we do to look into this question? And that could have enormous value for the human race because there are many countries that cannot afford a K through 12 education system. There are a number of countries where they can't even afford K.
So, literally there is no schooling available to kids in some countries unless they can afford to pay for it. And you know, through a cell phone, which is still not universally available but widely available, you could deliver a quality of education that would exceed what I could get at the best schools in the UK or the US.
And we're not doing it right now because the financial incentives to work on education as opposed to advertising are very small. And it's a very complicated place to try to make money.
Dan Murphy: Just quickly, while we're on this topic, one more question for you on this in particular. If you were enrolling in an academic institution, in a university, at the start of the new year, what course would you take and why?
Stuart Russell: Oh, to be 18 again. So I think probably something in the human sciences: psychology or child development or something like that, because I think in the long run it's the human sciences that we're going to depend on to have a functioning society.
I would like also to say the humanities because, you know, a huge part of what it means to be a human and to have a rich life depends on art and literature and that type of learning.
I think the humanities have taken a bit of a detour in the last few decades but they could get it together and contribute to this process enormously.
Dan Murphy: Alright. Fascinating. Nela, I'll get you to respond to that as well, because one thing you think quite deeply and also quite critically about is this question of distrust.
So we see AI and these LLMs providing us with answers, sometimes with citations, often not. Why should we trust AI?
Nela Richardson: It's a great question, and it's why a philosophy degree might come in handy in the new world.
I think judgement is going to be a critical feature of what it means to be human in AI, because not all of AI is swell. We know that there are hallucinations, and if AI does develop in the absence of human intervention or human influence, then how do you trust that what you're seeing is actually real? That's going to be a key question.
And how do you trust your universities to deliver the skills that you need to advance? If you look around the world, we don't really have a worker shortage, we have a skills shortage. And it's not in tech per se.
What you see in tech is that the skills required are becoming much more narrow, specific and focused. So for the 100,000 computer scientists who graduated in the US in the last year, you know, those skills are quickly being eradicated.
But what prevails is the care economy, the ability to build a cabinet or be a great plumber. Right now, AI is not really addressing the fact that I need to fix my toilet. And so there are skills in the workforce that are still going to be really, really needed.
And the question is: how do you get workers positioned for those skills, and how do you get workers to transition to other skills, especially as we know that skills are being rapidly created and destroyed? And getting AI to actually build jobs that humans can't do, to do tasks that we can't do, as opposed to replacing tasks that we can do: those are going to be the questions of tomorrow.
And so that really does boil down to trust because your university may not be your last educator, your collaborator may be AI, it may be your colleague, it may be someone across the world from you, it may be your firm.
And so what we've done is we've looked at this question of trust. Over the last decade, we have asked about 500,000 workers around the world: what makes you trust the companies that you work for? Well, it comes down to a few things.
One, I trust my manager, it's basic human relationships. I trust that my team has my back. I see myself represented in leadership. That's a big, important thing. And so if you can boil down this measure of trust, you can actually nurture trust.
Why does that matter? Because what you're asking workers to do is trust that the skills that are being created, the tasks that are being replaced and the new tasks that are coming to the fore will make them have a better standard of living.
Why should they trust that? And what happens if they don't trust? Then they reject the technology. When you roll out that technology as a company, as an NGO, as a university, you know what you'll get: an eye roll in return.
I'm not doing that. I'm going to block that technology. I'm going to prevent that technology from coming to my state. I'm going to rally against that technology because I fear it.
And so we have to give workers something to trust. And I think that starts at this interpersonal level, in communication. How you roll out that technology is going to be important. The adoption of that technology by the workforce will lead to proficiency with the technology.
It's not like this tech can occur absent a business case or a market or a customer. It has to have a purpose. And people are what give technology a purpose.
Dan Murphy: Excellent point, really, really important point. Jo, would you agree with that as well?
Jo O'Driscoll-Kearney: Yeah, I mean, you know, we've been talking a little bit about the power of critical thinking and judgement. And something I've started to do is ask ChatGPT, for example, as one tool, to disprove what it's saying to me and find evidence contrary to what it's offering me.
Because you have to deploy that critical mindset; you know, there are so many stories in the press, even in litigation, whereby people have gone forward with things that simply aren't correct.
And, you know, it makes me think of the fact that we almost see AI in human capital sometimes as being this utopia: it's going to debias things for us and enable more equity in the workplace. But the reality is, who is training these AI systems?
Well, it's us. And we love to think, when we're doing DEI training or inclusion and belonging training, that it's kind of everyone else that needs it; I'm not really biased myself. But if you have a brain, you have bias. And we are training these systems with bias.
And to come back to one of the questions you asked me earlier: I think, in addition to learnability, the ability to be metacognitive, to step back and to think about how you think, is going to be more important than ever in this age of AI.
Dan Murphy: Also very, very true. Abdallah, to flip it over to you: one of the other things I'm sure you're also thinking about right now is similar to my question to Stuart. If you had your time again to start a business from scratch, where would you begin?
Because, look, your business is well-funded, backed by G42, well-established in the country now. If you could do it all over again, where would you start?
Abdallah Abu Sheikh: I thought you were going to ask me if I would go to a good school again. I wouldn't; I wouldn't go to school if I had the choice to do that again. But if I were to start a business right now? I think the fundamentals of business, at least in technology, have completely changed.
So it's atomic teams now. If we look at the past, in order for you to be a $1 billion business, you had to have tens of thousands of people. Now the focus is: can you do it with two people? Can you do it with three people? Can you do it with 10 people?
And I know when we're looking at an acquisition or at a company, the fewer people the better. So: atomic teams, and things that don't really need a lot of people to operate. That's one thing.
And I think phase one of AI, at least what we see for sure now (phase two, superintelligence, general intelligence and what have you, is still yet to be tested and proven), is going to be the most repetitive, boring tasks.
The things that we as humans do that we don't want to do: move all those chairs from here to there, which is something that someone does 100 times a day and doesn't really want to be doing, and it's a brainless task.
It doesn't really take a lot of cognitive ability to understand that you need to move a chair from left to right 1,000 times in sequence.
AI is very, very, very good at automating those things: call centres, stuff like that. So that phase of AI is definitely where the ripest fruit, the lowest-hanging fruit, is going to come from, and where you'll see a lot of companies really making massive financial gains, at least in the next two to three years. And then it's going to shift again.
And then phase two is going to be a bit more complex: things that need a little bit of critical thinking, things that need a little bit of a thought process and a little bit of intelligence to them.
And then eventually, within 10 years or maybe a bit longer, depending on how long it takes us to get there, it will be actual general intelligence, where the AI will be able to communicate with different AIs and do things accordingly.
So where I would focus for the next two or three years is: what are the most repetitive, most boring tasks, the things that people don't really want to be doing, that need to be automated? And maybe a big hint for everyone: tax is something that nobody really wants to do, and I think AI can do a very good job at it.
And maybe I just want to touch on your trust point a little bit. I always hear this, and I don't know if it's just me that's cynical, but humans don't trust each other. How are we expecting to come up with a system that everybody trusts? There's obviously bias. Now, the question is not: do I trust the technology or not? The question is: whose bias do I trust?
Because we as humans, you know, we agree on certain things, but if there's anything history tells us, it's that there's a lot we disagree on as well.
And there's a lot of distrust in the human form and in human nature. So why is the expectation that some technology is going to come out and we are going to unanimously trust it? That's never going to happen; that's almost impossible to achieve.
So this question of do I trust or do I not trust? We should have decided that 20 years ago, before we started using phones that know everything about us.
It's a bit too late for the trust question right now, and I think we're going to be sort of forced to trust whoever achieves it faster in this world. I know a lot of people don't trust it, but it's the available option; a lot of people don't trust the iPhone, but it's the available option.
And I think commercially that's what's going to happen: whoever hits the market faster is going to grab more market share. And whether we trusted it or not is going to be a question for the philosophy books. It's not going to have a very big practical impact on the economics of what AI will look like. At least that's what I think.
Nela Richardson: I would have to disagree with that.
Stuart Russell: Me too.
Nela Richardson: There is an argument to be made that AI actually makes trust harder, because we don't know what's real. The way that information is disseminated is actually becoming less trustworthy than it has been in the past, and credentialed sources are no longer at the forefront of where people access their information.
So how do we reverse that? When I think about what the future holds, it's not like AI just happens to us, that we are passive agents in how AI develops. We're actually very active agents in the development of AI.
So I think that's an important point too, and that's why my focus is on people in the workforce, because that's what determines the future. And if we want a future in which we can trust what we see and where knowledge is important, then we have to act now.
So I will just push back a little bit on the idea that trust isn't important and that this is predetermined. Nothing in the human experience is predetermined. And so we really need to act now to get the future we want, not the future that we deserve because we didn't act at all.
Dan Murphy: Did you want to weigh in on that as well, Stuart?
Stuart Russell: Just a little, yeah. I mean, I think this issue of trust is important in the critical paths for companies. On the boards of companies that I sit on, all the big companies are saying: how can we use this new technology?
And I can tell you, having lived through this in the 80s, the same thing was happening in the mid-80s with what we called expert systems, a type of AI. Thousands of companies set up divisions to figure out how to use expert systems in their operations.
And what I'm seeing now is that the trust issue is preventing companies from putting anything AI in the critical path. So, for example, if you're an insurance company, would you allow an AI system to have a conversation with a customer leading to a transaction? No, right.
I mean, just look at sycophancy, where the customer says, you know, could I have a 99% discount? Sure, absolutely. In fact, we'll send you money along with the policy. Right. And this has happened: there was a GM chatbot that sold someone a truck for a dollar. So you cannot do that.
You know, Air Canada was actually in court because their chatbot told a customer that they could take a bereavement flight to go to a funeral and then apply for a refund afterwards. And Air Canada said, well, we don't have any such policies, so we're not paying.
And the judge said: sorry, your chatbot said you had that policy, so you are paying. And that judgement, as you know, has caused a chill through corporate adoption in any kind of high-stakes situation. And that's where the money is: the high-stakes stuff, right?
I also want to further something that Nela said about where we should think about applying AI. If we want to get away from this idea that we're just going to put humans out of jobs and that's how we're going to make money, the answer is to look at exactly those things: the unmet needs, right?
If you look at met needs where humans are already fulfilling that need, then typically that need is saturated. Right. And this has happened, for example, with cars. Initially when we developed the car, employment absolutely skyrocketed. We had millions of people working in the car industry.
But as you then introduced automation, the cars got cheaper and they sold even more cars and needed even more workers. But at some point, that need is met, right? Everyone in the US who really cares has a car. And the same is true for most developed countries.
And so adding productivity, adding automation just reduces the number of workers. And we've seen this, the number of people working in the car industry has collapsed. And this happens in industry after industry after industry.
First of all, technology creates demand because it reduces prices. And then demand saturates and then employment goes down again. And so if you want to be on the upside, you've got to look for those unmet needs. And individual tutoring of children is one of those unmet needs because it's incredibly expensive for humans to do it. We can't possibly have a tutor for every child. It just doesn't work.
But other things like, you know, cleaning up graffiti, like inspecting cargo containers, we just can't afford to have people do that. But we could afford to have people do that if they had a team of robots to expand their productivity in the role.
Dan Murphy: OK. We have about one minute left on the panel, so I wanted to do one quick rapid-fire question to finish us up. As our wonderful audience leaves the room, what's one thing they should be thinking about in the AI space?
And is there anything that keeps any of you up at night when you think about it yourselves? Stuart, I'll start with you again. We have one minute, so super quickly, please.
Stuart Russell: What keeps me up at night is that we develop superhuman AI without a solution for how to control it forever. The thing that people need to know, I guess, about AI: don't believe everything you read. These systems are much stupider than they're made out to be.
Nela Richardson: AI doesn't have to be scary. Sometimes change is a challenge but sometimes it's an opportunity for growth. So I think we're the master of this technology. Let's not let it master us. And it's going to depend on the use case of the technology and how it indeed helps a business or a worker as opposed to technology for technology's sake, that is kind of thrust upon people.
So if you really want to understand the power of AI, go talk to a 12-year-old and watch them interact with ChatGPT 4 and you'll see where our direction is going because young people get this very, very well.
Dan Murphy: Jo.
Jo O'Driscoll-Kearney: And I would say: be resolute that learning at work is work. Learning is probably the most celebrated yet neglected activity in the workplace, and the learning curve really will be the earning curve.
And it's those people who are able to pivot on a dime and move to something new when AI surprises us, as I really believe it will continue to do.
So I would say, be your own activist for your learning. That was another one of my rules in this famous AI Academy I've mentioned: the first rule of AI Academy is don't wait for AI Academy. Build your own personal learning cloud, own your learning, and grow by giving and upskilling others.
Dan Murphy: Abdallah.
Abdallah Abu Sheikh: Yeah, I completely echo that. And maybe my two cents of advice to everyone: keep an eye on the pace of change, because things are going to start changing much, much faster than we expect them to.
So you need to be very resolute about that, because you're going to be very frustrated that what you learned today, and spent a lot of time learning, is going to be obsolete in like two months. So just keep an eye on the rapid learning curve coming up.
Dan Murphy: Stay positive and look after each other. Alright. We'll wrap it up there. Ladies and gentlemen, please thank my panel. They have been absolutely fantastic. Thank you all. If you're watching online. Thank you as well for joining us. And stay tuned to the World Economic Forum for more news coming up. Thank you all very much.