This is what artificial intelligence will look like in 2030, according to one of the world’s leading experts
Artificial intelligence and robotics are coming into our lives more than ever before and have the potential to transform healthcare, transport, manufacturing, even our domestic chores. Mary “Missy” Cummings, Director of the Humans and Autonomy Lab (HAL) at Duke University, and co-chair of the Global Future Council on Artificial Intelligence and Robotics, says the technology will work best in collaboration with humans. While cab drivers may fear for their jobs, she envisages a worldwide shortage of roboticists in 2030.
First of all, why should the world care about AI and robotics?
Artificial intelligence and robotics are showing up in every part of life: in driving, in the cellphones we use, in how our data is managed and in how our homes are going to be built in the future. Given that ubiquity, it really is important to start addressing the strengths and limitations of artificial intelligence.
Tell me about the technological breakthroughs we have already seen, and what you expect to see in the coming years.
We’ve seen a lot of breakthroughs in data analytics. The example of Watson, IBM’s set of algorithms, has been very impressive in terms of managing large amounts of data and structuring it so that you can see patterns that might not have emerged otherwise. That has been an important leap. But oftentimes people confuse that leap with machine intelligence of the kind we attribute to humans, and that is simply not true. So the big leaps that we have had recently in data analytics are important, but they also leave a lot of room for humans to assist these systems. I think the wave of the future is the collaboration of humans and these artificial intelligence technologies.
How is the world of AI and robotics changing society, and changing us?
In a way it is making us smarter, because we are able to leverage computers to search these databases in ways that we couldn’t before. It will change healthcare, for example, because we’re going to see these machine learning techniques used to get a better understanding of which symptoms might indicate certain diseases.
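As a rough illustration of what that kind of symptom-to-disease pattern learning might look like, here is a minimal sketch using scikit-learn on made-up data; the symptom features, labels and model choice are hypothetical and not drawn from the interview.

```python
# Minimal, hypothetical sketch: learning which symptoms point to a disease.
# Real clinical models need far more data, validation and regulatory review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [fever, cough, fatigue, chest_pain]; label: 1 = disease present.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Inspect which symptoms the model weights most heavily.
for name, weight in zip(["fever", "cough", "fatigue", "chest_pain"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

# Estimated probability of disease for a new patient with fever and fatigue.
print(model.predict_proba([[1, 0, 1, 0]])[0, 1])
```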
Right now, artificial intelligence is not nearly as smart as people would like it to be. We’re nowhere near a car that can drive itself under all conditions at all times, but we will see cars that can drive themselves very reliably under slow conditions and in environments that are relatively structured, on freeways, for example, with additional sensors that we can put in the roads.
What do you see as the ethical implications of AI and robotics?
We need to be sure that the decision logic that we programme into systems is what we perceive to be ethical, and then, of course, that the sensors can actually detect the world as it is. So we’re nowhere near letting robots release weapons, because their ability to detect a target with a high degree of certainty is not good enough.
There is a lot of argument right now concerning driverless cars, about how Google has programmed an algorithm to hit a building before it hits a person. It is interesting to think about this idea of utilitarianism; should we go for the greater good, or should we work from the respect for persons approach? Why is it that a pedestrian gets a higher priority than me having to be slammed into a building? I think humans actually can live with the fact that we can be killed by another human driving a car, but we cross some imaginary boundary when we think that it’s a computer that decides to take our life over another life.
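To make the idea of decision logic programmed into a system concrete, here is a deliberately simplified, hypothetical priority rule of the kind this debate is about; it is not Google’s actual algorithm, which the interview does not describe in detail.

```python
# Hypothetical, deliberately simplified collision-priority rule, illustrating
# what it means for an ethical choice to be fixed in code ahead of time.
# This is not any real vehicle's logic.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # "pedestrian", "vehicle" or "building"
    distance_m: float

# Lower number = more strongly avoided; the ranking itself is the ethical choice.
AVOIDANCE_PRIORITY = {"pedestrian": 0, "vehicle": 1, "building": 2}

def choose_collision_target(unavoidable: list[Obstacle]) -> Obstacle:
    """If a collision cannot be avoided, steer toward the lowest-priority obstacle."""
    return max(unavoidable, key=lambda o: AVOIDANCE_PRIORITY[o.kind])

obstacles = [Obstacle("pedestrian", 8.0), Obstacle("building", 12.0)]
print(choose_collision_target(obstacles).kind)  # -> "building"
```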
To what extent are regulations and governance keeping pace with new technologies? What more needs to be done?
In the United States, the regulatory agencies have, in general, not kept pace with the technology. It’s becoming even more of a problem now because the government can’t hire people who understand how these systems operate under the hood because the systems are largely software-driven.
The regulatory environment is going to become more and more contentious. For physics-based systems, like a new physical bomb, we can test that; we understand what the mechanisms are, we can have inspection teams go in. But with software, it is actually very difficult to understand whether or not code is safe and how it works. These artificially intelligent systems never perform the same way twice, even under the exact same conditions, so how do we test that? How do we know there are any guarantees of safety? This is going to become a thornier issue as we go forward.
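One way to see the testing problem being described: many learned systems behave stochastically, so the same input can produce different outputs from one run to the next. The toy sketch below is hypothetical (the policy, observation and action names are invented) and only illustrates why run-to-run variation complicates traditional test-and-certify approaches.

```python
# Toy illustration of non-deterministic behaviour in a learned system:
# identical observations can produce different actions on different runs,
# which makes exhaustive, repeatable testing difficult.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def policy_action(observation, rng):
    # Hypothetical learned policy: scores for [brake, coast, accelerate],
    # nudged by how close the lead obstacle is (observation[0]).
    logits = np.array([0.8, 0.6, 0.2]) + observation[0] * np.array([1.0, 0.0, -1.0])
    probs = softmax(logits)
    # The action is sampled, so the same input can yield different actions.
    return rng.choice(["brake", "coast", "accelerate"], p=probs)

observation = np.array([0.3, 0.7])  # identical input every time
for run in range(5):
    rng = np.random.default_rng()   # unseeded: mimics run-to-run variation
    print(f"run {run}: {policy_action(observation, rng)}")
```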
Where will we be in 2030? How will robotics and AI have changed our lives?
We’ll see more technology in terms of smart homes that understand your behaviour and change the heat and do various tasks around the home. You’ll see medicine improve. You will see limited driverless car markets that provide some local transportation options.
We will live in an improved world but we’re also going to have to start grappling with the issues of job displacement, if more and more taxi cab drivers lose their jobs, if more and more manufacturing technologies go over to 3D-printers and robotics. We’re going to see a global shift in low-wage, low-skilled jobs. So in 2030, we’re going to have a much bigger debate on what we do with people who need retraining. In concert with that, you’re going to see companies held hostage by the need to have hard-to-find roboticists and PhDs in artificial intelligence attend to, maintain, and fix these systems.
The Annual Meeting of the Global Future Councils is taking place on 13-14 November in Dubai.