Worried about AI? Relax, it’s dumber than you think
The panel said that even as new technology arrives, it is not always clear what its ultimate effect will be. Image: REUTERS/Fabrizio Bensch
Robots that serve dinner, self-driving cars and drone-taxis could be fun and hugely profitable. But don’t hold your breath. They are likely much further off than the hype suggests.
A panel of experts at the 2017 Wharton Global Forum in Hong Kong outlined their views on the future of artificial intelligence (AI), robots, drones and other tech advances, and on how they might affect employment. The upshot was to deflate some of the hype, while noting the threats posed to certain jobs.
Their comments came in a panel session titled “Engineering the Future of Business,” with Wharton Dean Geoffrey Garrett moderating and speakers Pascale Fung, a professor of electronic and computer engineering at Hong Kong University of Science and Technology; Vijay Kumar, dean of engineering at the University of Pennsylvania; and Nicolas Aguzin, Asia Pacific chairman and CEO of J.P. Morgan.
Kicking things off, Garrett asked: How big and disruptive is the self-driving car movement?
It turns out that much of what appears in the mainstream media about self-driving cars being just around the corner is very much overstated, said Kumar. Fully autonomous cars are many years away, in his view.
One of Kumar’s key points: Often there are two sides to high-tech advancements. One side gets a lot of media attention — advances in computing power, software and the like. Here, progress is quick — new apps, new companies and new products sprout up daily. However, the other, often-overlooked side deeply affects many projects — those where the virtual world must connect with the physical or mechanical world in new ways, noted Kumar, who is also a professor of mechanical engineering at Penn. Progress in that realm comes more slowly.
At some point, all of that software in autonomous cars meets a hard pavement. In that world, as with other robot applications, progress comes by moving from “data to information to knowledge.” A fundamental problem is that most observers do not realize just how vast an amount of data is needed to operate in the physical world — ever-increasing amounts, or, as Kumar calls it — “exponential” amounts. While it’s understood today that “big data” is important, the amounts required for many physical operations are far larger than “big data” implies. The limitations on acquiring such vast amounts of data severely throttle back the speed of advancement for many kinds of projects, he suggested.
In other words, many optimistic articles about autonomous vehicles overlook the fact that it will take many years to get enough data to make fully self-driving cars work at a large scale — not just a couple of years.
Getting enough data to be 90% accurate “is difficult enough,” noted Kumar. Some object-recognition software today “is 90% accurate, you go to Facebook, there are just so many faces — [but there is] 90% accuracy” in identification. Still, even at 90% “your computer-vision colleagues would tell you ‘that’s dumb’…. But to get from 90% accuracy to 99% accuracy requires a lot more data” — exponentially more data. “And then to get from 99% accuracy to 99.9% accuracy, guess what? That needs even more data.” He compares the exponentially rising data needs to a graph that resembles a hockey stick, with a sudden, sharply rising slope. The problem when it comes to autonomous vehicles, as other analysts have noted, is that 90% or even 99% accuracy is simply not good enough when human lives are at stake.
Exponentially More Data
“To have exponentially more data to get all of the … cases right is extremely hard,” Kumar said. “And that’s why I think self-driving cars, which involve taking actions based on data, are extremely hard [to perfect]…. Yes, it’s a great concept, and yes, we’re making major strides, but … to solve it to the point that we feel absolutely comfortable — it will take a long time.”
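To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a power-law learning curve in which error falls as a power of the training-set size; the exponent and the baseline data figure are illustrative assumptions rather than numbers from the panel, but they show how each extra “nine” of accuracy multiplies the data requirement.

```python
# A minimal sketch of Kumar's point, assuming a power-law learning curve:
# error rate ~ C * (training examples)^(-ALPHA).
# ALPHA and BASE_EXAMPLES are illustrative assumptions, not panel figures.

ALPHA = 0.5          # assumed learning-curve exponent (hypothetical)
BASE_EXAMPLES = 1e6  # assumed data needed to reach 90% accuracy (hypothetical)
BASE_ERROR = 0.10    # 90% accurate -> 10% error

def examples_needed(target_error: float) -> float:
    """Data needed to hit target_error, given error ~ n^(-ALPHA)."""
    # Solve target_error / BASE_ERROR = (n / BASE_EXAMPLES)^(-ALPHA) for n.
    return BASE_EXAMPLES * (BASE_ERROR / target_error) ** (1.0 / ALPHA)

for accuracy in (0.90, 0.99, 0.999):
    n = examples_needed(1.0 - accuracy)
    print(f"{accuracy:.1%} accuracy -> ~{n:,.0f} examples")

# Output under these assumptions:
# 90.0% accuracy -> ~1,000,000 examples
# 99.0% accuracy -> ~100,000,000 examples
# 99.9% accuracy -> ~10,000,000,000 examples
```

Under these illustrative assumptions, going from 90% to 99.9% accuracy multiplies the data requirement ten-thousand-fold, which is the hockey-stick shape Kumar describes.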
So why is one left with the impression from reading mainstream media that self-driving cars are just around the corner?
To explain his view of what is happening in the media, Kumar cited remarks by former Fed chairman Alan Greenspan, who famously warned of “irrational exuberance” in the stock market a few years before the tech stock bubble burst in the early 2000s. Kumar suggested a similar kind of exaggeration applies today to self-driving cars. “That’s where the irrational exuberance comes in. It’s a technology that is almost there, but it’s going to take a long time to finally assimilate.”
“To have electric power and motors and batteries to power drones that can lift people in the air — I think this is a pipe dream.”–Vijay Kumar
Garrett pointed out that Tesla head Elon Musk claims all of the technology to allow new cars to drive themselves already exists (though not necessarily without a human aboard to take over in an emergency) and that the main problem is “human acceptance of the technology.”
Kumar said he could not disagree more. “Elon Musk will also tell you that batteries are improving and getting better and better. Actually, it’s the same battery that existed five or 10 years ago.” What is different is that batteries have become smaller and less expensive, “because more of us are buying batteries. But fundamentally it’s the same thing.”
Progress has been slow elsewhere, too. In the “physical domain,” Kumar explained, not much has changed when it comes to energy and power, either. “You look at electric motors, it’s World War II technology. So, on the physical side we are not making the same progress we are on the information side. And guess what? In the U.S., 2% of all of electricity consumption is through data centers. If you really want that much more data, if you want to confront the hockey stick, you are going to burn a lot of power just getting the data centers to work. I think at some point it gets harder and harder and harder….”
Similar constraints apply to drone technology, he said. “Here’s a simple fact. To fly a drone requires about 200 watts per kilo. So, if you want to lift a 75-kilo individual into the air, that’s a lot of power. Where are you going to get the batteries to do that?” The only power source with enough “power density” to lift such heavy payloads is fossil fuels. “You could get small jet turbines to power drones. But to have electric power and motors and batteries to power drones that can lift people in the air — I think this is a pipe dream.”
That is not to say one “can’t do interesting things with drones, but whatever you do — you have to think of payloads that are commensurate with what you want to do.”
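A rough sanity check helps explain the skepticism. The sketch below, also in Python, takes Kumar’s 200 watts per kilo and 75-kilo passenger as given; the battery mass, airframe mass and lithium-ion specific energy are illustrative assumptions, not figures from the panel.

```python
# A rough sanity check of Kumar's drone numbers. The 200 W/kg figure is
# from his remarks; the battery pack mass, airframe mass and specific
# energy below are illustrative assumptions, not panel figures.

POWER_PER_KG_W = 200.0      # hover power per kilo of lifted mass (Kumar's figure)
PASSENGER_KG = 75.0         # the 75-kilo individual in Kumar's example
BATTERY_KG = 25.0           # assumed battery pack mass (hypothetical)
AIRFRAME_KG = 50.0          # assumed airframe + motors mass (hypothetical)
SPECIFIC_ENERGY_WH_PER_KG = 250.0  # typical lithium-ion cell-level figure

total_mass_kg = PASSENGER_KG + BATTERY_KG + AIRFRAME_KG
hover_power_w = POWER_PER_KG_W * total_mass_kg            # power just to stay aloft
battery_energy_wh = SPECIFIC_ENERGY_WH_PER_KG * BATTERY_KG
flight_minutes = battery_energy_wh / hover_power_w * 60   # ignores reserves, losses

print(f"Hover power: {hover_power_w / 1000:.0f} kW")
print(f"Battery energy: {battery_energy_wh / 1000:.2f} kWh")
print(f"Endurance: about {flight_minutes:.0f} minutes")
# Under these assumptions: ~30 kW of hover power against ~6.25 kWh of
# battery, i.e. roughly 12 minutes aloft before any safety margin.
```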
In other areas, like electric cars, progress is moving along smartly, and Kumar says there is lots of potential. “The Chinese have shown that; they are leading the world. The number of electric cars being produced in China on an annual basis is three times that of the U.S…. I do think electric cars are here to stay, but I’m not so sure about drones using electric power.”
Picking up on Kumar’s theme, Fung, who also helps run the Human Language Technology Center at her university, outlined some of the limits of AI in the foreseeable future, where again the hype often outruns reality. While AI may perform many impressive and valuable tasks, here too physical limitations remain stubbornly in place.
A deep-learning algorithm that can do just speech recognition, which means converting what you are saying into text, has to be trained on millions of hours of data and “uses huge data farms,” Fung noted. And while a deep-learning network might have hundreds of thousands of neurons, the human brain has tens of billions of neurons and trillions of connections. Humans, for the time being, are much more energy-efficient. They can work “all day on a tiny slice of pizza,” she joked.
“The jobs safest from robot replacement will be those at the top and the bottom, not those in the middle.”–Vijay Kumar
The Human Brain Conundrum
This led the panelists to note a second underappreciated divide: the scope of projects that AI can currently master. Kumar pointed out that tasks like translation are relatively narrow. We have “figured out how to go from data to information to some extent, though … with deep learning it’s very hard even to do that. To go from information to knowledge? We have no clue. We don’t know how the human brain works…. It’s going to be a long time before we build machines with the kind of intelligence we associate with humans.”
Not long ago, Kumar noted, IBM’s supercomputer Watson could not even play tic tac toe with a five-year-old. Now it beats humans at Jeopardy!. But that speedy progress can blind us to the fact that computers today can best handle only narrow tasks or “point solutions. When you look at generalizing across the many things that humans do — that’s very hard to do.”
Still, the stage is being set for bigger things down the road. To date, automating those narrow tasks has required humans to “learn how to communicate with machines,” and not always successfully, as frustration with call centers and, often, Apple’s Siri suggests, noted Fung.
Today, the effort is to reverse the teacher-pupil relationship so that machines instead begin to learn to communicate with humans. The “research and development, and application of AI algorithms and machines that will work for us” and cater to us is underway, Fung said. “They will understand our meaning, our emotion, our personality, our affect and all that.” The goal is for AI to account for the “different layers” of human-to-human communication.
“We look at each other, we engage each other’s emotion and intent,” said Fung, who is among the leaders worldwide in efforts to make machines communicate better with humans. “We use body language. It’s not just words. That’s why we prefer face-to-face meetings, and we prefer even Skype to just talking on the phone.”
Fung referenced an article she wrote for Scientific American, about the need to teach robots to understand and mimic human emotion. “Basically, it is making machines that understand our feelings and intent, more than just what we say, and respond to us in a more human way.”
Such “affective computing” means machines will ultimately show “affect recognition” picked up from our voices, texts, facial expressions and body language. Future “human-robot communication must have that layer of communication.” But capturing intent as well as emotion is an extremely difficult challenge, Fung added. “Natural language is very hard to understand by machines — and by humans. We often misunderstand each other.”
So where might all this lead when it comes to the future of jobs?
Machines Are Still ‘Dumb’
“In the near future, no one needs to worry because machines are pretty dumb …,” Kumar said. As an example, Fung explained that she could make a robot today capable of doing some simple household chores, but “it’s still cheaper for me to do it, or to teach my kids or my husband to do it. So, for the near future there are tons of jobs where it would be too expensive to replace them with machines. Fifty to 100 years from now, that’s likely to change, just as today’s world is different from 50 years ago.”
But even as new tech arrives, it is not always clear what its ultimate effect will be. For example, after the banking industry first introduced automatic teller machines (ATMs), instead of having fewer tellers “we had more tellers,” noted Aguzin. ATMs made it “cheaper to have a branch, and then we had more branches, and therefore we had more tellers in the end.”
“With blockchain technology, eventually the cost of doing a transaction will be ‘like sending an email, like zero.’ Imagine applying that to trade finance.”–Nicolas Aguzin
On the other hand, introducing blockchain technology as a ledger system into banking will likely eliminate the need for a third party to double-check the accounting. Anything requiring reconciliation can be done instantly, with no need for confirmation, Aguzin added. Eventually the cost of doing a transaction will be “like sending an email, it will be like zero … without any possibility of confusion, there’s no cost. Imagine if you apply that to trade finance, etc.”
Already, Aguzin’s bank is set to automate 1.7 million processes this year that are currently done manually. “And those are not the lowest-level, manual types of jobs — it’s somewhere in the middle.” In an early foray into affective computing, his bank is working on software that can sense what a client is feeling, and what they want, when they call in for service. “It’s not perfect yet, but you can get a pretty good sense of how they are feeling, whether they want to complain or are they just going to check a balance? Are they going to do x, y — so you save a lot of time.”
Still, he said he remains confident that new jobs will be created in the wake of new technologies, as was the case following ATMs. His view of the future of jobs and automation is not as “catastrophic” as that of some analysts. “I am a bit concerned about the speed of change, which may cause us to be careful, but … there will be new things coming out. I tend to have a bit more positive view of the future.”
Fung reminded the audience that even in fintech, progress will be throttled by the available data. “In certain areas, you have a lot of data; in others you don’t.” Financial executives have told Fung that they have huge databases, but in her experience such data often is not nearly enough to accomplish many of their goals.
Kumar concedes that today we are creating more jobs for robots than for humans, a cause for concern for the future of human work. But he also calls himself a “pathological optimist” on the jobs issue. AI and robotics will work best in “applications where they work with humans.” Echoing Fung, he added that “it’s going to take a long time before we build machines with the kind of intelligence associated with humans.” When it comes to going from information to knowledge, “we have no clue. We don’t know how the human brain works.”
Security at the Top — and Bottom
Picking up on Fung’s point that many lower-skill jobs likely will be preserved, Kumar added that the jobs most likely to be eliminated could surprise people. “What is the one thing that computers are really good at? They are good at taking exams. So, this expectation of, oh, I got a 4.0 from this very well-known university, I will have a job in the future — this is not true.” At the same time, for robots “cleaning up a room after your three-year-old is just very, very hard. Serving dinner is very, very hard. Cleaning up after dinner is even harder. I think those jobs are secure.”
The panel’s consensus: The jobs safest from robot replacement will be those at the top and the bottom, not those in the middle.
What about many years down the road, when robots become advanced enough and cheap enough to take over more and more human activities? What’s to become of human work?
“You will still want to read a novel written by a human even though it’s no different from a novel written by a machine someday. You still appreciate that human touch.”–Pascale Fung
For one thing, Fung said, there will be a lot more AI engineers “and people who have to regulate machines, maintain machines, and somehow design them until the machines can reproduce themselves.”
But also, many jobs will begin to adapt to the new world. Suppose, for example, that at some point in the distant future many restaurants have robot servers and waiters. People will “pay a lot more money to go to a restaurant where the chef is a human and the waiter is a human,” Fung said. “So human labor would then become very valuable.”
She added that many people might “become artists and chefs, and performing artists, because you still want to listen to a concert performed by humans, don’t you, rather than just robots playing a concerto for you. And you will still want to read a novel written by a human even though it’s no different from a novel written by a machine someday. You still appreciate that human touch.”
What’s more, creativity already is becoming increasingly important, Fung notes. So the question is not whether AI engineers or business people will be calling the shots in the future. “It’s really creative people versus non-creative people. There is more and more demand for creative people.” Already, it appears more difficult for engineering students “to compete with the best compared to the old days.”
In the past, for engineers, a good academic record guaranteed a good job. Today, tech companies interview applicants in “so many different areas,” Fung added. They look beyond technical skills. They look for creativity. “I think the engineers have to learn more non-engineering skills, and then the non-engineers will be learning more of the engineering skills, including scientific thinking, including some coding….”
Kumar agrees. Today, all Penn engineering students take business courses. “The idea of a well-rounded graduate, the idea of liberal education today, I think includes engineering and includes business, right? The thing I worry about is what happens to the anthropologist, the English majors, the history majors … I think those disciplines will come under a lot of pressure.”