How artificial intelligence is getting even smarter
Artificial intelligence can already do remarkable things – but it could do much more if it could perceive, learn and think like a human. D. Scott Phoenix discusses how Vicarious, one of the World Economic Forum’s 2015 class of Technology Pioneers, is working to make that leap.
How does the kind of artificial intelligence (AI) which Vicarious is working on differ from the kind of AI that is currently out there?
Most AIs of today, like IBM’s Watson, Apple’s Siri or Google’s self-driving car, are designed to accomplish very narrow tasks. Each of these AI systems performs well in its specific problem domain, but none of them can easily learn how to do new things. In contrast, Vicarious is building a single, unified system that will eventually be generally intelligent like a human. That means a machine that will be able to make sense of the world around it, building a complex and nuanced model of reality based on past experience and current sensory data.
To understand how this compares to traditional narrow AI, it’s helpful to look at evolution’s first attempts at intelligence in older animals like amphibians and insects. Frogs and ants are able to perform complex tasks, but most of their behaviour is based on simple assumptions that are easily fooled. One of my favourite recent examples of this kind of narrow animal intelligence is a species of beetle that makes itself at home in ant nests, feeding on workers and larvae. Ants are helpless against this particular beetle because it mimics the sounds of a queen ant, fooling the ants around it despite the beetle’s very unqueen-like appearance and behaviour.
The narrow AI of today has similar problems. For example, self-driving cars have difficulty navigating novel environments like parking garages, and the personal assistants in our phones are often confused by requests that a human assistant would have no trouble completing. The jump we have to make is really analogous to nature’s leap from reptilian to mammalian brains. At Vicarious we are creating the technology to power the transition from narrowly intelligent systems like Siri and Roomba to generally intelligent ones.
How do we get to the next big revolution in AI?
The first prerequisite is processing power. A human brain, for example, has about a thousand times as many neurons as a frog brain. Whereas it took evolution about 250 million years to achieve a thousand-fold increase in processing power, our computers improve a thousand times every 10 years or so. Even today, we have a tremendous amount of underutilized computational power.
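As a rough back-of-the-envelope illustration (not from the interview itself): a thousand-fold increase over ten years corresponds to compute capacity doubling roughly once a year, since 2^10 ≈ 1024. The short sketch below just works through that arithmetic under an assumed fixed doubling period.

```python
# Back-of-the-envelope only: how many doublings give a ~1000x increase?
# Assumption (illustrative): capacity grows as 2 ** (years / doubling_period).

import math

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total growth after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period_years)

# A 1000x improvement requires log2(1000) ~= 9.97 doublings,
# so "1000x every 10 years" implies roughly one doubling per year.
print(f"doublings for 1000x: {math.log2(1000):.2f}")
print(f"10 years, doubling yearly: {growth_factor(10, 1):.0f}x")        # ~1024x
print(f"10 years, doubling every 2 years: {growth_factor(10, 2):.0f}x")  # ~32x
```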
The more challenging task is understanding and replicating the function of the neocortex, the part of the brain that allows humans to learn and reason. Vicarious is building a mathematical model of the human brain that enables our systems to learn how to solve problems the way a person would.
Presumably there are many other groups working on making AI emulate the brain. How does your approach differ?
There are many approaches to building AI, but most other groups are using artificial neural networks. These neural networks are very coarse approximations of the brain, originally developed in the 1970s and 1980s, and there is a lot of room for improvement. You can think of Vicarious as arbitrage between what the scientific community has learned about the brain in the past 30 years, and what exists today in the machine learning community.
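For readers unfamiliar with the term, a minimal sketch of a single artificial neuron shows how coarse the approximation is: each unit just computes a weighted sum of its inputs and passes it through a squashing function, in contrast to the rich dynamics of a biological neuron. The example below is purely illustrative and is not Vicarious’s model.

```python
# Illustrative only: one artificial "neuron" of the kind used in classic
# 1980s-era neural networks -- a weighted sum plus a nonlinearity.
# This is not Vicarious's model; it just shows how simple the abstraction is.

import math
from typing import Sequence

def sigmoid(x: float) -> float:
    """Classic squashing nonlinearity used in early neural networks."""
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs: Sequence[float],
                      weights: Sequence[float],
                      bias: float) -> float:
    """Output = sigmoid(w . x + b): the entire 'neuron' in one line."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(activation)

# Example with three inputs and arbitrarily chosen weights.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
```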
Is more cutting-edge work in AI being done in start-ups than academia?
There is cutting-edge work being done in both settings, although there are advantages to being a start-up with patient capital. Vicarious has a much larger team of researchers working together on a single project than can be found in academia. In academia, there is also pressure to publish within nine months or so of getting a grant – during which time you’re also expected to teach. That makes it difficult to step back and try out an approach that’s a significant departure from the status quo.
Of course, there is time pressure in industry, too. Most venture capitalists only fund start-ups for 12 months at a time. We’re fortunate to have the freedom to take a 10-plus-year time horizon. That helps to explain why we’re unusual in being a pure AI research start-up. Most AI start-ups are focused on pursuing particular applications with minor modifications to existing techniques.
In late 2013 you were in the news for solving captchas, those images with text that are supposed to be readable for humans but not for computers. How much can you say about what you’ve been working on since?
Captchas show how difficult perception is for computers. Even just distinguishing between a blurry A and a blurry B, which we humans can generally do easily, has been a tough problem for the research community to solve. We were attracted to it as a kind of visual Turing test, to check whether we had the right hypotheses about brain-like visual perception. Since the captcha work, we’ve been integrating colour, lighting, texture, motion, motor actions and concept formation.
We believe that perception is the gateway to higher reasoning. What seems like abstract thought is often underpinned by perceptual ability. Suppose I tell you “John has hammered a nail into the wall,” then ask you “is the nail horizontal or vertical?” It might seem at first glance like a logic problem, but your ability to answer it actually comes from the mental image you form of John hammering in the nail.
The better we can make AI at perception, the better we will be able to make it at higher-level reasoning, which has applications in a lot of different fields.
There has been much discussion about the possibility of a technological singularity, in which AI one day learns how to improve itself and rapidly exceeds human powers of comprehension. Is that a prospect you foresee?
No great thing is ever created suddenly, and this is probably even truer for AI than for most technologies. I don’t think the advances in artificial intelligence will catch the industry by surprise.
As for the idea that there’ll come a day when advances happen at a pace that seems like magic, you can argue that we’re already there – things like Siri and self-driving cars would have looked magical not so many years ago. You can write sci-fi stories about AI rapidly becoming self-aware, but for now those are just stories. A lot more research needs to be done on the fundamentals of how to build intelligent machines before this is a real possibility.
So you don’t share the concerns that have grabbed headlines recently about the potential for AI to pose an existential threat?
It’s good that we have scientists studying possible dangers of AI, just as it’s good that we have scientists studying asteroids that might one day hit the earth or viruses that might mutate into deadly epidemics. But I don’t think AI research should be singled out for the frequency and fervour of concern it seems to be inspiring lately. There are a lot of important safety-related problems to be solved today, and AI is just a small part of that.
Full details on all of the Technology Pioneers 2015 can be found here.
Author: D. Scott Phoenix, Co-founder, Vicarious, a World Economic Forum Technology Pioneer.
Image: A waitress places dishes on a tray carried by a robot couple at a restaurant in Jinhua, Zhejiang province, China, May 18, 2015. REUTERS/Stringer