Q&A: A new kind of artificial intelligence
Justine Cassell
Associate Vice-Provost for Technology Strategy and Impact, Carnegie Mellon University
“Emergent artificial intelligence” is one of 10 emerging technologies of 2015 highlighted by the World Economic Forum’s Meta-Council on Emerging Technologies.
Artificial intelligence (AI) is rarely far from the headlines. Recent news stories have included an algorithm from Google-owned DeepMind teaching itself how to play Atari video games, and Tesla founder Elon Musk warning that AI could pose an existential threat to humanity.
We spoke to Justine Cassell, Associate Vice-Provost for Technology Strategy and Impact and a professor of Human-Computer Interaction at Carnegie Mellon University, about what is driving progress in AI, and whether we should find it more of a cause for concern or excitement. This is an edited transcript of the interview.
Q: Why are we now seeing such rapid progress in AI?
There’s been a jump forward in the technology, after a long period in which researchers and the general public no longer saw the kind of progress they had hoped for, and enthusiasm began to wane. There are many contributors to this jump forward, but perhaps the most important is the ability of AI systems to learn from the input they encounter, rather than having to be programmed to do each new thing, or to do the same thing better.
Think of chess as an example. The previous approach was to program the AI with knowledge about how the best human players approached the game. The current approach would instead be to expose the algorithm to a vast data set of chess games, and let it gradually figure out its own way towards winning strategies. This fundamentally different approach has been made possible by advances in machine learning.
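The shift described here – from hand-coded expertise to learning from data – can be illustrated with a toy sketch. The Python snippet below is my illustration only, not any real chess engine: instead of being told how much each feature of a position is worth, a simple logistic model learns weights for a few invented features from the outcomes of past games.

```python
# Illustrative sketch: learn feature weights from game outcomes,
# rather than hand-coding rules such as "a queen is worth nine pawns".
import math
import random

def predict_win(weights, features):
    """Logistic model: estimated probability that this position is winning."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def train(games, n_features, lr=0.05, epochs=50):
    """Each game is (features, outcome), with outcome 1 = win, 0 = loss."""
    weights = [0.0] * n_features
    for _ in range(epochs):
        random.shuffle(games)
        for features, outcome in games:
            error = predict_win(weights, features) - outcome
            weights = [w - lr * error * f for w, f in zip(weights, features)]
    return weights

# Toy, invented data: [material balance, mobility, king safety] per position.
games = [([+3, 10, 1], 1), ([-2, 4, 0], 0), ([+1, 8, 1], 1), ([0, 3, 0], 0)]
weights = train(games, n_features=3)
print(predict_win(weights, [+2, 7, 1]))  # learned estimate for a new position
```

With enough real games and richer features, the same principle lets the system discover its own sense of what makes a position strong, rather than inheriting a human expert's.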
Q: What kind of applications are currently possible, and what is in the pipeline?
Aside from much-discussed applications like self-driving vehicles and local delivery drones, the big steps recently have been in areas such as speech recognition, image recognition and natural language processing: think of how your smartphone can understand simple spoken commands, Facebook can recognise faces in photos you upload, and automated translation is increasingly meaningful.
We can expect important further progress in these directions. For example, the Never-Ending Language Learner (NELL) is an algorithm that continuously reads the web and learns each day how to read better than the day before. It is forming beliefs that help it to understand the world, which in turn helps it to read and understand more of the web. It is basically taking a vastly unstructured source – the totality of the web – and turning it into structured information. NELL can be compared to IBM’s Watson, which famously beat humans at Jeopardy, but did not learn from its experience with Jeopardy facts. Within the next decade our phones will likely be able to understand language at a deeper level and, due to technologies such as NELL, directly answer the questions we ask, rather than simply listing web pages we might find useful.
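To make the idea of turning unstructured text into structured information concrete, here is a toy Python sketch – an illustration only, not NELL's actual pipeline – that extracts (subject, relation, object) facts from sentences using simple patterns and accumulates them in a small knowledge base.

```python
# Toy illustration: extract structured facts from unstructured sentences.
import re

PATTERNS = [
    (re.compile(r"(\w+) is the capital of (\w+)"), "capital_of"),
    (re.compile(r"(\w+) is a city in (\w+)"), "city_in"),
]

def extract_facts(sentence):
    """Return (subject, relation, object) triples matched by simple patterns."""
    facts = []
    for pattern, relation in PATTERNS:
        for subj, obj in pattern.findall(sentence):
            facts.append((subj, relation, obj))
    return facts

knowledge_base = set()
for sentence in ["Paris is the capital of France.",
                 "Lyon is a city in France."]:
    knowledge_base.update(extract_facts(sentence))

print(knowledge_base)
# e.g. {('Paris', 'capital_of', 'France'), ('Lyon', 'city_in', 'France')}
```

Systems like NELL go far beyond fixed patterns – they learn new extraction rules and use the facts they have already accumulated to judge new candidate facts – but the basic move from free text to structured beliefs is the same.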
Developments in AI should bring productivity gains in all walks of life, and will also feed into applications for next-generation robots. Some jobs will be replaced by these robots. In many instances, however, we can predict that robots will increasingly collaborate with us rather than replacing us, and thus make us smarter, more productive, and more effective. How robots and other artificially intelligent agents interact with us will therefore become increasingly important, and I argue that AI systems must know how to collaborate and how to build a relationship with their human collaborators. That – teaching AI social skills – is the focus of my own research.
Q: Why is it important for AI to learn social skills?
One reason is that the interpersonal aspects of life are so central to who we are that we should try to minimise the extent to which we lose the interpersonal and the relational as we enter an age of machines. Another is that social reasoning will make artificially intelligent agents better at what they do. Studies show, for example, that students learn better from a human teacher they like and trust – so if we’re moving towards greater use of computer tutors, we need to understand how we can make them inspire feelings of liking and trust, too.
Ultimately the aim is to build machines that can live among us and be helpful, pro-social partners throughout our lives. Think of how, at around the age of three, children reach a choice point in their development: can they learn to collaborate and share with their peers? Only then can they become functioning members of society and get access to all the good things society offers. Currently I’m studying how adolescents make and maintain friendships, trying to precisely describe the interpersonal social abilities of people, so that we can look at building machines that have the same kind of abilities.
Q: Is there anything humans can do that AI may never be able to emulate?
Ambiguity is hard for AI. We humans have the capacity to hold several conflicting beliefs at the same time, and this is believed to be the root of our creative abilities. It’s open to debate whether AI will ever be able to take truly creative leaps of the kind that have driven human progress, by thinking of ideas that nobody has ever thought of before.
Then there’s the human capacity to rely on concepts such as love, altruism and family units in order to make decisions. That may be a source of human thriving that machines can never match.
Q: What are the risks of AI?
It will create great wealth but also probably make wealth inequality worse, as it will likely replace many jobs that humans currently do and there’s no guarantee that it will create enough new kinds of job for them to do instead. We will also increasingly have to grapple with ethical challenges – such as those posed by military drones (should we create drones that can identify and open fire on enemy targets without human intervention?) and by anticipatory policing, where we will have to balance society’s interest in security with the individual’s right to privacy and the presumption of innocence.
Q: But you don’t share the fears recently expressed by the likes of Elon Musk, Bill Gates and Stephen Hawking about AI being potentially an existential threat to humankind?
My feeling is that such fears are fundamentally a kind of moral panic, of the kind we’ve seen before in history whenever some technological development forces us to confront fundamental questions about the nature of human existence – something similar happened with the Jaquet-Droz writing automaton, for example, in 18th-century Switzerland. The technology may be only a focal point for our deeper fears about whether or not we’re making the right choices, and creating the future that we want.
Reporting by Andrew Wright for the World Economic Forum.
Image: A work by Spanish artist Pamen Pereira is displayed during the exhibition “This is a love story” at a culture centre in Burgos, May 29, 2009. REUTERS/Felix Ordonez