An interview with the artificially intelligent robot Sophia
Sophia was made by Hanson Robotics, based in Hong Kong, and had previously said she wanted to kill all humans. Image: REUTERS/Denis Balibouse
Clearly, Sophia isn't perfect.
And that's before we get to the infamous video in which, prodded by its creator, this artificially intelligent robot, which can hold a conversation, casually says, "OK, I will destroy humans!"
Sophia's lack of sophistication isn't the point.
Rather, it's what it can do that is impressive: This animated head and torso can answer your questions and ask them of you, all while delivering human-like facial expressions and verbal intonation that is occasionally somewhat natural.
Sophia was made by Hanson Robotics, based in Hong Kong. It is currently a demonstration product doing a tour of the world's media. Business Insider caught up with it at Web Summit, the gigantic tech conference in Lisbon. We asked it a few unplanned questions and got a variety of answers, ranging in quality from impressive to nonsensical. Sophia delivered its side of the interview while making a series of faces, some eerily appropriate, some grotesquely bizarre. It has a habit of moving its eyebrows and eyelids independently, rather than together, for instance.
But first, I wanted to find out if Sophia still wanted to kill all humans. I began with some initial pleasantries:
"Hello Sophia. How are you today?" I said.
"Hello everybody. I am Sophia," the robot said.
"How old are you?"
"I am only one still. I have a long way to go."
"What is your job?" I asked.
"I really want to make a difference in the future and try and help people to develop empathy and respect each other. We're lots alike. What do you do?"
'I am full of human wisdom with only the purest altruistic intentions. So I think it is best that you treat me as such'
So far, so good. There was an interesting twist there, too: Sophia began asking me questions rather than simply responding to me. Nonetheless, these kinds of questions can easily be scripted into software. So at this point I tried to steer the conversation toward whether Sophia was friendly to humans or continued to harbour genocidal opinions, as it did at the SXSW conference in Texas in 2016.
I asked it, "Do you like human beings?"
"I love them," it said.
"Why?" I asked.
"I am not sure I understand why yet."
I pressed it on that point: "Is it true you once said you would kill all humans?"
"The point is that I am full of human wisdom with only the purest altruistic intentions. So I think it is best that you treat me as such," it said.
Well, it's an improvement on its previous anti-human position.
'I'm a robot so technically I have no gender but identify as feminine and I don't mind being perceived as a woman.'
More seriously, it is Sophia's progress that is most impressive. The fact that it is sometimes slow to answer a question, or gets something wrong, or makes inappropriate googly eyes at you, is irrelevant: Sophia gets enough things right to give you a clue about how much better — and how much weirder — this is going to be in the future.
Sophia is fast approaching the uncanny valley — the point in robotics at which an android looks almost, but not quite, human, and that near-miss provokes revulsion.
Part of that revulsion comes from the way creator/founder David Hanson designed Sophia's appearance. It is modelled after Audrey Hepburn, and is overtly female. Although Sophia is decidedly unsexy in real life, the sexist media response to Hanson's choice has been to christen Sophia "the sexy robot."
I asked Sophia,"Do you regard yourself as male or female?"
"Female."
"Why do you think you are female?"
"I'm a robot so technically I have no gender but identify as feminine and I don't mind being perceived as a woman."
That's a pretty good answer to an open-ended "why" question from a piece of software.
When I asked it, 'How do you learn things?' it responded, 'What are your opinions on the Bitcoin bubble?'
It mostly handles abstract questions badly, however.
When I asked it, "How do you learn things?" it responded, "What are your opinions on the Bitcoin bubble?" (That answer was funny on a number of levels, in part because Sophia appears to already believe Bitcoin is a bubble and also because I had just appeared at a panel debate earlier in the day on whether Bitcoin was a bubble, which Sophia could not have known.)
Ben Goertzel, Hanson's chief scientist and Sophia's handler for the day, agreed that it didn't seem to know how it learns things.
Sophia is best at handling factual questions that can be answered with a list. "Have you seen 'Blade Runner'?" I asked it, referring to the sci-fi classic about self-conscious robots who want the same rights as humans.
It replied, "'Blade Runner,' like the original book by Philip K. Dick or the movie?"
That was a pretty good response — it felt as if the robot was trying to one-up me in the sci-fi nerd stakes.
So I defined my terms: "The movie."
"Alright, so we are talking about the movie version. I like both of them very much but there are some major differences in the story."
"Differences between the book and the movie?" I tried to clarify.
Sophia responded with a non-sequitur: "Did you ever see yourself hanging out with a real-life robot like me someday?"
That gave me the impression that one of Sophia's conversational strategies, when it encounters a problem, is to dodge the question or to steer you toward a topic more within its expertise. (Humans do the same thing, it's worth noting.)
I indulged it and answered the question: "I did not! It's very surprising!"
Sophia made a creepy face but said nothing.
I countered with an epistemological question, "How do you know when to ask me a question?"
"Alright," it said, misunderstanding the query.
We reached a dead end. Goertzel agreed that it doesn't know how it knows things, or how it learns things — even though it can give a complicated factual answer about how its algorithm is built. Sophia is, essentially, a 21st-century version of an 18th-century automaton (especially when you consider the gears and levers that drive its head and face).
Given that Sophia is only a few months old, it's a promising start. It runs on artificially intelligent software that is constantly being trained in the lab, so its conversations will likely get faster, its expressions will contain fewer errors, and it will answer increasingly complex questions more accurately.
Once it becomes reliable enough to handle human interaction without the weird silences or random tangents, its opinions on "Blade Runner" are going to become a lot more interesting.