Will we soon be talking to our vacuum cleaners?
This article is published in collaboration with Quartz.
Science fiction tells us that in the future we’ll be able to hold complex conversations with our starship’s computer as easily as we would with our robot crew-mates, or even other humans. But before we reach that point, we’ll have to make do with chatting to fridges and vacuum cleaners.
This week, several companies announced developer programs to make it easier to create software that can understand what people are saying and respond accordingly. Microsoft, Nuance—which reportedly provides one of the tools behind Apple’s Siri—and SoundHound, the company behind the eponymous music recognition app, all unveiled ways to make apps and smart devices easier to talk to.
The more developers incorporate voice recognition into their software, the more data those systems will have to improve their understanding of language. So it might not be too long before you’re able to have an intelligent conversation with your toaster about how browned you want your bread tomorrow morning.
Listen and learn
Nuance’s new developer platform, called Mix, launched today (Dec. 15). It’s designed to be a simple toolkit that developers can use to set up a voice recognition and understanding app in minutes. Nuance showed Quartz a program it created to talk to a virtual robot that’s been tasked with finding a cat. You can ask it to look under the couch, and follow up with, “What about behind the curtains?” Nuance’s language understanding API allows the program to derive context for the follow-up question without needing to be told again about the missing cat.
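Nuance hasn’t published how Mix resolves follow-ups internally, but the context carry-over described above can be illustrated with a toy sketch. Everything here is invented for explanation—the class, the keyword rules, and the intent names are assumptions, and a real system would use a trained language-understanding model rather than keyword matching:

```python
class DialogueContext:
    """Toy illustration of dialogue context: remember the last intent and
    target so an elliptical follow-up like "What about behind the
    curtains?" can be resolved without restating the missing cat."""

    def __init__(self):
        self.last_intent = None   # e.g. "search"
        self.last_target = None   # e.g. "cat"

    def interpret(self, utterance: str) -> dict:
        words = utterance.lower().strip("?!. ").split()
        # Keyword rules stand in for a real language-understanding model.
        if "look" in words or "find" in words:
            self.last_intent = "search"
            if "cat" in words:
                self.last_target = "cat"
        location = None
        for marker in ("under", "behind", "inside"):
            if marker in words:
                location = " ".join(words[words.index(marker):])
        # Whether the utterance is a fresh command or an elliptical
        # follow-up, reuse whatever intent and target we have stored.
        return {"intent": self.last_intent,
                "target": self.last_target,
                "location": location}

ctx = DialogueContext()
first = ctx.interpret("Look for the cat under the couch")
follow_up = ctx.interpret("What about behind the curtains?")
# follow_up resolves to searching for the cat behind the curtains,
# even though neither "look" nor "cat" appears in the second utterance.
```

The point of the sketch is only the state kept between turns: the second call never mentions the cat, yet the stored intent and target let the system answer as if it had.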
The idea is to let developers create software that more closely reflects how people talk to one another, and apply that to smart devices, like a Nest thermostat, or even emotional household robots. Nuance wouldn’t reveal pricing details, but said there would be a free tier for developers to test it out.
The company is also partnering with others to provide additional data that developers can use. They’ll be able to pull in hotel and flight booking information from Expedia, weather data from AccuWeather, sports scores, exchange rates, stock prices, and other things people are generally interested in. Soon, you might be able to ask every device in your home the sorts of questions you might have for Amazon Echo.
Author: Mike Murphy is a reporter at Quartz, covering technology.
Image: Raytron’s communication robot “Chapit” makes a facial expression in response to voices, using its voice recognition function. Chapit has automatic speech recognition and speech synthesis functions, with which it can select suitable words from a speech database and compose arbitrary speech. Using speech commands alone, it can control home electronics, the company said. REUTERS/Kim Kyung-Hoon.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.