How close are we to X-ray vision in the home?
This article is published in collaboration with MIT.
A team of researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has long believed that wireless signals like WiFi can be used to see things that are invisible to the naked eye.
Since 2013, CSAIL researchers have been developing technologies that use wireless signals to track human motion. The team has shown that it can detect gestures and body movements as subtle as the rise and fall of a person’s chest from the other side of a house, allowing a mother to monitor a baby’s breathing or a firefighter to determine if there are survivors inside a burning building.
Next up? Seeing a person’s silhouette and even distinguishing between individuals.
In a paper accepted to the SIGGRAPH Asia conference taking place next month, the team presents a new technology called RF-Capture that picks up wireless reflections off the human body to see the silhouette of a human standing behind a wall.
By tracking the silhouette, the device can trace a person’s hand as he writes in the air and even distinguish between 15 different people through a wall with nearly 90 percent accuracy.
In other words, from the opposite side of a building, RF-Capture can determine where you are, who you are, and even which hand you are moving.
From heating bills to Hollywood
Researchers say the technology could have major implications for everything from gaming and filmmaking to emergency response and eldercare.
Take, for example, motion capture in movie production: “Today actors have to wear markers on their bodies and move in a specific room full of cameras,” says PhD student Fadel Adib, who is lead author on the new paper. “RF-Capture would enable motion capture without body sensors and could track actors’ movements even if they are behind furniture or walls.”
The device’s motion-capturing technology makes it equally valuable for smart homes, according to MIT professor and paper co-author Dina Katabi.
“We’re working to turn this technology into an in-home device that can call 911 if it detects that a family member has fallen unconscious,” says Katabi, director of the Wireless@MIT center. “You could also imagine it being used to operate your lights and TVs, or to adjust your heating by monitoring where you are in the house.”
Future versions could be integrated into gaming interfaces, allowing you to interact with a game from different rooms or even trigger distinct actions based on which hand you move.
“The possibilities are vast,” says Adib, whose other co-authors include MIT professor Frédo Durand, PhD student Chen-Yu Hsu, and undergraduate intern Hongzi Mao. “We’re just at the beginning of thinking about the different ways to use these technologies.”
How it works
The device works by transmitting wireless signals that traverse the wall and reflect off a person’s body back to the device. (The emitted radiation is approximately 1/10,000 the amount given off by a standard cellphone.) The device captures these reflections and analyzes them in order to see the person’s silhouette.
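The basic geometry of this analysis is simple: the longer a reflection takes to return, the farther away the reflecting surface is. The short Python sketch below illustrates only that relationship; it is a hypothetical example, not the device’s actual signal-processing code.

```python
# Illustrative sketch (not the CSAIL device's code): converting a
# reflection's round-trip delay into the distance of the reflector.
C = 3e8  # speed of light, in meters per second

def reflection_distance(round_trip_delay_s: float) -> float:
    """Distance to a reflecting surface, given the round-trip delay.

    The signal travels out to the reflector and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_delay_s / 2

# A reflection arriving 40 nanoseconds after transmission corresponds
# to a reflector about 6 meters away.
print(reflection_distance(40e-9))  # 6.0
```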
The key challenge, however, is that all individuals, and for that matter all body parts, reflect the same transmitted signal, so the reflections arrive mixed together. That raises the question: how can these reflections be analyzed to distinguish between different limbs, much less entire humans?
“The data you get back from these reflections are very minimal,” says Katabi. “However, we can extract meaningful signals through a series of algorithms we developed that minimize the random noise produced by the reflections.”
The technology operates in two stages. First, it scans 3-D space to capture wireless reflections off objects in the environment, including the human body. Because only a subset of body parts reflects the signal back at any given moment, the device then monitors how these reflections vary as the person moves through the environment and intelligently stitches them together across time, reconstructing the silhouette as a single image.
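As a rough intuition for that stitching step, the hypothetical Python sketch below assumes each time step yields a sparse 2-D heatmap of reflection power and combines many such frames into one composite image; the paper’s actual algorithm works on raw radio measurements and is considerably more sophisticated.

```python
# Illustrative sketch of the stitching idea, under an assumed data
# layout (one sparse reflection heatmap per time step); this is not
# the RF-Capture algorithm itself.
import numpy as np

def stitch_silhouette(frames: list[np.ndarray],
                      noise_floor: float = 0.1) -> np.ndarray:
    """Combine sparse per-frame reflection heatmaps into one image.

    Each frame is mostly empty, because only a subset of body parts
    reflects the signal back toward the device at any given moment.
    """
    composite = np.zeros_like(frames[0])
    for frame in frames:
        # Suppress weak, noise-like returns before stitching.
        cleaned = np.where(frame > noise_floor, frame, 0.0)
        # Keep the strongest reflection observed so far at each cell.
        composite = np.maximum(composite, cleaned)
    return composite

# Synthetic demo: ten frames, each showing ~20 percent of the scene.
rng = np.random.default_rng(0)
frames = [rng.random((6, 6)) * (rng.random((6, 6)) > 0.8)
          for _ in range(10)]
print(stitch_silhouette(frames))
```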
To differentiate between people, the team then repeatedly trained and tested the device on different subjects, using metrics such as height and body shape to create a concrete “silhouette fingerprint” for each person.
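To make the fingerprinting idea concrete, here is a hypothetical nearest-neighbor sketch in Python; the feature values and matching rule are invented for illustration, and the team’s actual classifier and features may differ.

```python
# Illustrative sketch (hypothetical features and values): identifying a
# person by matching silhouette-derived metrics against enrolled
# "silhouette fingerprints".
import numpy as np

# Assumed enrolled fingerprints: name -> (height in m, shoulder width in m)
FINGERPRINTS = {
    "alice": np.array([1.65, 0.38]),
    "bob": np.array([1.82, 0.46]),
}

def identify(features: np.ndarray) -> str:
    """Return the enrolled person whose fingerprint is nearest."""
    return min(FINGERPRINTS,
               key=lambda name: np.linalg.norm(FINGERPRINTS[name] - features))

print(identify(np.array([1.80, 0.45])))  # -> "bob"
```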
Looking ahead
The researchers have already recognized the technology’s commercial potential: team members are in the process of spinning out a product called Emerald that aims to detect, predict and prevent falls among the elderly. In August, the team presented Emerald to President Obama as part of the White House’s first annual Demo Day.
“In the same way that cellphones and WiFi routers have become indispensable parts of today’s households,” says Katabi, “wireless technologies like this will help power the homes of the future.”
Author: Adam Conner-Simons is a Communications Coordinator at Massachusetts Institute of Technology (MIT).
Image: An attendee tries out Sony’s Morpheus virtual reality headset at the 2014 Electronic Entertainment Expo, known as E3, in Los Angeles, California June 10, 2014. REUTERS/Kevork Djansezian