What does your phone tell you about yourself?
Justine Cassell
Associate Vice-Provost for Technology Strategy and Impact, Carnegie Mellon University

This blog is part of a series on the Top 10 Emerging Technologies 2014.
The term “quantified self” varies in definition depending on who is using it. In the context of emerging technologies, what we mean by quantified self is the use of sensors to capture people’s behaviour, and of machine-learning algorithms to interpret that behaviour and reveal our underlying psychological and physical states.
Some of these sensors have existed for a long time, but only fairly recently have they become so cheap that people now carry a sophisticated collection of them around in their pockets in the form of a cell phone. Meanwhile, machine learning has reached the point where combinations of data from those sensors can be used to predict a person’s underlying state, such as depression, fatigue or overheating, among many others.
For example, imagine that your great aunt, who is in her 70s, works at a small grocery shop in a village a few miles from where she lives. Her cell phone data shows that every day she leaves home, stops by the local cafe for a cup of tea, and then makes a phone call as she walks a couple of miles to work. After work, she makes a call to set up a visit with a friend, stops at a shop on the way to pick up a cake, visits her friend and then goes home.
As a close relative, you have access to her cell phone records and to data from the sensors on her phone, such as GPS. Let’s imagine two scenarios. In the first, one day you notice that she only goes from home to work and back, with an occasional visit to the supermarket, but that’s it. In the second, she doesn’t leave the house at all. What would those situations tell you?
In the first case, her behaviour is characteristic of depression. She goes to work because she still has to earn a living, but she reduces her social contact to virtually none. You might want to check in with her and make sure that she’s okay in terms of her mental state.
In the second case, you would be very worried about her physical state. You would want to check on her quite urgently or ask a neighbour to visit to make sure that she hasn’t fallen.
Access to all of these sources of data allows machine-learning algorithms to check your aunt’s activity levels against her own normal behaviour, and to compare them with how the general population behaves when well and unwell. Even without your intervention, the cell phone might call you and say: “Check in on your aunt” or “Check in on your aunt right this minute”.
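To make that concrete, here is a minimal sketch – purely illustrative, not the actual system described above – of how a program might flag a day that departs from someone’s personal baseline. The place counts and thresholds are invented assumptions.

```python
# Illustrative sketch only: compare today's activity (places visited,
# derived from phone GPS) against a personal baseline. All numbers are
# made-up assumptions, not parameters of any real system.

from statistics import mean, stdev

def baseline(daily_place_counts, window=30):
    """Mean and standard deviation of places visited over a recent window."""
    recent = daily_place_counts[-window:]
    return mean(recent), stdev(recent)

def check_activity(today_places, daily_place_counts):
    """Flag today's activity relative to the personal baseline."""
    base_mean, base_std = baseline(daily_place_counts)
    if today_places == 0:
        return "urgent: no movement detected today"           # possible fall or emergency
    if today_places < base_mean - 2 * base_std:
        return "check in: activity well below her usual routine"  # possible social withdrawal
    return "normal"

# Example: a history of roughly five places a day, then a day with only two stops.
history = [5, 6, 5, 4, 5, 6, 5, 5, 4, 6] * 3
print(check_activity(2, history))   # -> check in: activity well below her usual routine
print(check_activity(0, history))   # -> urgent: no movement detected today
```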
These sensors can also be used to help us understand ourselves. For example, right now I’m wearing a Jawbone UP. The Jawbone contains a single sensor – an accelerometer. It measures how much I move during the day and how much I move at night, and uses that data to infer how many steps I take during the day, how long I sleep for, and whether I’m in deep or light sleep during the night.
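As a toy illustration – not Jawbone’s actual algorithm – here is roughly the kind of inference a single accelerometer supports: counting steps from peaks in movement intensity, and labelling sleep epochs as light or deep according to how much the wearer moves. The thresholds are made-up assumptions.

```python
# Toy heuristics only; thresholds are invented and not Jawbone's method.

def count_steps(accel_magnitudes, threshold=1.2):
    """Count upward crossings of a movement-intensity threshold as steps."""
    steps = 0
    above = False
    for m in accel_magnitudes:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps

def label_sleep(epoch_movement, deep_cutoff=0.05):
    """Label each sleep epoch 'deep' when movement is nearly absent."""
    return ["deep" if m < deep_cutoff else "light" for m in epoch_movement]

print(count_steps([1.0, 1.5, 0.9, 1.4, 1.1, 1.6]))   # -> 3
print(label_sleep([0.20, 0.03, 0.01, 0.10]))         # -> ['light', 'deep', 'deep', 'light']
```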
I’m using the Jawbone data to draw my own inferences about my sleep based on my exercise. I know what time I exercise during the day and I know how well I slept. I happen to have read an article saying that if you exercise too late in the day, it keeps you awake. So when I see that I’m not sleeping well, I might try to move my exercise to earlier in the day.
Every morning I look at that chart of when I’m in light sleep and when I’m in deep sleep, and I try to draw patterns from it. The pattern most salient to me is that I am always in deep sleep from 05.30 to 06.45, unless an alarm wakes me during that period. That means that if I set my alarm for 06.15, I’m depriving myself of deep sleep. Instead, I can shift my sleep, going to bed later and waking up later, so that I get that full period of deep sleep.
More and more people are wearing similar devices from Fitbit, Jawbone, BodyMedia and others, or using software with similar goals on their smartphones. More and more cell phone apps are being released that, for example, let you log the food you eat and tell you its calorie count, or point you to a healthy restaurant near where you’re standing. The industry is taking advantage of this growing interest, and the growing interest is in turn fed by the widening variety of products the industry offers.
These apps don’t call your doctor. They don’t yell at you. They simply show you what you’ve done and give you feedback, perhaps with some positive-sounding tips on what you’re capable of. That’s really enough for many people to change their behaviour. So these are what we call objects to think with, objects of self-reflection. Being confronted with our own behaviour allows us to change it.
As the analysis of this kind of data gets smarter, it will become more helpful. Sleep data, for example, is particularly interesting for older people, since sleep disruption increases with age. While the number of such systems is increasing, they are not always aimed at individuals. There are apps that gather all kinds of information about us, such as our ad choices and the kind of news we read, and send it back to the companies behind the apps. One of the goals of the Quantified Self movement is to encourage people to be the producers and the consumers of their own data, rather than, or perhaps in addition to, third parties.
I’m the co-director of a brand-new partnership between Yahoo! and Carnegie Mellon University to do massive personalization of mobile devices. We’re able to do that now because sensors have become so cheap, so many people have smartphones, and the fields of machine learning and human-computer interaction have both advanced by leaps and bounds.
Your mobile device should show you news that you’re interested in and news that you might be interested in, but not news that you’re never going to be interested in. When you open Yahoo! News, for example, you don’t want to waste your time paging through items that will never interest you; your mobile device should be able to see what you have read in the past and infer from that what you’d like to read in the future.
Since your mobile device also knows where you are, it knows that a story in which you wouldn’t normally be interested will become relevant if it’s happening nearby. And if you have a ticket to go somewhere, then you’re going to want to read news about the place that your calendar says you’re about to travel to.
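Here is a simplified sketch of that personalization idea, using invented data and weights: score each story by how well its topics match what the reader has clicked on before, and boost stories tied to places the reader is in or about to visit.

```python
# Simplified sketch of interest-plus-location ranking; data, field names
# and weights are illustrative assumptions, not any company's actual system.

from collections import Counter

def topic_profile(read_history):
    """Build a topic-frequency profile from previously read stories."""
    counts = Counter(topic for story in read_history for topic in story["topics"])
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def score(story, profile, user_locations, location_boost=0.5):
    """Interest score = topic match with past reading, plus a boost for local stories."""
    match = sum(profile.get(t, 0.0) for t in story["topics"])
    if story.get("location") in user_locations:
        match += location_boost
    return match

history = [{"topics": ["technology", "health"]}, {"topics": ["technology"]}]
profile = topic_profile(history)
stories = [
    {"title": "New fitness tracker released", "topics": ["technology", "health"]},
    {"title": "Local road closure", "topics": ["traffic"], "location": "Pittsburgh"},
    {"title": "Celebrity gossip", "topics": ["entertainment"]},
]
ranked = sorted(stories, key=lambda s: score(s, profile, {"Pittsburgh"}), reverse=True)
print([s["title"] for s in ranked])
# -> ['New fitness tracker released', 'Local road closure', 'Celebrity gossip']
```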
Increasingly, we will have access to data about ourselves and will share those data with people who can help us, but we will own them. You can imagine that an insurance company would be very interested in keeping you healthy, so those kinds of apps, as well as a step monitor, might be distributed with your new health insurance. That opens up another topic – your personal medical records.
In the same way that young people have very different attitudes to privacy, I believe we’re going to see less concern about the privacy of our personal health data. At the same time, a savvier consumer attitude towards healthcare is going to lead people to want to be in charge of their own healthcare, and therefore to hold their own personal health records.
Likewise, it’s going to be less and less possible for companies to hold onto our data and keep it from us. There’s simply not going to be tolerance for somebody else owning our data and denying us access to it.
At the moment, the data we’re talking about will be gathered by sensors in wristbands, smartphones and connected watches, but that will change. An earpiece for the hard of hearing is nothing more than a sensor stuck in our ears, and a contact lens is nothing more than a kind of sensor stuck in our eyes. Increasingly, those sensors are going to move into clothing or into accessories such as glasses, like Google Glass, and from there into our bodies.
The skin is a barrier that, for the moment, we’re unwilling to give up. It’s a frontier that we’re hanging onto, but as people realize that they are already cyborgs in some way – that they are already using assistive devices – I believe that the skin will become less and less of a barrier to technology, and the devices that allow us to quantify ourselves will move into the body.
Justine Cassell is Director of the Human-Computer Interaction Institute at Carnegie Mellon University in the US, and a Member of the World Economic Forum’s Global Agenda Council on Emerging Technologies.
Image: A woman takes a picture of herself in Shexian county, China. REUTERS/Stringer