How a Formula 1 pit team could help your doctor treat you better
Gonzalo Viña looks at the potential of big data to change healthcare.
When a top Formula One team is using pit-stop data-gathering technology to help a drugmaker improve the way it makes inhalers for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.
GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.
Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.
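Neither GSK nor McLaren has published the algorithm itself. As a rough illustration of the underlying idea, the sketch below flags anomalous sensor readings using simple statistical process control; the fill-weight target, the threshold and the data stream are all invented for the example, not GSK's actual method.

```python
# Minimal sketch (not GSK/McLaren's actual algorithm): flagging
# possibly leaking canisters with simple statistical process control.
# All sensor values below are synthetic, for illustration only.
import random
import statistics

random.seed(42)

# Hypothetical fill-weight readings (grams) from a production-line sensor.
readings = [random.gauss(26.0, 0.05) for _ in range(100_000)]
readings[5_000] = 25.2  # inject one anomalous, possibly leaking, canister

mean = statistics.fmean(readings)
stdev = statistics.stdev(readings)

# Flag any canister more than five standard deviations below the mean weight.
suspects = [i for i, w in enumerate(readings) if w < mean - 5 * stdev]
print(f"{len(suspects)} of {len(readings)} canisters flagged: {suspects}")
```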
This apparently unlikely venture for McLaren, better known as the team of star drivers Fernando Alonso and Jenson Button, extends beyond its work with GSK. It has partnered with Birmingham Children’s Hospital in a £1.8m project that applies the expertise McLaren uses to analyse data during a motor race to patient information such as heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is using F1 sensor technology to detect neurological dysfunction.
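The software behind the Birmingham project has not been made public. As a loose analogy for telemetry-style monitoring, here is a minimal sketch that checks a stream of vital-sign samples against alert limits, much as race engineers watch live car data; the limits and the samples are hypothetical and are not clinical guidance.

```python
# Illustrative sketch only: telemetry-style monitoring of patient vitals,
# loosely analogous to watching live race-car data. The thresholds and
# the sample stream are invented for this example.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int      # beats per minute
    breathing_rate: int  # breaths per minute
    spo2: float          # blood oxygen saturation, percent

# Hypothetical alert limits; real paediatric limits vary by age.
LIMITS = {"heart_rate": (60, 160), "breathing_rate": (12, 40), "spo2": (92.0, 100.0)}

def check(sample: Vitals) -> list[str]:
    """Return a list of out-of-range vital signs for one sample."""
    alerts = []
    for name, (low, high) in LIMITS.items():
        value = getattr(sample, name)
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

stream = [Vitals(110, 22, 97.5), Vitals(172, 30, 95.0), Vitals(115, 24, 89.0)]
for t, sample in enumerate(stream):
    for alert in check(sample):
        print(f"t={t}: ALERT {alert}")
```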
Few people would contest that big data are going through a period of explosive growth, yet it is anyone’s guess what that will amount to. In healthcare, one measure, by the McKinsey Global Institute in 2013, estimated that making greater use of big data could soon be worth some $100bn annually across the US healthcare system. Another, in PLOS Biology, the US Public Library of Science journal, forecast that data generated by genomics alone will be on a par with that generated by astronomical science, YouTube and Twitter by 2025.
Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Its greater potential, however, lies in fine-tuning research and clinical trials, and in providing new measurement capabilities for doctors, insurers, regulators and even patients themselves. The possible applications seem almost limitless.
BC Platforms, a Swiss-Finnish company that manages clinical and genomic data with its own analytics platforms for academics, healthcare providers and life science companies, recently signed an agreement with Microsoft Azure and Codigo46 of Mexico to create the largest biobank in Latin America. It aims to take genomic data from 1m people over the next three years.
Tero Silvola, BC’s chief executive, says the problem is in “making sense” of the future biobank’s 100m data points, as well as those of the other 19 biobanks around the world, if data are to support drug companies in their quest for ever more personalised medicine.
Stephen Cleaver, head of informatics systems at Novartis Institutes for Biomedical Research in Cambridge, Massachusetts, says he sees “almost exponential growth” in gene sequencing in the years ahead. This will be helped in no small measure, he adds, by continued falls in data storage costs and improving computing power.
“We are doing stuff today we couldn’t even dream of five years ago,” he adds. “Our work is becoming increasingly data driven. We are now taking a direction towards deep learning, which is a subfield of artificial intelligence, in which we will be able to detect and understand hidden patterns in these huge data sets.”
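Novartis has not published this work, but the kind of pattern detection Cleaver describes can be illustrated with a toy example: a small feed-forward network (written here in PyTorch, an assumption of this sketch, not a statement about Novartis's tools) that learns a hidden interaction in synthetic data. The data, the labels and the architecture are all invented for illustration; the point is that the network recovers a non-linear pattern a simple linear model would miss.

```python
# Toy sketch (not Novartis code) of deep-learning pattern detection:
# a small feed-forward network trained on synthetic "expression"
# profiles whose label depends on a hidden feature interaction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 1,000 samples x 50 features. The label depends on the
# sign agreement of features 3 and 7, a pattern linear models miss.
X = torch.randn(1000, 50)
y = ((X[:, 3] * X[:, 7]) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")  # well above chance: pattern found
```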
Pointing to such possibilities, the Boston-based biotech company PureTech says its adaptive computer game, Akili, provides a “statistically significant improvement” in trials with children with attention deficit disorder. Meanwhile, Isabel Torres, head of access to medicines at the Japanese pharmaceutical company Takeda, says the company uses data readings from mobile phones to deploy mobile health clinics in some of the poorest parts of Kenya.
Such excitement aside, handling the personal details of millions of people creates huge data quality, privacy and security problems. Doug Given, director of Health2047, a San Francisco-based health systems consultancy, says much of the data gathered to date will be of limited use to healthcare providers.
“The risk is in big bad data,” he says. “Take BMI [body mass index] data. We don’t know how it was measured. Did people have their clothes on?”
Mr Given adds: “Also, data gathered 10 years ago by a brain scan is infinitely less detailed than what you would get today. There is a real issue around quality.”
The OECD said last year that governments needed better data governance rules, given the “high variability” among OECD countries in how they protect patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates see this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks whether the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.
Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”.