5 lessons from the past for the Fourth Industrial Revolution
Paranoid android? We shouldn't let science fiction steer the debate on new technologies. Image: REUTERS/Francois Lenoir
Technologies develop in silos, with little connection and almost no lessons flowing from one to the other. One of the hopes of the Fourth Industrial Revolution, a wave of digital-age innovation, is that it will promote interconnectedness and the cross-fertilisation of ideas, so we won’t continue to make the mistakes of the past.
So what can Artificial Intelligence (AI) learn from Genetically Modified Organisms (GMOs), or robotics from biotech, or gene editing from nanotech?
The way a technology is developed and introduced is a great place to start, so with that in mind, here are five key lessons from the introduction of nanotechnologies:
“We wish we’d spent less time worrying about the ‘ology’ and more trying to figure out what we really had to worry about and what we didn’t. We probably lost ten years because of this.” This comes from a Greenpeace campaigner reflecting on his work in nanotechnology governance, and it highlights one of the problems with the adulation of a particular ‘ology’ for the purposes of scientific kudos and investment.
- Nanotechnology, Synthetic Biology, Biotechnology, Information Technology - these are brands, usually designed to attract funding and investment into academia, as well as being held up by governments keen to show they’re supporting the “next big thing”. Nanotech, which used to be called simply chemistry or materials science, created a whole brand around extreme novelty. Synthetic Biology used to be Systems Biology, much less sexy for funders, and since being dubbed “GM on steroids” it is now falling under the simpler banner of gene editing.
- The creation of the “ology” of nanotech led to discussions of risk that were driven by the brand of nanotechnology, and not the science behind how new materials actually behave or the potential hazards and risks they present. As a result, progress has in all likelihood been slower than it could have been. For instance, nanomaterials are conveniently defined as having features smaller than 100 nanometers, but this size-specific cut-off that’s so important to “brand nano” is a remarkably poor predictor of whether something will present a risk or not.
The way funding works means that, in order to get the money, scientists and businesses have to massively exaggerate the potential benefits of their ‘ology’: ‘an end to hunger’, ‘electricity too cheap to meter’, ‘the end of disease’. The media love it, funders get excited and the money flows.
But this “economy of promises” is just another form of fake news, with potentially damaging repercussions:
- The short-term reality can’t possibly live up to the hype, and the trustworthiness of the technology is tarnished. For example, the failure of the US National Cancer Institute’s 2004 aspiration to eliminate death and suffering from cancer with nanotechnology by 2015 might have left many a little disappointed.
- Regulators have to start early to consider legislation around the risks and hazards of a new technology, and the only place they can start is with what scientists and businesses say they will deliver. Those developing regulation in the life sciences have found, with hindsight, that this focus is problematic: consideration of nano-related risks and regulation, driven by the hype and compounded by the thrall of the ‘ology’, was confusing and distracted from exploring the real risks and hazards.
- So what is the obsession with creating life-like robots, linked to the hype about ‘The Singularity’ where within 30 years robots and people will merge and we all become ‘post-human’, doing to our ability to develop robotics for useful purposes? What will be the result of the hype about the precision of CRISPR and gene editing when it becomes clear it is not a panacea for every genetic disease? Are we wasting time debating the ethics of science fiction when we could be discussing the not inconsiderable impact of the reality?
New technologies, naturally, need new names, new metaphors to explain them, and examples to demonstrate where they might lead and what problems they might solve. But the language chosen has repercussions of its own.
- Metaphors matter - many technologies use military, engineering or IT-based metaphors of control, dominance over nature or scientific precision which don’t reflect reality, certainly in the early days. But this love of the macho, domineering metaphor brings with it unsettling comparisons and is not shared by everyone. Descriptions such as ‘post-human’ or ‘bio-hybrid human’, when used to refer to someone with a prosthetic limb or a new type of artificial heart, may be fun for the sci-fi buffs in robotics who use them blithely, but might not do much for the patient going through a traumatic experience.
- It may even have financial repercussions: the language of IT in the life sciences sends tempting signals to investors that this technology will deliver the quick wins and near-term profits they have become rather fond of in ICT, when the reality is far from that.
- In Europe it’s all about GMOs. Scientists, businesses and policy-makers are all worried sick about the sort of public backlash which affected the introduction of agricultural genetic modification. But public consultation about nanotech and other technologies shows that society doesn’t have a widespread fear of technology at all. On the whole, people are supportive, but they want to know it’s being used for beneficial reasons and without causing more problems than it solves.
- This doesn’t stop scientists, businesses and policy-makers thinking that any mention of technology will result in placards and boycotts. So instead of engaging early and collaboratively, they get defensive and confrontational, which in fact builds just the sort of distrust they are trying to avoid.
- Business initially embraced nano, with the term used as a selling tool on packaging and in advertising. Consumers bought, and continue to buy, such products. (Labelling of nano on cosmetics in the EU in 2013 appears to have had no effect whatsoever on sales.) Gradually, however, “nanophobia phobia” took over. Nano was taken off packs and company websites for fear of a backlash, despite quite positive evidence from many public dialogues about views of the technology in use, and despite NGOs engaging constructively on governance.
- A consultation for the German government’s NanoKommission found that “most of the aversion to nano seems to be among groups of experts, academics, and not the public. Even NGOs are in some areas more accepting than businesses”. When consumers in Germany were asked why they thought companies might have removed the references to nano, they concluded: “The nano bit wasn’t working so it was not in the product any more, or there never was any nano and they had to take it off the label, or it was something for the future, but it wasn’t in the product now.” Not one concluded that a nano component was still used in the product but that the companies simply didn’t wish to talk about it. An NGO wryly observed: “…companies are so afraid and draw back from using the materials or labelling them. So we either lose out on innovation or we have companies who are breaking the law.”
- Communication matters for trustworthiness: secrecy and avoidance are penalised.
Our increased access to information - fake or otherwise - often makes it difficult to see where real evidence of potential benefit or harm lies. It’s human nature, then, to fall back on our pre-conceived ideas to make sense of the information. Could the sheer quantity of conflicting information, coupled with the human propensity to cherry-pick data to prove our point, cloud our vision about the benefits and also make us miss early warnings about genuine problems with new technologies?
- For example, some feel strongly that "speculative harms are treated as fear-mongering while speculative benefits are allowed to run wild," while others consider that "…the regulatory environment at EU-level has become increasingly characterised by risk avoidance rather than risk acceptance and a preference for social concern rather than science when making risk management decisions." Both are probably true, but as the debate becomes polarised, constructive action becomes more difficult.
- But it is important that, as a society, we do debate the potential benefits and the acceptable risks of technology thoughtfully and wisely. The history of innovation shows us that, whilst we are brilliant and inventive, for “every act of creation and innovation there exists the potential, also, for our undoing”.
- The European Environment Agency’s Late Lessons from Early Warnings reports show that time and again the early warnings of disaster are clear and well signposted, but for various systemic and behavioural reasons we don’t act on them in time to prevent serious harm. It took nearly 100 years to respond to clear signs of the dangers of asbestos, as our cognitive biases held us back. However, that experience did lead to greater concern about the toxicology of nanoparticles, so some lessons were learned.
- Conversely, will the reaction to both the hype and the potential harms of new technologies result in lost benefits and unfulfilled promise? That is possible too.
The hopes and fears of many rest on the technologies of the Fourth Industrial Revolution. Let’s ditch the "post-truth" approach to innovation, learn the lessons of the past and create solutions which deliver empowerment and prosperity for us all.
This was produced by Hilary Sutcliffe of SocietyInside in consultation with some members of the Global Agenda Council on Nanotechnology. She now serves on the Global Future Council on Human Rights.