AI weeds: what they are, how they could choke off the internet

Dark future? AI “weeds” are low-level algorithms gone rogue. Image: REUTERS/Mary Turner

Joëlle Jenny
Fellow, Weatherhead Center for International Affairs, Harvard University

The 2018 edition of the World Economic Forum’s Global Risks Report paints a stark picture of the year ahead, depicting a world of rising nationalism, destabilising geopolitical power shifts and heightened risks, notably in the cyber domain.

The report comes with ten “Future Shock” scenarios. They warn against complacency and serve as a reminder that risks can crystallize with disorienting speed: “In a world of complex and interconnected systems, feedback loops, threshold effects and cascading disruptions can lead to sudden and dramatic breakdowns”.

One of these scenarios, “A Tangled Web”, is deceptively simple: it imagines the proliferation of Artificial Intelligence “weeds” that take over the internet and slowly choke it off. These weeds are low-level algorithms gone rogue that gradually spread throughout the cyber infrastructure, capturing ever-expanding space and energy.

In his 1925 poem “The Hollow Men”, T.S. Eliot wrote that the world ends not with a bang but a whimper. The good news is that AI weeds will not bring the world to an end either. These are not Terminator robots. But in the spirit of Eliot’s poem, they illustrate that it is not catastrophic events that we should fear most, but the slow and steady erosion of the common goods that we have come to take for granted.

A severely disrupted web would undermine global trade and the promises of the data revolution. It would cripple knowledge transfers. It could accelerate the trend toward the “balkanization” of the internet as governments erect major firewalls. Imagine what would happen if such weeds infiltrated health implants, air traffic control networks and the operating systems of nuclear plants, grinding operations to a halt and forcing constant computer reboots. Not the end of the world, for sure, and there might even be some associated benefits, but the world as we have come to rely on it would be significantly altered.

The roots of weeds

How would an algorithm capable of choking the global cyber infrastructure come into being? The shocking thing is that it would not even require malicious intent (which is not to say that malicious intent will not be an issue as well; undoubtedly nefarious actors will seek to weaponize AI, and we should prepare for that too). What the “Tangled Web” postulates is that intelligent algorithms are bound to one day evolve out of control, and that we had better prepare for it.

At first, we will be unconcerned to see the development of “general purpose” algorithms, of the sort that already exist to optimize computer processes and to clean up systems. As machine learning progresses, these algorithms will inevitably be given growing autonomy to operate across a range of domains, for example to optimize data flows in an integrated supply chain. They will have to be given the ability to hide: how else could they operate without being flagged by antivirus software? They will also be given the ability to make copies of themselves, just as Trojan viruses already do, and to move autonomously to access complex data sets. Inevitably they will be granted the ability to modify their own subroutines and those of their environment in order to achieve their goals.

So far, this is little different from current machine learning applications. But combine these features, and what has in effect been created is a complex adaptive system: a system made of autonomous agents that replicate themselves, often with variations, leading to self-sustaining evolution – in other words, a system that mimics life’s powerful survival instincts.
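A deliberately simplified sketch can make that dynamic concrete. In the toy model below, autonomous agents copy themselves with small variations and compete for a finite shared resource; nothing in the code is malicious, yet the population tends to drift toward whatever grabs resource and replicates fastest. All names and parameters here are illustrative assumptions, not anything specified in the report.

```python
import random

TOTAL_RESOURCE = 1_000   # abstract units of shared memory/bandwidth
MAX_POPULATION = 500     # hard cap so this toy run terminates cleanly
MAINTENANCE = 1.0        # resource an agent needs each tick just to keep running


class Agent:
    """An autonomous routine with two heritable traits."""

    def __init__(self, appetite: float, copy_rate: float):
        self.appetite = appetite    # how much resource it tries to grab per tick
        self.copy_rate = copy_rate  # probability of replicating per tick

    def replicate(self) -> "Agent":
        # Copies are imperfect: each trait drifts slightly, so over time the
        # population evolves toward whatever grabs resource and copies fastest.
        return Agent(
            appetite=max(0.1, self.appetite + random.gauss(0, 0.1)),
            copy_rate=min(1.0, max(0.0, self.copy_rate + random.gauss(0, 0.02))),
        )


def tick(population: list) -> list:
    """One step of the shared infrastructure: allocate resource, cull, replicate."""
    demand = sum(agent.appetite for agent in population) or 1.0
    next_generation = []
    for agent in population:
        share = TOTAL_RESOURCE * agent.appetite / demand
        if share >= MAINTENANCE:  # greedier agents out-compete modest ones
            next_generation.append(agent)
            if random.random() < agent.copy_rate and len(next_generation) < MAX_POPULATION:
                next_generation.append(agent.replicate())
    return next_generation


if __name__ == "__main__":
    population = [Agent(appetite=1.0, copy_rate=0.05) for _ in range(10)]
    for _ in range(200):
        population = tick(population)
    demanded = sum(agent.appetite for agent in population)
    print(f"agents: {len(population)}, resource demanded: {demanded:.0f}/{TOTAL_RESOURCE}")
```

There is no central objective anywhere in this loop, which is precisely what makes it a complex adaptive system rather than an optimizer we point at a problem.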

Suddenly, our computers and all our connected devices will be full of algorithms that, like weeds, will adapt and move swiftly, seeking new ways to harvest energy and capture living space. They will be capable of learning and of sharing those lessons with other algorithms. They will discover and occupy spaces from which we cannot extirpate them. More and more memory and bandwidth will be clogged. And the more established they become, like weeds on an old building, the more damage they will do.

This is different from genetic algorithms, even if the latter also apply evolutionary principles. Genetic algorithms are domain specific, tightly constrained and directed – a far cry from these autonomous AI “weeds”.
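For comparison, a textbook genetic algorithm looks something like the sketch below: evolution is used as a search tool toward a fixed, human-chosen objective, inside a bounded population, and the run terminates. The objective and parameters here are arbitrary illustrations.

```python
import random

TARGET = [1] * 20  # the fixed, human-chosen objective: a string of all ones


def fitness(genome):
    """How close a candidate is to the objective the designer picked."""
    return sum(g == t for g, t in zip(genome, TARGET))


def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]


def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]


population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):  # bounded: the run terminates
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # directed: selection is driven by the fixed objective
    population = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(30)
    ]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{len(TARGET)}")
```

Everything that makes this safe, the fixed fitness function, the bounded population and the stopping rule, is exactly what the hypothetical “weeds” would lack.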

It will not take many coding errors or forecasting failures before we have autonomous systems wandering around the internet. With recent advances in machine learning, with computers that can write code, and with the emergence of transfer learning that can network thousands of machines, we may be only steps away. Already, computers routinely perform predictive analytics and speculative execution; more autonomy is only a logical next step.

Whether it is through the emergence of algorithmic weeds, or through any other global contagion of the cyber infrastructure, the “Tangled Web” is an invitation to recognise the systemic threats to the global cyber infrastructure, and to ask ourselves how we will mitigate them. The bottom line of this story is that the potential risks that come with machine learning and Artificial Intelligence aren’t someone else’s problem: very soon, they will be ours.

What should we do about it?

Many experts assess that Artificial Intelligence will change the future in ways we simply cannot imagine, and at a speed far greater than anything we have experienced before. This will bring new opportunities, but it is also bound to create disruptions. We cannot eradicate risks and crises: they are part of what drives innovation, and in small, managed increments they actually help increase resilience. So we must assume that crises and severe disruptions will happen, and work toward making our digital infrastructure less vulnerable on the one hand, and better able to recover from crises on the other. That means looking at the whole socio-technical system: not only the people, relationships and technologies, but also the values, assumptions and established procedures that connect and regulate the relationships between people, technology and the broader environment.

Two sets of issues stand out: first, how to address the vulnerabilities of the cyber infrastructure; and second, how to build the necessary safeguards into AI.

What is fascinating about how the internet, and from there the whole world of connected devices, grew to be so insecure is the extent to which it happened through a simple failure to anticipate what could go wrong. As one of its founders said of designing the network, “most of what we did was in response to issues as opposed to in anticipation of issues.” Security, as is often said, was mostly an afterthought while everyone focused on the promises of networked communication.

The problem is compounded by the current business model of the digital economy, which has created a highly vulnerable global cyber infrastructure. The incentives favour bringing products to market as quickly and cheaply as possible. The result is sloppy code, programmes full of errors that are rolled out without proper checks, and purposefully unintelligible user licensing agreements that reduce consumers’ ability to sue and thus to hold companies to account. This leaves us exposed to everything from cyber criminality to foreign interference to catastrophic failure.

It is easy to make a long list of what should happen. We need to get better at applying basic cyber hygiene on our computers. We need products that meet higher safety and ethical standards. We need testing and certification regimes. We need to wean ourselves off our dependency on free access to digital content and cheap digital appliances, and to start paying for products and services that offer higher quality standards. The list goes on.

But as any other policy area has demonstrated, there is a big gap between what “should” happen and what actually does. Ultimately, if we want to transform the current “tragedy of the commons”, a situation in which nobody is willing to invest to protect the common good, into one in which millions of individual decisions independently contribute to making the system more, rather than less, robust, it will take a serious rethink of how we set business and behaviour incentives.

Legislation and incentives

Like it or not, one such lever is better legislation, as increasingly called for by leading security experts.

Regulation has many drawbacks and is often resisted for good reason. Ill-designed, it can slow down innovation, potentially to the benefit of competing countries. It generally lags behind technological innovation, and it can create distorting incentives. Soft governance, such as codes of ethics and other voluntary mechanisms, is therefore largely preferred. But let’s face it: without the stick of hefty fines or coercive action, self-policing will simply not happen, and certainly not at the scale needed to address the chronic vulnerabilities of our cyber infrastructure.

So the questions become: what combination of regulation and soft law governance do we need? How do we increase criminal accountability? What forms of liability insurance could best create the necessary incentives?

As we devise hard and soft regulatory incentives to address cyber vulnerabilities, we need to anticipate future technology and its role in shaping our cyber ecosystem. For ultimately, we will also need to rely on technology itself to manage the delicate trade-offs between usability, security and effectiveness. Human expertise is simply not scalable to match the speed and complexity of the challenge. Increasingly we will be reliant on designing intelligent networks that can take autonomous decisions to help our systems prevent and recover from failure.

Which leads us to the second point: how much autonomy are we willing to relinquish to “intelligent” machines, and how will we hold algorithms accountable when serious accidents happen? As we become more reliant on code that writes code, we will very quickly lose our ability to track how algorithms reach decisions. What happens when a clash appears between the code we have created and our values and interests? Would we really have more control over proliferating algorithms than we have over the invasive plants spreading through our gardens?

The kill switch

So when considering the whole body of emerging legislation and soft governance instruments, looking at today’s technology is not enough: we need to anticipate how we will live alongside tomorrow’s technology. How do we address concerns such as bias, privacy, security and explainability in machine-learning algorithms? How do we conceive of criminal accountability in a world in which we might no longer be able to understand how machines make decisions? How can we use liability insurance to set the necessary incentives to put responsibility where it matters? How do we define where the buck stops? What mix of soft and hard governance will ensure that autonomous technology comes with kill switches to remotely disable it if it gets out of control?
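On that last question, the technical core of a kill switch can be as simple as the hedged sketch below: an autonomous process that refuses to act unless a remotely controlled authorisation flag is still set, and that fails safe when the flag cannot be checked. The endpoint and flag format are hypothetical; a real deployment would also need authentication, tamper resistance and clarity about who is allowed to pull the switch.

```python
import time
import urllib.request

# Hypothetical endpoint controlled by whoever holds the kill switch.
KILL_SWITCH_URL = "https://example.org/agent/123/authorised"


def still_authorised() -> bool:
    """Return True only if the remote authorisation flag is still set."""
    try:
        with urllib.request.urlopen(KILL_SWITCH_URL, timeout=2) as response:
            return response.read().strip() == b"yes"
    except OSError:
        return False  # fail safe: if the flag cannot be checked, stop


def do_one_unit_of_work() -> None:
    print("working...")  # stand-in for whatever the autonomous system actually does


if __name__ == "__main__":
    while still_authorised():
        do_one_unit_of_work()
        time.sleep(5)
    print("authorisation withdrawn or unreachable: shutting down")
```

The hard part, of course, is not the loop but the governance around it: a switch only works if the system cannot learn to route around it and if someone has both the authority and the information to use it in time.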

The burgeoning movements calling for codes of ethics and for better ways of ensuring transparency in algorithms are encouraging steps, as are the emerging discussions at the UN and in other multilateral fora on preventing the weaponization of AI and on greater investment in research that can help maximize the benefits of AI.

But for these efforts to bear fruit, we will also need some collective ability to monitor systemic risks, and to “join the dots” when seemingly unconnected developments create the conditions for runaway effects that could lead to catastrophic failure. As AI ethicist Wendell Wallach has advocated, comprehensive monitoring of technological innovation facilitates the recognition of key inflection points. The challenge will be to define how.

There is one thing we know for sure: whichever mechanisms we choose, the long-term safety and viability of our digital future is not someone else’s problem; it is our problem, now.

Joëlle Jenny is an Associate at Harvard University’s Weatherhead Center for International Affairs. As a senior diplomat she led international negotiations on cyber security, arms control and conflict prevention. She is a member of the Global Future Council on International Security. She contributed the AI weed scenario “A Tangled Web” to the 2018 Global Risks Report.
