5 strategies to activate your agency and stay relevant in the age of AI
- As Artificial Intelligence's use cases expand, we face a reality in which human agency erodes and decision-making is instead exported to machines.
- To prevent this from happening, each of us can take steps in our own lives to preserve our agency and decision-making capacity.
- Thinking through the "Complex Five" of knowns and unknowns is a sure-fire way to prepare for the age of AI.
Today, Artificial Intelligence (AI) seems to be the answer to everything, irrespective of the question. AI’s potential for profound benefits comes face-to-face with existential risks. More nuanced than human extinction alone, these risks challenge our values, freedoms, even the trajectory of civilization.
Algorithmic control, a creeping substitute for human judgment, subtly infiltrates our lives. It influences decisions about everything: news feeds and job prospects, beliefs and allegiances. This erosion of agency and choice, gradual and often invisible, deserves far more attention than stereotypical doomsday scenarios.
Evolutionary pressure prioritizes relevance, and the pressure is now on us to make more relevant decisions. But how?
The “Complex Five”: know your unknowns
We must understand the different types of uncertainty to anticipate our future relationships with AI.
Known knowns: Things we know that we know, like “the sun rises in the morning and sets at night.” For these, we use Michele Wucker’s definition of “Gray Rhino.” There is no uncertainty with Gray Rhinos; we might treat them as unknown, but they are certain.
Unknown knowns: Things we think we know, but find we don’t understand when they manifest. For example, rising ocean temperatures and acidity levels created perfect conditions for jellyfish population growth, and the resulting blooms clogged the cooling systems of nuclear reactors around the world, forcing shutdowns. Situations we believe we understand can become complex as small changes drive larger, less predictable impacts. To describe such unknown knowns, Postnormal Times uses the term “Black Jellyfish.”
Known unknowns: Things we know we don’t know, including new diseases, impacts of climate change and mass human migration. These are obvious, highly likely events, but few acknowledge them. We call these known unknowns “Black Elephants,” based on a term attributed to the Institute for Collapsonomics.
Unknown unknowns: Things that we don’t know that we don’t know. For these unpredictable outliers, we use Nassim Nicholas Taleb’s “Black Swans.”
Butterfly Effects: The flapping wings of one majestic insect bring these animals together. The “Butterfly Effect,” described by meteorologist Edward Lorenz, captures how small changes can have significant and unpredictable consequences. To illustrate, Lorenz described a butterfly flapping its wings influencing tornado formation elsewhere.
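For readers who think in tables, the first four members of the Complex Five form a 2×2 matrix of awareness versus understanding. A minimal sketch of that matrix (the dictionary layout and function names are our own illustration, not part of the original frameworks):

```python
# Illustrative only: the four animal metaphors arranged as a 2x2 matrix
# keyed by (awareness, understanding). The pairings follow the article's
# taxonomy; the data structure itself is an assumption for illustration.
COMPLEX_FIVE = {
    ("known", "known"): "Gray Rhino",         # certain and visible, yet ignored
    ("unknown", "known"): "Black Jellyfish",  # thought understood, cascades unexpectedly
    ("known", "unknown"): "Black Elephant",   # obvious threat few acknowledge
    ("unknown", "unknown"): "Black Swan",     # unforeseeable, extreme impact
}

def classify(awareness: str, understanding: str) -> str:
    """Return the metaphorical animal for a degree of uncertainty.

    The fifth member, the Butterfly Effect, is not a quadrant of its own:
    it is the amplifier that can turn any quadrant into another.
    """
    return COMPLEX_FIVE[(awareness, understanding)]
```

Laying the metaphors out this way makes the asymmetry visible: only one quadrant (known knowns) is certain; the other three differ in whether it is our awareness or our understanding that is missing.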
All these degrees of uncertainty share a common trait: absence of evidence is not evidence of absence.
Responding to AI’s “Complex Five”
On safari, the “big five” (buffalo, leopards, lions, elephants and rhinos) are dangerous because of their strength and size. In our disruptive world, the Complex Five are the five most important animals in our “hunt” for big disruptors: rhinos, jellyfish, swans, elephants and butterflies.
By filtering the future of AI beyond the extreme dichotomies, we can adapt our responses to the spectrum of uncertainties and develop strategies for staying relevant in the age of AI.
1. Beware of charging rhinos
Imagine Wucker’s Gray Rhino: What probable, visible and high-impact AI outcomes do we ignore, despite the evidence they are already charging at us?
Disinformation: AI-powered deepfakes and misinformation are rapidly growing, threatening facts, democracies and mental health. Brexit, recent US elections and the COVID-19 pandemic have already demonstrated how social media can dent democracy.
Skills and reskilling: Today, routine cognitive tasks are being automated. Artificial narrow intelligence already has incredible capabilities which excel at precisely defined tasks, outperforming humans in specialized areas. To build skills that machines cannot quickly emulate, we must replace mechanical transfers of knowledge with human-centric capabilities like critical thinking and emotional intelligence.
With Gray Rhinos, responses often fall short because decisions come too late. To avoid being trampled, anticipate the impacts, rather than muddling along and panicking when it’s too late.
2. Don’t get stung by the jellyfish
Black Jellyfish, a term used by Postnormal Times, indicate hidden, low-probability events with high potential impact. While the initial situation may seem predictable, Black Jellyfish grow into something that defies our imagination.
Info-ruption: What will be the cascading effects of information’s disruption? How do weapons of mass disinformation threaten society’s cohesiveness? Info-ruption could be the primary weapon in future wars, determining the future of humanity.
Scaling bias: AI’s wholesale amplification of discrimination through bias can reverberate across society.
Fusion of AI and BioTech: The intersections of AI, biology and technology could challenge the status of humans as dominant beings. What defines sustainable humanity?
Info-ruption is an arms race. As technology amplifies inaccurate information and bias, we must develop equally powerful tools (and mindsets) to fight these.
To respond to AI’s Black Jellyfish, we need to consider snowballing effects by asking how these reverberations could cascade further, and even become irreversible. Ask “What if this expanded larger than expected?” and “What else might this impact?”
3. Address the elephant in the room
Black Elephants are obvious threats that few are willing to acknowledge. They are similar to Gray Rhinos, except that for now the elephant is standing still, while the rhino is already charging. When Black Elephants are discussed, conflicting views translate into confusion and inaction.
Reinventing education: Our knowledge-driven education models will produce a massive number of people who cannot keep up with our nonlinear, ever-changing world. The current AI debate neglects the critical need to reimagine education. AI threatens not by its existence, but by our education systems failing to adapt.
Deskilling decision-making: As we delegate to AI systems, our decision-making capacities erode, causing us to lose the habit of making decisions ourselves. As algorithms increasingly impose their decisions on us, we lose opportunities to exercise agency.
Black Elephants require mobilizing action, aligning stakeholders and understanding the changes throughout our complex systems. In specific situations, own your response. Don’t let Black Elephants blindside you, or they will morph into Gray Rhinos and charge.
The era of techistentialism: reinstating agency
Today, humanity faces both technological and existential conditions that can no longer be separated. We define this phenomenon as “Techistentialism.”
Through AI, technology is challenging us in strategic decision-making, a realm historically specific to humans. Here, technology confronts the existential dimension, as we stand on the edge of our free will.
Jean-Paul Sartre powerfully articulated the human condition: “existence precedes essence,” whereby our agency emerges through choice. But if technology is determining outcomes on our behalf, our agency is curtailed and our choices may be beyond our control.
Techistentialism is our attempt to apply this philosophical perspective to sense-making and decision-making in our contemporary technocratic environment.
Machines don’t need to become superintelligent in order to challenge us. The issue at hand is understanding the nature of our own capabilities in relation to a machine’s computational rationality.
We should not underestimate the severity of deskilling. As we delegate our decision-making capabilities to algorithms, reliance may slip into dependence.
The true existential risk is not machines taking over the world, but the opposite, where humans start operating like idle machines — unable to connect the emerging dots of today’s complex world.
Reinventing education, from the playground to the boardroom, is now an existential priority. We need to form new relationships with inquiry, experimentation, failure and creativity to help us problem-solve out of the existential risks we face.
4. Build resilience for Black Swans
Taleb’s Black Swans are unforeseeable but extremely high-impact events. The issue is that we don’t know what we don’t know. Even for AI, the odds of these rare events and their runaway chain reactions aren’t computable.
Artificial General Intelligence: What future technological developments are imaginable (or unimaginable)? Is reaching AGI really possible, and what would be the ramifications?
Superintelligent AI systems: What happens if our creation surpasses combined human intelligence and outgrows our ability to control it? Could it pursue goals that pose an existential threat to our species?
Extreme catastrophic failures: Cross-impacts stemming from interacting AI systems can lead to drastic and irreparable outcomes.
Responses to Black Swans include building resilient foundations and paying attention to rare events with profound impacts. However unpredictable Black Swans are, we can still be anticipatory, while implementing guardrails for the randomness of our world.
Ask not what AI might do to humans but what humanity will choose to do in relation to AI. Look for the nonobvious. Accept randomness. Be aware of cognitive bias as the modern world becomes dominated by very rare events. When Black Swans appear, rise up from the devastation.
5. Expect the unexpected from our majestic butterfly
The butterfly effect is liminal: It can mutate into the other animals. How do the Complex Five snowball as they collide?
Systemic disruption is a breeding ground for butterfly effects, and not preparing comes at a high cost. Our global systems, from food to energy, are interdependent. Impacts are not siloed; neither should be our approach to assessing AI risks and future-preparedness.
AI is developing quickly, and the goalposts to remain relevant are constantly moving. Anything we think we know today in relation to AI will change tomorrow.
The best response to AI’s Butterfly Effect is to build resilience, enact adaptive strategies and expect the unexpected.
Applying our Complex Five matrix to AI can help us plan possible responses. By recognizing the degrees of uncertainty, we can better prepare for a range of unknown futures and surprises ahead.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.