
What is the 'perverse customer journey' and how can it tackle the misuse of generative AI?

The democratization of generative AI has led to significant misuse by bad actors. Image: Solen Feyissa

Henry Ajder
Founder, Latent Space Advisory
  • AI-generated content has become increasingly realistic, advancing in areas such as voice cloning and personalized avatars at an astonishing speed.
  • This supercharged dynamic has led to many exciting creative, commercial and prosocial applications, but it has also catalysed massive growth in the misuse and weaponization of generative AI.
  • The 'perverse customer journey' - a mapping of the bad actor's end-to-end experience of misusing generative AI - helps us understand how misuse happens and where we can most effectively intervene by adding friction.

Generative AI’s meteoric rise can be hard to grasp. In 2019, I authored the first research mapping the landscape of deepfakes and AI-generated content. A major finding was that the number of deepfakes had almost doubled, from 7,964 in December 2018 to 14,678 in July 2019. Contrast this with August 2023, when one study estimated that a staggering 15.47 billion AI-generated images had been produced in just over a year. This number would be much higher today, particularly if it included AI-generated videos, voice audio and music.

The generative AI explosion was sparked by the technology becoming radically more accessible. Tools that were embryonic in leading AI research labs a few years ago are now available on thousands of user-friendly apps and services. At the same time, AI-generated content has become increasingly realistic, advancing in areas such as voice cloning and personalized avatars at an astonishing speed.

This supercharged dynamic has led to many exciting creative, commercial and prosocial applications. It has also, however, catalysed massive growth in the misuse and weaponization of generative AI. I saw the early warning signs of this shift during my 2020 investigation into an AI ‘nudification’ bot on Telegram, which enabled users to easily create hundreds of thousands of synthetically stripped images of women, mostly of victims they knew in real life.

In 2020, Telegram bots made using AI nudification tools easier and more accessible Image: Automating Image Abuse, Sensity AI

Fast forward to 2024: schools are facing an epidemic of students creating deepfake pornography of their classmates, deepfake fraud attempts increased tenfold between 2022 and 2023, and AI-generated political disinformation has caused significant incidents in this huge year for global elections. As generative AI has democratized, weaponizing it has never been easier.

Addressing this surge of generative AI misuse is an urgent challenge for governments, companies, NGOs and media organizations. But to avoid a series of disjointed and isolated responses from different stakeholders, we need a unifying framework to understand the end-to-end process of how generative AI is misused.

So, how can governments and businesses develop this understanding and better respond to the misuse of generative AI? The first step is to map the 'perverse customer journey.'


Charting the 'perverse customer journey'

Creating a customer journey is a well-established practice in marketing and user experience (UX) design. It describes a consumer’s experience of becoming a loyal customer, from forming brand awareness to making a purchase and becoming a brand advocate. A key reason companies map this process is to identify points of friction that might stop a consumer from becoming a loyal customer, such as low brand visibility or a frustrating website experience.

The 'perverse customer journey' flips this model on its head. Instead of aiming to reduce friction, we can map the experience of a bad actor misusing generative AI to identify critical stages where we can increase it.

To give an example, let’s take someone who decides they want to create deceptive voice audio of a politician. A simplified set of steps they would take on the 'perverse customer journey' includes:

• Identifying a specific voice-cloning tool

• Sourcing training data and generating the compromising voice audio

• Publishing the compromising synthetic voice audio

At each of these steps, various stakeholders play a facilitating role. Search engines, app stores, advertising and online forums may help them identify a voice cloning tool that can be weaponized. From here, open-source libraries, cloud-computing services and voice cloning apps provide the resources and tools needed to generate the deceptive synthetic voice audio. Finally, social media platforms, news organizations and messaging apps help the bad actor spread the synthetic voice audio to audiences.

By understanding the 'perverse customer journey' from ideation to execution, we can identify where interventions could be introduced, such as de-ranking search results, withdrawing digital payment support and introducing biometric verification and watermarking outputs by default. If multiple friction points are deployed with an awareness of how they interlink and compound, we can make misusing generative AI an intentionally more painful process.
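To make the framework concrete, the three-stage journey above can be sketched as a simple data model: each stage, the stakeholders who facilitate it, and the friction interventions available there. This is an illustrative sketch only; the stage names and intervention lists are hypothetical, drawn loosely from the examples in this article, not a formal specification of the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One step of the 'perverse customer journey', with the stakeholders
    who facilitate it and the friction interventions available there."""
    name: str
    facilitators: list[str]
    interventions: list[str] = field(default_factory=list)

# Hypothetical model of the deceptive voice audio example.
journey = [
    Stage("identify_tool",
          ["search engines", "app stores", "advertising", "online forums"],
          ["de-rank search results", "app store policy enforcement"]),
    Stage("generate_audio",
          ["open-source libraries", "cloud computing", "voice-cloning apps"],
          ["biometric verification", "withdraw digital payment support",
           "watermark outputs by default"]),
    Stage("publish",
          ["social media platforms", "news organizations", "messaging apps"],
          ["provenance checks", "takedown workflows"]),
]

# Friction compounds: a bad actor must pass through every stage, so
# each stage's interventions add to the total cost of the journey.
total_friction = sum(len(stage.interventions) for stage in journey)
print(f"friction points across the journey: {total_friction}")
```

Modelling the journey this way makes the compounding effect explicit: removing any single intervention still leaves friction at the other stages, which is the point of deploying interlinked countermeasures rather than seeking one silver bullet.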


A pragmatic response to complex challenges

The 'perverse customer journey' doesn’t just help us understand the bad actor’s experience of misusing generative AI and where we can most effectively intervene. It also encourages a more pragmatic approach to those interventions.

The uncomfortable truth is that generative AI misuse can’t be eradicated. Well-resourced bad actors will always find new ways to weaponize the technology and bypass safety restrictions. Seeking 'silver bullet' technological solutions or calling for blanket bans on tools that could be misused is an understandable reflex, but fundamentally the wrong approach.

Instead of framing our efforts to combat the misuse of generative AI in terms of eradication, we should focus on well-defined actions that can meaningfully reduce it. Creating strategic points of friction can raise the access barrier to misusing generative AI.

Having introduced the 'perverse customer journey' at several AI summits, I have seen it prove particularly useful for governments and businesses. For politicians and policymakers, it provides an intuitive framework for understanding how generative AI comes to be misused and where stakeholder groups or policy levers on the journey may have been overlooked. For businesses, it helps reveal how their products and services are interwoven with others in facilitating misuse and where they could introduce precise countermeasures collaboratively.

We need all stakeholders involved in the 'perverse customer journey' to play their part. Proactive efforts and voluntary commitments from companies should be welcomed, but governments still have a critical role in bringing key stakeholders to the table. For bad actors intentionally facilitating the perverse customer journey, criminal legislation and well-resourced enforcement will be essential.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
