Emerging Technologies

Why building consumer trust is the key to unlocking AI's true potential

Image: Gerd Altmann on Pixabay

Ajay Bhalla
  • The pandemic has accelerated businesses' uptake of AI technology.
  • Without consumer confidence that AI is being used ethically and transparently, however, its full potential will not be realised.
  • Companies must start putting the systems and procedures in place to earn that trust without delay.

We are living through one of the most challenging and devastating health crises in living memory. This year has brought untold loss of life and livelihoods, the true worldwide repercussions of which are still to be seen. The COVID-19 pandemic has also altered the global business landscape, accelerating the pace and volume of data created through increased remote working and digital transacting, and fast shifting the economic realities for all business leaders. Against this backdrop, technology – artificial intelligence (AI), in particular – now has an even bigger role to play in helping organizations and countries to adapt, keep us safe and improve how we live and work.

But the drive to innovate with these new technologies at speed must be balanced against the need to carefully build consumer trust in those same innovations. Bringing consumers on this journey will be key. A new report from Longitude explores this precise balance between engendering trust in technology and fostering AI innovation. This is a topic that I am very passionate about and one that I spoke on at Davos this year, so I want to explore some of the report’s findings in more detail here.


Transparency is everything

Until recently, there has been too much focus on what AI can do and not enough on how it does it. Today’s organizations must be able to demonstrate that their systems and algorithms are responsible, fair, ethical and explainable. In a word, that their AI is trustworthy.

High-profile cases of misuse of AI by global technology firms have dented consumer trust in AI. The subsequent fallout has also raised greater global awareness of the broader issues around the use of data and our personal information.

The result? Trust in technology can no longer be assumed – it must be earned. In this sense, organizations must think of their technology as ‘guilty until proven innocent’. The onus is on them to proactively demonstrate the responsible use of their technology and to be prepared to explain and justify the decisions their systems make when required. The EU’s General Data Protection Regulation (GDPR) enshrines this as ‘the right to explanation’: the right to meaningful information about the logic, significance and envisaged consequences of automated decisions. Businesses must consider how they apply these technologies – using personal information only when it is needed and with the user’s consent. By building these principles into AI as it is developed, businesses can ensure it is ethical and transparent from the outset.
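To make the ‘right to explanation’ concrete, here is a minimal sketch of how an automated decision could be accompanied by a human-readable breakdown. The model, feature names and weights are all invented for illustration – real credit or risk systems are far more complex – but the principle is the same: every score can be decomposed into per-feature contributions that a consumer could be shown on request.

```python
# Illustrative sketch: a hypothetical scoring model whose output can be
# explained per feature, in the spirit of GDPR's 'right to explanation'.
# Feature names and weights are invented for illustration only.

import math

WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_at_address": 0.3}
BIAS = -0.2

def score(applicant: dict) -> float:
    """Logistic score in [0, 1]; higher means more likely to approve."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant: dict) -> list:
    """Per-feature contribution to the decision, largest effect first."""
    contributions = [(k, WEIGHTS[k] * applicant[k]) for k in WEIGHTS]
    return sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_at_address": 4.0}
print(f"approval score: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```

A linear model is used here precisely because its contributions are exact; for more opaque models, post-hoc attribution methods serve the same explanatory purpose.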

We are already seeing the impact of this transition to ethical AI. A recent Capgemini study found that 62% of consumers placed more trust in a company whose AI was understood to be ethical, while 61% were more likely to refer that company to friends and family, and 59% showed more loyalty to that company. Those who openly communicate in this way about how their technology works are more likely to be trusted by consumers to use AI to its full potential.

AI is our best problem solver

The COVID-19 pandemic has accelerated digitalization at a rate we could never have imagined, creating a volatile environment with a plethora of challenges to overcome and opportunities to seize. In the payments industry, the challenge has been protecting consumers and businesses against an explosion in cyberattacks and fraud. Our NuData technology, which verifies users based on their inherent behaviour, has seen attacks become more sophisticated: one in every three now emulates human behaviour. Account creation attacks, where bad actors create fake accounts for subsequent fraudulent use, have increased by 500% during the pandemic compared to the same period in 2019 – one global retailer alone experienced a 679% increase in suspicious account creations. Overall, global fraud rates have hit a near-20-year high, according to the latest PwC figures, with 47% of companies reporting that they experienced fraud over the past two years.
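NuData's actual techniques are proprietary and not described here, but the underlying idea of behaviour-based verification can be sketched very simply: compare a session's behavioural signals against the user's historical baseline and flag large deviations. The example below uses keystroke timing and a z-score, both chosen purely for illustration.

```python
# Minimal sketch of behaviour-based user verification (NOT Mastercard's
# actual NuData implementation): compare a session's inter-keystroke
# intervals against the user's historical baseline and score deviations.

from statistics import mean, stdev

def baseline(sessions):
    """Mean and std-dev of inter-keystroke intervals across past sessions."""
    intervals = [t for s in sessions for t in s]
    return mean(intervals), stdev(intervals)

def anomaly_score(session, mu, sigma):
    """Average absolute z-score of the new session's intervals."""
    return sum(abs((t - mu) / sigma) for t in session) / len(session)

history = [[0.11, 0.13, 0.12, 0.14], [0.12, 0.10, 0.13, 0.11]]
mu, sigma = baseline(history)

human_like = [0.12, 0.13, 0.11]  # resembles the user's own cadence
bot_like = [0.02, 0.02, 0.02]    # scripted, uniformly fast keystrokes

print(anomaly_score(human_like, mu, sigma))  # low: consistent with baseline
print(anomaly_score(bot_like, mu, sigma))    # high: candidate for step-up auth
```

In practice a system like this would combine many signals (mouse movement, device posture, navigation patterns) and feed the score into a risk decision rather than a hard block.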

Image: Capgemini Research Institute

It would have been impossible to maintain our defences without the implementation of AI on our network. It is, and will continue to be, a vital part of adapting to and securing this new world. As AI becomes more powerful and pervasive, we must put systems in place to ensure that it is developed and deployed ethically.

Consumer driven, consumer focused

Consumers create a huge amount of data. By 2025, we will be creating an estimated 463 exabytes every day. And that volume is only going to grow – the oft-quoted statistic is that 90% of all data ever created was generated in the last two years. AI-driven systems help turn some of this information into recognizable benefits for the people who create it – making our lives work for us.
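A quick back-of-the-envelope conversion puts that daily figure in more tangible terms:

```python
# Sense check of the scale: 463 exabytes per day, expressed per second.
EXABYTE = 10**18                   # bytes, decimal (SI) definition
daily_bytes = 463 * EXABYTE
per_second = daily_bytes / 86_400  # seconds in a day
print(f"{per_second / 10**15:.1f} petabytes created every second")
```

That works out to more than five petabytes of new data every second.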

But AI is a technical and complicated tool. The trust that is needed for it to be most effective will come when consumers see and feel its real-world benefits in action. In this sense, trust can be a key differentiator – a competitive advantage for businesses. Only those who are trusted to operate AI will be able to maximise the benefits of its value-added services in years to come. Not only can AI keep consumers safe online and transform their shopping experiences; it is also revolutionising farming and giving the environment a new lease of life. For those that get it right, the possibilities are endless.

The big picture

At times of such uncertainty, it can be difficult to look too far ahead. But now is the time for business leaders to take a step back and look at the bigger picture. The landscape has changed, and that change is permanent. Our digital futures have been brought forward and society will continue to demand higher levels of transparency in the way that AI is used to solve new challenges.

Responsible development of, and engendering trust in, technology will be crucial to business success in the ‘next normal’ – but more importantly, to building a world that is more prosperous and more equal for all.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
