Why artificial intelligence is learning emotional intelligence
As we teach AI how humans behave, we improve our own understanding of human biases
I started transforming businesses with technology 35 years ago. It was as true then as it is now that the biggest risk to mitigate is the resistance of people and organizations to change. It is widely cited that roughly three in four transformation programmes fail to achieve their intended goals because people are not prepared to adopt new processes and technology. Mitigating these risks and helping people learn new technology-enabled processes has been good for the consulting industry, and remains one of the keys to successful programmes.
With artificial intelligence (AI), change management and process reengineering get reinvented. What was once a one-way street has become a two-way street: we can now teach technology to relate to people, as much as we train people to use technology. Going forward, getting this human-centric design right is the biggest factor in the success or failure of AI-driven transformations.
In traditional technology-led transformations, the goal of change management has been to teach people to use technology, dedicating leadership and resources to ensure compliance and adoption. The direction is one way: people learn how technology works in order to give it commands or interpret its results. Technology could only follow precise commands based on pre-defined rules and, for the most part, produced results that required specialist skills to interpret and apply in our professions.
With AI pilots sprawling everywhere, companies, consultants and technology firms need to rethink their approaches to transformation. To implement AI projects that drive impact at scale, great AI models and algorithms are necessary but not sufficient. One of the most important success factors is a design-led approach to human change that deeply fuses new AI capabilities with how humans prefer to engage with tools. Companies that ignore this are likely to end up with collections of AI pilots that never amount to real impact.
This is a key aspect not yet understood by companies practicing "AI tourism". Because many programmes focus primarily on machine-learning algorithms, advanced AI models and perfecting training datasets, they fail to address the most important success factors: the design of interactions and workflows, and the choreography of processes, technologies and humans. Despite the plethora of AI proofs-of-concept underway across industries, companies are struggling to scale AI applications and realize the expected benefits. The ones that do succeed demonstrate that skilled, purposeful design of workflows and user interactions leads to faster adoption and business benefits. Here are a couple of examples:
A financial services company was implementing one of the first customer-facing AI systems in its industry. Focused on getting the technology right, the client spent its resources verifying the accuracy of the models and the readiness of the platform. Despite a robust underlying system and a strong business case, adoption did not meet the original expectations: insufficient focus on designing the right workflows and human-machine interfaces led to slow uptake.
In contrast, another company in the same industry made a conscious decision to embed AI into its customers’ experiences. It spent months designing for purpose and conducting user testing, then rolled out similar AI technology with a deep focus on the orchestration of functionality and user experience. Within two months, more than one million customers were using it, exceeding expectations.
These cases should not surprise us. As humans, most of the decisions we make are shaped by how information is presented to us at least as much as by the information itself. We think of ourselves as rational individuals with free will, yet science demonstrates that we make decisions based on bias and context more than on analysis and content.
If you’re still unconvinced, consider a well-known visual illusion involving two squares, A and B. Viewed side by side in isolation, the squares are plainly identical in colour. Placed back in their original surroundings, the added context makes square A appear darker than square B. The analytical fact is that A and B are still identical in colour; the human truth is that most of us would bet A is darker. It is a simple example of how bias and context beat analysis and content when it comes to human perception.
Emotional intelligence (EI) has long been recognized as a critical factor in professional success, arguably more so than raw performance or qualifications. The ability to connect and perceive with deep empathy gives a clear advantage in a world where more of our success depends on influencing other people. We are presented with hundreds of "A/B" squares every day: sometimes the "A/B" is a choice between job candidates, investments or products. People with high EI have the empathy to understand our context, relate to us better, and persuade us to see their preferred choice as our darker square.
EI has been a hard skill to teach, and one that has not been "programmable" into technology – until now. Concurrent with the progress of AI over the last two decades, our understanding of EI has also developed significantly, thanks to advances in neuroscience and tools such as functional magnetic resonance imaging (fMRI). One could say we are learning to "reverse engineer" our own rules of perception.
I have been so passionate about this topic that a few years ago I started a small practice to combine design and technology-led business change in a very specific way. I wanted to use human-centric design as a business re-engineering tool, focusing on people-process and people-technology design beyond the traditional process maps and productivity indicators. The premise was simple: through design, we can engineer interactions that lead people to adopt technology and behave as intended, reducing the need for training, exception handling or compliance enforcement. As a result, we create both delight and productivity.
The work has evolved as we have learnt and expanded well beyond our initial scope. There are many branches of it now in progress. Some of my colleagues in IBM Design, such as Adam Cutler, have been focused on what it takes to match AI and humans emotionally, to motivate people to work with these new technologies. Many universities are now working to decode the science of building relationships, and using that to train AI to interact with us in a way we can relate to and trust.
The use of personality insights alongside traditional demographics has been shown to improve prediction accuracy for consumer preferences. Tone analyzers can now read documents such as emails and tweets, determine whether the writer is angry, frustrated or thrilled, and then adapt the interaction dynamically to serve clients better.
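As a toy illustration of that workflow, the sketch below scores a short message against a few hand-picked emotion keywords and chooses the next interaction step accordingly. The lexicons, tones and routing rules are illustrative assumptions, not any vendor's API; production tone analyzers rely on trained language models rather than keyword lists.

```python
# Toy tone "analyzer": scores a message against small keyword lexicons.
# Real systems use trained language models; this only illustrates the workflow
# of detecting tone and adapting the next interaction step to it.

from collections import Counter
import re

# Illustrative lexicons (assumptions, not a real product's vocabulary).
TONE_LEXICON = {
    "angry":      {"unacceptable", "furious", "worst", "refund", "complaint"},
    "frustrated": {"again", "still", "waiting", "confusing", "stuck"},
    "thrilled":   {"love", "great", "amazing", "thank", "perfect"},
}

def detect_tone(message: str) -> str:
    """Return the tone whose keywords appear most often, or 'neutral'."""
    words = Counter(re.findall(r"[a-z']+", message.lower()))
    scores = {
        tone: sum(words[w] for w in keywords)
        for tone, keywords in TONE_LEXICON.items()
    }
    best_tone, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_tone if best_score > 0 else "neutral"

def next_step(tone: str) -> str:
    """Adapt the interaction to the detected tone (hypothetical routing rules)."""
    return {
        "angry":      "escalate to a human agent and acknowledge the problem first",
        "frustrated": "shorten the flow: skip optional questions and offer a direct fix",
        "thrilled":   "thank the customer and suggest a relevant next step",
        "neutral":    "continue the standard workflow",
    }[tone]

if __name__ == "__main__":
    email = "I have been waiting for a week and the portal is still confusing."
    tone = detect_tone(email)
    print(tone, "->", next_step(tone))
```

In a real deployment the keyword scoring would be replaced by a trained classifier, but the design point is the same: the detected tone, not just the literal request, shapes what the system does next.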
Designing AI applications that take burdens off people and require fewer steps to complete a task is a simple way to achieve better human adoption. Reducing the amount of data to handle and the number of clicks creates a positive experience that drives adoption. Equally, understanding the behavioural bias toward shortcuts, and designing for it, prevents vulnerabilities ranging from lost productivity to cybersecurity gaps.
Keep in mind some simple guidelines that lead to successful AI-led business transformation:
- Re-imagine processes to leverage the capabilities of AI. Design AI-embedded tasks optimized for their purpose, and AI-powered interfaces optimized for empathy.
- In AI-human interaction design, model humans as socio-emotional entities, rather than as analytical entities. Design human interactions to optimize for behaviour, as much as for data use or task productivity.
- Burden AI technology with the things humans don’t like to do, such as memorizing data or clicking repeatedly to complete a task. Remember, our brains and behaviours are biased towards doing what is easy more than doing what is right.
- Teach EI to your AI to create positive contexts in which human users will naturally trust, relate and make the expected choices. Because many rules of behaviour are encoded as "unconscious biases", rely on user testing rather than asking users what they want (hint: they may not consciously know); a minimal sketch of such a test follows this list.
- Develop competency in both AI technology and AI-efficient human interface design, and include it in the scope of every AI pilot.
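To make the "test rather than ask" guideline concrete, here is a minimal sketch of how observed behaviour from two interface variants might be compared. The variant names, counts and decision rule are hypothetical, and a real programme would design the experiment with proper sample-size planning and guardrails against peeking.

```python
# Minimal two-variant user test: did design B drive higher adoption than design A?
# Hypothetical counts; illustrates comparing what users did, not what they said.

from statistics import NormalDist
from math import sqrt

def two_proportion_z(adopted_a: int, n_a: int, adopted_b: int, n_b: int):
    """Return (z statistic, one-sided p-value) for H1: adoption rate of B > A."""
    p_a, p_b = adopted_a / n_a, adopted_b / n_b
    pooled = (adopted_a + adopted_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

if __name__ == "__main__":
    # Hypothetical pilot: 1,000 users saw each design variant.
    z, p = two_proportion_z(adopted_a=240, n_a=1000, adopted_b=310, n_b=1000)
    print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # a small p favours design B
```

The point of the exercise is that the comparison rests on what users actually did with each design, not on their stated preferences.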
We are seeing a new dawn of technologies that can massively enhance our ability to perform and deliver business outcomes. Thanks to AI and the fusion of technology and design, we can create tools to unburden and expand our cognition, similarly to how we enhanced human physical strength with levers, wheels, engines and motors.
As we teach AI how humans behave, we improve our own understanding of human biases and our ability to relate to each other. It’s ironic that teaching technology to be more effective at relating to humans will force us to be more effective ourselves. This could create leaps forward in our ability to work with each other and achieve greater outcomes.