How ethical AI reflects the views of its makers
With great power comes great responsibility. Image: REUTERS/Steve Marcus
Mala Anand, author of this opinion piece, is president of intelligent enterprise solutions and industries at SAP.
The rising concern about how AI systems can embody ethical judgments and moral values is prompting the right questions. Too often, however, the answer seems to be to blame the technology or the technologists.
Delegating responsibility is not the answer.
Creating ethical and effective AI applications requires engagement from the entire C-suite. Getting it right is both a critical business question and a values statement that requires CEO leadership.
The ethical concerns AI raises vary from industry to industry. The dilemmas associated with self-driving cars, for instance, are nothing like the question of bias in facial recognition or the privacy concerns associated with emerging marketing applications. Still, they share a problem: Even the most thoughtfully designed algorithm makes decisions based on inputs that reflect the world view of its makers.
AI and its sister technologies, machine learning and RPA (robotic process automation), are here to stay. Between 2017 and 2018, research from McKinsey & Company found that the percentage of companies embedding at least one AI capability in their business processes more than doubled. In a separate study, McKinsey estimates the value of deploying AI and analytics across industries at between $9.5 trillion and $15.4 trillion a year. In our own work, we have seen leaders in industry after industry embrace the technology, both to find new efficiencies in their current businesses and to test opportunities with new business models.
There is no turning back.
In the face of this momentum, there are calls for regulation, although creating a consistent regulatory framework across the globe will prove daunting. Regulators will have to balance addressing legitimate social concerns with encouraging innovation and productivity, all while remaining competitive in the international marketplace.
A recent paper from EY contends that AI and machine learning are outpacing our ability to oversee their use, and points out that it’s risky to use AI without a well-thought-out governance and ethical framework. EY frames these considerations as risk management, but they are really a guide to building trust when developing and deploying any emerging technology.
EY is not alone. Organizations such as the World Economic Forum's Center for the Fourth Industrial Revolution, the IEEE, AI Now, The Partnership on AI, Future of Life, AI for Good, and DeepMind have all created principles designed to maximize AI's benefits and limit its risks.
Unpredictable Risks?
In our experience, the organizations that have managed these issues best are those that embrace a new way of thinking, one that acknowledges that while consequences can be unintended, they are not necessarily unpredictable.
In other words, let's not conflate errors, poor judgment, and shortsightedness with unintended consequences in order to shirk responsibility. As technology threatens to push ahead of society's checks and balances, all business leaders must ask themselves about the potential impact of their own applications.
Organizations can work to ensure the responsible building and application of AI by focusing on very specific business outcomes to guide their efforts. Designing purpose-built applications for well-defined business outcomes can act as a guardrail for responsible growth, can limit the likelihood of unintended consequences, and can surface negative implications early enough to mitigate them.
But this is only true where company values are clear and the C-suite works with IT to apply those values to the many decisions that go into creating an application.
We have seen this in our work with 25 different industries. Best practices are emerging. While the ethical concerns differ from application to application, the same framework can help. The organizations that focus on well-understood business outcomes are best positioned to develop responsible applications where personalization is a value, not a violation, where data sources are vetted for bias, where data privacy is a guiding principle, and where transparency and efficiency are equally valued.
When businesses adopt AI to help solve a specific business need (automated billing, general accounting, budgeting, compliance, procurement, logistics, or customer care, for instance), the impact is more knowable and manageable than when AI is adopted for AI's sake. Because these outcomes are central to the organization's core mission and so well understood, it's a small step to map company values onto the development process.
Core Business Applications
Let's say, for instance, that you are a manufacturer of vehicle components. By bringing together connected devices, predictive analytics, and machine learning to improve manufacturing performance, you can use data about the health of factory equipment to avoid costly downtime. If customer service is a passion and a core company value, that value should inform how the application is developed.
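To make this concrete, here is a minimal illustrative sketch of how equipment-health data might flag a machine for maintenance before it fails. This is not any vendor's actual implementation; the sensor field, readings, and threshold are hypothetical.

```python
# Illustrative sketch only: a simplistic anomaly check on equipment
# sensor data. The vibration readings and threshold are hypothetical.
from statistics import mean, stdev

def flag_for_maintenance(vibration_readings, z_threshold=3.0):
    """Flag a machine if its latest vibration reading deviates
    strongly from its recent history (a crude predictive signal)."""
    history, latest = vibration_readings[:-1], vibration_readings[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no variation in history; nothing to compare against
    return abs(latest - mu) / sigma > z_threshold

# Example: a machine whose vibration suddenly spikes gets flagged.
readings = [0.42, 0.40, 0.43, 0.41, 0.44, 0.42, 0.97]
print(flag_for_maintenance(readings))  # True -> schedule an inspection
```

In a real deployment, the thresholds and the decision to act on a flag are exactly the kind of choices where company values, such as a commitment to customer service, should shape the design.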
Or, let's say you are building a different kind of application in service of a different kind of outcome, such as identifying sales patterns across regions to help you execute more focused campaigns. Your strategy might involve connected devices, analytics, and embedded machine learning. You could also track the flow of materials with IoT. With sales histories by outlet, in-store audits can be monitored to benchmark the success of campaigns. As a result, your annual business plan could be available within minutes, and automatically generated reports would eliminate weeks of manual data preparation.
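As a simple illustration of the sales-pattern idea, the sketch below aggregates campaign-period sales by region so that over- and under-performing regions stand out. The data shape and field names are hypothetical, assumed only for the example.

```python
# Illustrative sketch only: summing sales history by region to surface
# where a campaign over- or under-performed. Fields are hypothetical.
from collections import defaultdict

def sales_by_region(transactions):
    """Total campaign-period sales per region, ranked strongest first."""
    totals = defaultdict(float)
    for tx in transactions:
        totals[tx["region"]] += tx["amount"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

transactions = [
    {"region": "North", "amount": 1200.0},
    {"region": "South", "amount": 450.0},
    {"region": "North", "amount": 800.0},
    {"region": "West",  "amount": 980.0},
]
print(sales_by_region(transactions))
# [('North', 2000.0), ('West', 980.0), ('South', 450.0)]
```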
These cases may seem tame compared to some of the high-profile examples in the news, but they are typical of the kinds of core business applications we see being developed in company after company. They are defined around a well-understood outcome that is central both to the company's core mission and to its value system. Both examples demonstrate how a company can anticipate and avoid unintended consequences by focusing the application where it already has great insight, where assumptions are transparent, where the core data is familiar and reliable, and, perhaps most importantly, where the stakes are so high that senior leadership must have oversight.
CEO recognition of the risk is a good first step. In fact, in a recent PwC survey, 77% of CEOs said AI would increase vulnerability and disruption to the way they do business.
And if an appeal to values doesn’t resonate with the C-suite, self-interest should. A recent study by the Capgemini Research Institute concludes that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and will punish those that don’t. The study confirms there is both reputational risk and a direct impact on the bottom line for companies that do not approach the issue thoughtfully.
In our experience across industry after industry, the most responsible AI occurs when company leadership is fully engaged, when applications are defined by clear business outcomes that are central to the company mission, and when IT and leadership collaborate to confront business and ethical quandaries together.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.