
We asked five tech strategy leaders about the inclusive, ethical and responsible use of technology. Here's what they said

Image: To earn digital trust, organizations must ensure tech such as AI is inclusive, ethical and responsible. (Getty Images/iStockphoto)

Daniel Dobrygowski
Head, Governance and Trust, World Economic Forum
Bart Valkhof
Head, Information and Communication Technology Industry, World Economic Forum
  • Organizations must work to earn digital trust by ensuring technology works for people and planet.
  • The World Economic Forum's Digital Trust Framework has been designed to support decision makers.
  • We asked five tech strategy leaders how they are promoting inclusive, ethical and responsible use of technology.

The World Economic Forum’s Digital Trust Framework was created to help decision makers align around three goals: security and reliability; accountability and oversight; and inclusive, ethical and responsible use.

In the third part of this series, we focus on the inclusive, ethical and responsible use of digital technology, which increasingly dominates our lives. It is therefore vital that we can trust how the products and services we encounter are developed and deployed. With digital trust in mind, the tech industry must consider the expectations and values of all its stakeholders.


    This means that an organization must design, build and operate its technology and data as a steward for all people, society at large, the natural environment and other stakeholders, to ensure broad access and use that result in ethically responsible outcomes. This goal also means the organization must work to prevent and mitigate exclusionary practices or other harms.

    Three dimensions are critical to achieving this goal:

    • Interoperability: the ability of information systems to connect and exchange information for mutual use without undue burden or restriction.
    • Fairness: requires that an organization’s technology and data processing be aware of the potential for disparate impact and aim to achieve just and equitable outcomes for all stakeholders, given the relevant circumstances and expectations (a simple disparate-impact check is sketched after this list).
    • Sustainability: requires an organization to take into consideration its impact on the natural environment, including climate, biodiversity, or water systems and aim to limit harm to these systems.
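
    To make the fairness dimension more concrete, here is a minimal sketch of one widely used (though by no means sufficient) check: the disparate impact ratio, which compares favourable-outcome rates across groups. The group labels, data and the 0.8 rule-of-thumb threshold are illustrative assumptions, not part of the Forum's framework.

    ```python
    from collections import defaultdict

    def disparate_impact_ratio(outcomes, groups):
        """Ratio of favourable-outcome rates: lowest group rate / highest group rate.

        outcomes: iterable of 0/1 decisions (1 = favourable, e.g. loan approved)
        groups:   iterable of group labels, aligned with outcomes
        """
        favourable = defaultdict(int)
        total = defaultdict(int)
        for outcome, group in zip(outcomes, groups):
            total[group] += 1
            favourable[group] += outcome
        rates = {g: favourable[g] / total[g] for g in total}
        return min(rates.values()) / max(rates.values()), rates

    # Hypothetical decisions for two illustrative groups, A and B
    ratio, rates = disparate_impact_ratio(
        outcomes=[1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
        groups=["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    )
    print(rates)   # per-group favourable rates: {'A': 0.6, 'B': 0.4}
    print(ratio)   # ~0.67; ratios well below 1.0 (e.g. under 0.8) warrant investigation
    ```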

    In light of our article on trust in the intelligent age, we asked the World Economic Forum’s ICT Strategy Officers Community how they are promoting inclusive, ethical and responsible use. The community comprises 40 active senior strategy leaders from leading companies across the technology stack.

    Here’s what some of them had to say.

    Eugenio Cassiano, SVP Strategy & Innovation, Celonis

    Whether it's Process Intelligence (PI) or AI, Celonis is committed to building intelligent systems that are not just powerful but also responsible.

    We follow a structured AI governance model, anchored in the principles of fairness, transparency and accountability, for both our internal use of AI and the AI solutions we offer to our customers. We put this into practice through multiple processes overseen by a governance committee: a cross-disciplinary team with expertise in legal, ethics, engineering and development, data security and data privacy. This dedicated committee actively shapes our guidelines and ensures that our AI initiatives align with global standards and Celonis’ corporate values.

    AI is a truly transformative technology that will affect many, if not most, aspects of our modern lives. But it's not just about what AI does; it's about how it does it.

    Eugenio Cassiano, SVP Strategy & Innovation, Celonis.

    At Celonis, we believe that both technical advancements and ethical considerations are required to mitigate risks and ensure the responsible use of AI.


    Ravi Kuchibhotla, Chief Strategy Officer, Cognizant

    As AI continues to transform business and society, establishing a robust digital trust framework is essential – one that prioritizes interoperability and prevents the miscommunication that often arises from unaligned systems. A key pillar of this framework is fairness: ensuring ethical, transparent, and unbiased treatment for all. This requires designing AI systems that are auditable, capable of representing diverse perspectives, and fully transparent in their operations.

    By using multi-agent orchestration to optimize for fairness, performance, transparency, and inclusivity, our Neuro Suite of platforms balances these objectives without over-prioritizing any one of them. This reduces bias, ensures decisions are transparent and interpretable, and promotes equitable outcomes. Our platforms enable clients to integrate AI applications that are auditable and unbiased, ensuring fairness flows through the entire value chain and instilling confidence in the integrity of the technology.
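
    As a purely illustrative sketch of what balancing several objectives in an orchestration step might look like (a generic example, not the Neuro Suite's actual implementation), the snippet below scores candidate agent outputs against four objectives and selects the candidate whose weakest score is highest, so that no single objective dominates the choice.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        agent: str      # which agent produced the output
        output: str     # the proposed response or decision
        scores: dict    # objective name -> score in [0, 1], higher is better

    OBJECTIVES = ("fairness", "performance", "transparency", "inclusivity")

    def select_balanced(candidates):
        """Pick the candidate whose weakest objective score is highest (maximin),
        so no single objective is traded away to maximize another."""
        return max(candidates, key=lambda c: min(c.scores[o] for o in OBJECTIVES))

    # Hypothetical outputs from three agents
    candidates = [
        Candidate("agent_a", "...", {"fairness": 0.9, "performance": 0.6, "transparency": 0.8, "inclusivity": 0.7}),
        Candidate("agent_b", "...", {"fairness": 0.5, "performance": 0.95, "transparency": 0.9, "inclusivity": 0.6}),
        Candidate("agent_c", "...", {"fairness": 0.8, "performance": 0.8, "transparency": 0.75, "inclusivity": 0.8}),
    ]
    print(select_balanced(candidates).agent)   # agent_c: its worst score (0.75) is the best worst case
    ```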

    Our platforms enable clients to integrate AI applications that are auditable and unbiased.

    Ravi Kuchibhotla, Chief Strategy Officer, Cognizant.

    One of the biggest remaining challenges is ensuring that non-technical voices are included in AI development, so that all segments of society can participate in deploying advanced technologies. A promising area is UX and UI development. As generative AI evolves, the role of UX/UI developers may shift, but non-STEM professionals with humanities backgrounds will increasingly guide ethical decision-making and user-centric design, which is crucial for creating fair and inclusive systems for all.

    Ganesha Rasiah, Chief Strategy Officer, HP

    The rapid proliferation of generative AI and the use of large language models have created potential areas of exposure that could jeopardize trust. At the top of the list is the lack of transparency.

    Today, it is largely unclear which algorithms and datasets underpin the decisions of a generative AI model. Users struggle to judge whether the output is accurate, logical or ethical. Distorted or discriminatory data introduces unintended harm that could perpetuate human bias and have catastrophic implications for society at large. AI also has only imperfect mechanisms for managing intellectual property.

    Organizations must establish and enforce guidelines to ensure fairness, privacy, and ethics are protected.

    Ganesha Rasiah, Chief Strategy Officer, HP.

    One issue is how, and from whom, data is acquired and accredited. Another is ownership of AI-generated content. Compliance and governance become critically important, but these processes are often playing catch-up with the ways AI is used.

    In response, organizations must establish and enforce clear guidelines to ensure fairness, privacy, and ethics are protected – so that AI is used for the betterment of both employees and customers. At HP, we have adopted a set of AI governance principles to drive responsible use of AI. We implement these principles through a set of well-defined programmes and processes, including gating mechanisms, steering guidelines, educational tools, and regulatory committees.


    Ann Marie Lavigne, VP Strategic Initiatives, Snowflake

    As AI becomes the foundation of modern business strategy, one truth has become abundantly clear: enterprise AI cannot succeed without a robust data strategy. And a data strategy isolated in silos is equally ineffective. In fact, siloed or incorrect data can cost companies up to 30% of their annual revenue, according to IDC Market Research. Meanwhile, Gartner has reported that organizations fail to use up to 97% of their data, missing vast opportunities for growth.

    Enterprises are increasingly recognizing the need for unified data platforms, but not all organizations can – or want to – centralize their data and governance. That’s where interoperability becomes crucial, ensuring responsible data access and human oversight, powering everything from day-to-day decisions to advanced AI applications.

    Not all organizations can – or want to – centralize their data and governance. That’s where interoperability becomes crucial, ensuring responsible data access and human oversight.

    Ann Marie Lavigne, VP Strategic Initiatives, Snowflake.

    By championing open formats like Apache Iceberg and cross-cloud governance, Snowflake has enabled seamless data sharing and monitoring for robust and reliable models.
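
    For a sense of what open-format interoperability means in practice, here is a minimal sketch that reads an Apache Iceberg table through a catalog using the open-source PyIceberg library. The catalog endpoint, token and table name are placeholder assumptions, and this is a generic Iceberg example rather than a description of Snowflake's own tooling.

    ```python
    from pyiceberg.catalog import load_catalog

    # Placeholder connection details for a REST Iceberg catalog (hypothetical endpoint)
    catalog = load_catalog(
        "shared",
        **{
            "type": "rest",
            "uri": "https://catalog.example.com",
            "token": "<access-token>",
        },
    )

    # Any engine that understands Iceberg can read the same table the producer wrote
    table = catalog.load_table("supply_chain.shipments")   # hypothetical namespace.table
    scan = table.scan(row_filter="region == 'EU'", selected_fields=("shipment_id", "weight_kg"))
    df = scan.to_pandas()   # hand the shared data to downstream analytics or model training
    print(df.head())
    ```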

    A prime example is the Norwegian start-up Völur, which used our interoperability capabilities to securely share data, breaking down silos and driving AI innovations that unlocked new revenue streams. This is the future: an interconnected, collaborative data ecosystem that accelerates fair and transparent AI-driven transformation.

    Our leadership in data collaboration, interoperability and unified governance is setting the stage for more inclusive, secure, and responsible data use, paving the way for AI's full potential to be realized.

    Marina Martín García, Head of Corporate Strategy, Telefónica

    The inclusive, ethical, and responsible use of technology is at the core of every decision we make. We have long advocated for data protection, giving users control and transparency over how their data is used. Digital trust is a key element of our customer promise.

    Our guiding principles for AI ensure that we develop intelligent technologies transparently, fairly and sustainably, with a human-centric approach. This includes a strong commitment to environmental sustainability by minimizing the carbon footprint of our data centres through innovative immersion cooling technology. We are also developing internal solutions such as Kiri, which helps Telefónica make informed decisions to minimize the carbon footprint of its AI and big data use cases by recommending optimal execution times, suggesting cloud locations with a lower environmental impact and guiding resource allocation.
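
    As a rough idea of what carbon-aware recommendations can involve (a hypothetical sketch, not Telefónica's Kiri), the snippet below picks the cloud region and hour with the lowest forecast grid carbon intensity for a deferrable AI training job.

    ```python
    def greenest_slot(forecasts):
        """Return the (region, hour, intensity) with the lowest forecast carbon intensity.

        forecasts: dict mapping region -> list of (hour, gCO2eq_per_kWh) tuples.
        Values below are illustrative only.
        """
        best = None
        for region, series in forecasts.items():
            for hour, intensity in series:
                if best is None or intensity < best[2]:
                    best = (region, hour, intensity)
        return best

    # Hypothetical three-hour forecasts for two regions
    forecasts = {
        "eu-south": [(22, 180), (23, 140), (0, 120)],
        "eu-north": [(22, 60), (23, 55), (0, 70)],
    }
    region, hour, intensity = greenest_slot(forecasts)
    print(f"Schedule the job in {region} at {hour}:00 (~{intensity} gCO2eq/kWh)")
    ```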

    By leveraging consumers' trust in telcos, we can turn potential risks into opportunities, positioning the sector as a leader in digital trust.

    Marina Martín García, Head of Corporate Strategy, Telefónica.

    Interoperability is also key to our approach. Our digital ecosystem, Kernel, is based on open-source standards and integrates privacy by design to protect user data across all products. Privacy is managed transversally, with legal requirements accessible for every channel and service, ensuring that no data is exposed without the appropriate legal basis.
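
    To illustrate the kind of check that privacy by design implies before any data is exposed (a generic sketch under GDPR-style assumptions, not a description of Kernel), the snippet below releases a record only if a valid legal basis is registered for the stated purpose.

    ```python
    # Illustrative subset of legal bases recognized under GDPR-style rules
    VALID_LEGAL_BASES = {"consent", "contract", "legal_obligation", "legitimate_interest"}

    def release_data(record, purpose, legal_basis_registry):
        """Return the record only if a valid legal basis is registered for this purpose.

        legal_basis_registry: dict mapping purpose -> legal basis, e.g. {"billing": "contract"}
        """
        basis = legal_basis_registry.get(purpose)
        if basis not in VALID_LEGAL_BASES:
            raise PermissionError(f"No valid legal basis registered for purpose '{purpose}'")
        return record

    registry = {"billing": "contract", "personalization": "consent"}
    print(release_data({"customer_id": "c-123", "usage_gb": 12}, "billing", registry))   # allowed
    # release_data(..., "marketing", registry) would raise PermissionError
    ```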

    Telcos have historically ranked among the most trusted industries for consumers. The key question now is: how can we leverage this trust, and showcase our efforts to ensure digital safety, to position the sector as a leader in digital trust? By doing so, we can turn a potential risk into a significant opportunity for the entire industry.
