
Innovation with integrity: The 4 principles of responsible use of AI

Responsible AI in medicine is about balancing innovation with ethics. Image: Getty Images/iStockphoto

Klaus Moosmayer
Chief Ethics, Risk and Compliance Officer, Novartis
This article is part of: World Economic Forum Annual Meeting
  • Artificial intelligence (AI) in medicine must balance innovation with ethics, emphasizing transparency, trust and human-centred approaches to drive responsible advancements.
  • Corporations have a growing responsibility to address the societal impacts of AI, requiring integrated governance models and ethical frameworks to build and sustain trust.
  • The four principles for responsible AI aim to align innovation with human rights and ethical integrity.

Artificial intelligence (AI) is transforming the landscape of medicine, bringing not only unprecedented opportunities but also complex ethical challenges. At this pivotal juncture in medical history, the technology is reshaping how we approach healthcare, adding new tools and delivering precision and speed that were once unimaginable.

From accelerating drug discovery to revolutionizing clinical trials, AI’s ability to harness vast amounts of data is unlocking personalized treatments, driving breakthroughs and offering new hope in the fight against diseases like cancer.


Why is this important now?

At the same time, ensuring transparency and integrity is critical to building trust with patients and regulators while accelerating innovation for better outcomes. Moreover, a human-centred approach is vital when using AI to reimagine medicine.

This approach correlates with the changing role of the chief compliance officer. Once primarily focused on adherence to legal and regulatory requirements, especially in the field of anti-bribery, today’s chief compliance officers are increasingly required to adopt a more integrated approach, given the growing complexity of the regulatory landscape.

That includes emerging regulation on AI and heightened expectations from stakeholders regarding corporate behaviour.

The case for corporate responsibility for AI

Firstly, AI is an accelerator of corporate productivity and impact. With many AI models owned and controlled by corporations, and regulation slowly but surely catching up, the AI boom also becomes a magnifying glass on companies’ responsibility to consider the societal impacts of their activities.

Trust has become hard currency for corporations, and it must be both gained and maintained. General scepticism towards globalization and profit distribution often leads to the perception that corporations lack ethical standards or struggle to uphold them consistently.

This shift in regulatory requirements and societal expectations calls for new corporate governance and assurance models, and a different leadership profile.

Over time, this approach leads to an integrated assurance system that avoids governance silos and allows corporations to adapt with foresight to new challenges, such as the responsible use of AI.


Managing trust with an integrated assurance approach

Secondly, AI’s emergence over the last decade serves as a great example of the value of integrated assurance and centralized risk management.

Effective enterprise-wide risk governance would have identified the risks and opportunities associated with AI early on.

Such an organization could have established principle-based guidance while the risks related to AI were still confined to specific applications in data science and largely abstract to the wider workforce.

By 2022, when the AI boom accelerated significantly and the risks became more tangible, an integrated function would have been able to rapidly establish mature policy and control frameworks based on experience with other risk areas and enterprise standards.

Such foresight and organizational adaptability are direct benefits of an integrated assurance model and are key to helping organizations confidently seize the opportunities offered by AI technology.

Centring human rights

At Novartis, we ensure that AI systems have a clear purpose; respect human rights; and are accurate, truthful, not misleading and appropriate for the intended context. We have issued ethical principles that provide a simple framework for responsible AI.

This framework should enable people to challenge their own decisions and biases. By aligning these principles with our Code of Ethics and our new data and technology policy, we reinforce their importance and create a clear path for us all to follow.

Any approach to AI must be grounded in core principles that guide the different stages of AI development and deployment, driving innovations to be cutting-edge and ethically sound.

The four key principles in our Ethical Use of Data & Technology Policy are:

1. Respect humanity

Respect humanity and maintain trust with society by deploying and using data and technology in ways that promote diversity and inclusion, respect human rights and benefit patients and society.

2. Be transparent and collect fairly

Describe clearly and simply why and how the organization collects data and what it does with data and technology.

3. Use responsibly

Use data and technology responsibly and be accountable for their appropriate management and protection, in accordance with your scope of responsibility.

4. Protect data and technology

Apply risk-based security to protect data and technology and to ensure their confidentiality, integrity and availability throughout their lifecycle. Prevent data loss.
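To make these principles concrete, here is a minimal, purely hypothetical sketch of how such a policy could be operationalized as a risk-based, pre-deployment checklist for an AI use case. Every class name, field and gating rule below is an illustrative assumption, not Novartis’s actual tooling or policy implementation.

```python
# Hypothetical sketch only: encoding the four principles as a pre-deployment
# checklist. Every name and rule here is an illustrative assumption.
from dataclasses import dataclass


@dataclass
class AIUseCaseReview:
    name: str
    # 1. Respect humanity
    human_rights_impact_assessed: bool = False
    benefits_patients_or_society: bool = False
    # 2. Be transparent and collect fairly
    collection_purpose_documented: bool = False
    plain_language_notice_published: bool = False
    # 3. Use responsibly
    accountable_owner_assigned: bool = False
    # 4. Protect data and technology
    risk_tier: str = "high"  # "low", "medium" or "high" (risk-based security)
    encrypted_at_rest_and_in_transit: bool = False
    data_loss_prevention_in_place: bool = False

    def open_items(self) -> list[str]:
        """Return the checklist items that still block deployment."""
        checks = {
            "human rights impact assessment": self.human_rights_impact_assessed,
            "patient/societal benefit rationale": self.benefits_patients_or_society,
            "documented data-collection purpose": self.collection_purpose_documented,
            "plain-language transparency notice": self.plain_language_notice_published,
            "named accountable owner": self.accountable_owner_assigned,
            "encryption at rest and in transit": self.encrypted_at_rest_and_in_transit,
            "data-loss prevention controls": self.data_loss_prevention_in_place,
        }
        return [item for item, done in checks.items() if not done]

    def approved(self) -> bool:
        # Illustrative risk-based rule: low-risk uses need only the security
        # controls; medium- and high-risk uses must clear every item.
        if self.risk_tier == "low":
            return (self.encrypted_at_rest_and_in_transit
                    and self.data_loss_prevention_in_place)
        return not self.open_items()


review = AIUseCaseReview(name="trial-site feasibility model")
print(review.approved())    # False: the checklist is not yet complete
print(review.open_items())  # The items that still block deployment
```

The value of a gate like this lies less in the code than in the workflow it enforces: every AI use case is triaged against the same principle-derived questions, and higher-risk uses must clear every item before deployment.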


A commitment to the responsible use of AI systems goes hand in hand with the desire to promote safety and sustainability in all AI-driven innovations. Ensuring ethical, transparent and accountable AI is essential for building trust and safeguarding individuals and the company.
