
Ethics in AI: Can philosophy help us make better tech?

Tackling some of the big questions in artificial intelligence. Image: K Mitch Hodge/Unsplash

Robin Pomeroy
Podcast Editor, World Economic Forum
Joe Myers
Writer, Forum Agenda


  • A computer scientist and a philosopher join the Radio Davos podcast to discuss why ethics is such a key issue in the development, use and regulation of AI.
  • Subscribe here to get the whole podcast series. Find the episode page and transcript here.

Generative artificial intelligence (AI) has huge potential, but it doesn't come without risks or challenges. This latest episode of our special Radio Davos series on the technology is focused on ethics, and why it's such a central issue as we develop, use and regulate AI.

We hear from Cansu Canca, a philosopher from Northeastern University and Ethics Lead at its Institute for Experiential AI. And we get the perspectives of Sara Hooker, who leads the non-profit research lab Cohere for AI, along with those of Connie Huang, a lead on metaverse and AI value creation at the World Economic Forum.

You can listen wherever you get your podcasts – here are some key quotes from the episode.


A question of fairness

Canca joined Radio Davos to explore some of the vital questions AI raises for her as an applied ethicist.

"In order to talk about how to optimize for fairness or how to have fair algorithms, we have to be able to define what we mean by fair," she says. "And in the definition of fairness, the understanding of in which context which theory of fairness is relevant comes from the discipline of philosophy and moral and political philosophy."

The question for Canca is less why she is involved in the conversation than why more of her colleagues aren't. "A lot of the decisions that go into policymaking, that go into day-to-day decision-making while we are developing and using AI systems, have ethical decisions embedded in them, whether we make them implicitly or explicitly."


Why these questions matter

Canca is keen to stress that, currently, AI systems don't have the agency to act ethically or unethically. "We are sort of rebranding it as responsible AI for good reasons," she says. "I think ethical AI gives the impression as if an AI system has the agency and the ability to act ethically, whereas, at least for now, we are not there yet."

But there's still a lot to understand and to consider, she stresses, regardless of how you brand it. And she believes there's a role for regulation to mitigate risk, as well as for the companies and organizations involved in developing and deploying the technology.

And these aren't abstract issues.

"When you think about why, for example, the question of fairness matters, it's because by embedding AI systems in our daily lives into our society, what we are really doing is creating structures upon which we will live," says Canca. "And if the underlying structures are unfair, there is no hope for us to be able to create a fair society."

She believes society needs more education when it comes to understanding the risks.

"The risks are, to be clear, huge, but they are usually not the risks that the public is imagining or the newspapers are making them believe. It's not a Terminator situation, but the fairness question kills people. If you are not able to get healthcare because your risk is judged as much less than another person's, wrongfully, that kills you. So the fairness question is not just an abstract question."

As she summarizes, "It's not the humanoid that's coming after you that's going to kill you. But it is a very mundane risk assessment system that's going to kill you if you don't fix this."
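A deliberately simplified sketch of what such a mundane system can look like – all numbers, names and the bias offset below are invented for illustration: a fixed threshold on a risk score decides who receives care, so any systematic error in the score translates directly into wrongful denial of treatment.

```python
# Simplified illustration (all values invented): care is allocated by
# thresholding a risk score, so a biased score silently denies treatment.

TREATMENT_THRESHOLD = 0.7  # assumed cutoff: patients scoring above it get care

def gets_care(risk_score: float) -> bool:
    return risk_score >= TREATMENT_THRESHOLD

# Two patients with identical clinical need; suppose the model systematically
# under-scores one group by 0.2 because of unrepresentative training data.
score_patient_a = 0.75
score_patient_b = 0.75 - 0.2  # same need, biased score

print(gets_care(score_patient_a))  # True  -> receives care
print(gets_care(score_patient_b))  # False -> wrongly denied care
```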

Governance

From large language models to traceability and auditing, Sara Hooker of Cohere for AI gave us a great grounding in some of the terms we need in order to understand and discuss ethical AI.

And a lot of what we're excited about with generative AI – for example, around ChatGPT – has been longer in the making than might be appreciated.

"It's interesting because what you see now and what you're excited about and, I sense, engaging with is actually the culmination of a few different separate steps," she says. "So, to researchers, it has been kind of a slow build but it's connected very viscerally with people."

And Hooker believes regulation does need to happen: "As someone who's worked on this technology for a long time, there's no denying that it's a stepwise shift in the power of the models that we have.

"As a researcher, I worry a lot about this because in some ways it's so exciting to have so many people connect with the work that you've been doing for a long time and to feel excited and feel like they understand it," she says.

"Because I think a lot of what's changed with this technology is people feel like they're interacting with an algorithm. But it also causes concern I think for a lot of researchers that your ideas, typically are still research ideas, are being adopted by millions of people around the world and are being used in very different ways."


It's time to talk

Methods to verify and trace how models are developed and deployed are going to be vital as use of the technology expands, Hooker says.

"We need ways to verify that the behaviour is what we expect and that these models are able to perform in a robust way when they encounter new data in the real world," she says.

"We don't have good traceability for these models right now. So once they're in the open, they can be used in a variety of ways and there's not a good way to trace back what model was used," she adds while discussing the potential for release under license or the need for auditing.

But it's important we're having these conversations, she concludes.

"Personally, as a researcher, I'm very much in favour of just us having richer, more precise conversations about this because some of it almost amounts to what is feasible, what we can standardize as best practices, as well as making sure there are researchers in the room – along with policymakers and users – and thinking about the implications for each of those groups."

As the Forum's Connie Huang summarizes, "As much as we look at all the opportunities, we also have to balance it with research and just having a common awareness around challenges and trade-offs – the unintended consequences that might come with the adoption of our technologies."


