Emerging Technologies

Where are the charities in the great AI debate?

Technologists alone can't control the risks to society. Image: Aaron Paul

Rhodri Davies
Head of Policy, Charities Aid Foundation

Artificial intelligence has become a hot political and cultural topic.

How has this complex and seemingly niche subject moved to the forefront of mainstream debate?

The simple answer is that it is no longer niche. The pace of development of AI in recent years has accelerated enormously, driven by many factors including the introduction of powerful “deep learning” algorithms, a massive proliferation of data for these algorithms to learn from, and significant increases in investment.

Algorithmic processes now affect many aspects of our lives, whether we know it or not. That makes the notable absence of one player from the debate surrounding AI all the more glaring: civil society. This must be remedied if we are to maximise the positive potential of AI whilst minimising the risk of harm.

Improving lives

AI brings huge opportunities for civil society organisations (CSOs) to improve the lives of people and communities around the world.

The Lindbergh Foundation, for example, has partnered with the start-up Neurala to apply machine learning to drone surveillance footage from game reserves, developing algorithms that can predict poacher behaviour and thus enable more effective interventions.

But there is a growing pool of examples of the risks AI poses to civil society.

Some stem from deliberately malign uses of the technology (as outlined in a recent paper), such as the use of algorithms to generate targeted misinformation and propaganda in order to influence public opinion and elections.

Equally important is the risk of unintended negative consequences. It is increasingly apparent, for instance, that when machine learning algorithms are applied to data that contain historical statistical biases for factors like race or gender, they very quickly reflect and even strengthen those biases unless steps are taken to mitigate this danger.
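
To make this risk concrete, here is a minimal, hypothetical sketch in Python (using synthetic data and scikit-learn; the hiring scenario, feature names and numbers are illustrative assumptions, not drawn from any real system). A model trained on historically biased decisions simply learns and repeats the same disparity:

```python
# Illustrative sketch only: synthetic data showing how a model trained on
# historically biased decisions reproduces that bias in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute (0 = majority group, 1 = minority group) and a
# genuinely job-relevant score, identically distributed across both groups.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical hiring decisions favoured the majority group at equal skill levels.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.8

# Train on the biased labels; 'group' (or any proxy for it) is used as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired.astype(int))

# The model's selection rates mirror the historical disparity.
predictions = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {predictions[group == g].mean():.2f}")
```

Run as written, the two printed selection rates diverge sharply even though the underlying skill distribution is identical for both groups: the model has faithfully learned the historical bias.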

With those outside the political and corporate world most vulnerable to this kind of machine-driven decision-making, it has never been more urgent to bring civil society into the wider AI debate.

Lack of awareness

Charities and non-profits do not have a seat at the table in many of the forums where these issues are being debated. At the same time, many CSOs may not yet be aware of the issues or understand their importance and relevance to their work.

We cannot just accept this.

CSOs represent many of the most marginalised individuals and communities in our society. Since these groups are likely to be hit soonest and hardest by the negative impacts of AI, it is vital that the organisations representing them are in a position to speak out on their behalf. If they do not, those CSOs will not only be failing to deliver on their missions; the chances of minimising the wider harmful effects of AI will also be significantly reduced.

So, what needs to be done to ensure that CSOs play their full part in shaping the development of AI for the better?

Partly it is an issue of education and skills. CSOs need support if they are to get to grips with AI, help identify ways in which it can be put to use for societal good, and play a key role in identifying the risks and potential unintended consequences.

And this is important: the implications of getting AI wrong are so far-reaching that decisions about its future cannot simply be left up to technologists.

A broad range of communities and organisations representing different viewpoints must be brought into the debate; and if they require up-skilling to make that happen, then the onus is on governments and the tech industry to ensure they get the support they need.

It is also imperative that any civil society involvement is meaningful and that CSOs are valued for the perspective they bring; this must be reflected in the approach of policymakers.

We are already seeing movement on policy. The UK, for instance, is keen to position itself as a world leader in the field of AI ethics. It is particularly well placed to do so, given the role the nation has played in the historical development of AI. A number of new partnership institutions have been established with names that reflect this rich heritage (e.g. The Alan Turing Institute and the Ada Lovelace Institute), and there has also been a great deal of parliamentary interest, with groups established in both the House of Commons and the House of Lords to explore AI.

Any governmental strategy for AI or the wider Fourth Industrial Revolution should acknowledge the role that civil society must play in shaping the development of new technology, as well as the impact that these technologies might have on civil society itself.

The Charities Aid Foundation’s Future:Good project aims to play a part in addressing these challenges.

Through our work we have been helping to drive the debate over the impact of disruptive technologies like AI and blockchain on philanthropy and non-profits. We want to continue to act as a focal point that can help inform CSOs about the key issues, whilst also highlighting to governments and tech companies the value and importance of engaging with civil society.

But we also know that we are not alone, and that there are many others in civil society around the world who care deeply about these issues too. It is vital that all of us find our voices and speak out. That way we can ensure that the people and communities we serve reap the benefits of the Fourth Industrial Revolution, rather than finding themselves its unfortunate victims.
