Without universal AI literacy, AI will fail us
AI literacy will help us navigate a world that will inevitably come to be shaped by AI
- By 2030, AI is expected to add $15.7 trillion to global GDP
- But this technology comes with risks that must be mitigated now to prepare for the future
- AI literacy will equip current and future AI adopters to deploy and use the technology responsibly and equitably
Much has been said about the potential of artificial intelligence (AI) to transform how we live, work, and interact with each other.
But we must also draw attention to a less discussed, but equally important, question — do we have the skills required to develop AI inclusively and use it responsibly?
AI has already arrived
AI adoption is accelerating, and the overall market is expected to be worth $190 billion by 2025. By 2030, AI is expected to add $15.7 trillion to global gross domestic product (GDP).
AI is everywhere — whether we’re aware of it or not.
From displacement and hunger to infectious disease outbreaks and climate change, this technology has the potential to help us tackle some of the toughest global challenges. In fact, AI could enable the accomplishment of 134 targets — out of 169 — across all U.N. Sustainable Development Goals.
But, while AI holds many potential benefits for our society and the planet, it is far from perfect. There are numerous cases of AI being used, intentionally or unintentionally, to exclude and disempower individuals and communities, erode human rights, and undermine our democratic institutions.
For example, facial analysis software has been shown to fail to recognize people with dark skin, with error rates as high as one in three when identifying darker-skinned women. Other AI tools have denied social security benefits to people with disabilities.
These failings stem from bias in the underlying data and a lack of diversity in the teams developing AI systems. According to the Forum's 2021 Global Gender Gap Report, only 32% of those in data and AI roles are women. In 2019, Bloomberg reported that fewer than 2% of technical employees at Google and Facebook were Black.
Add to that a lack of transparency, awareness, and understanding of AI among the general population, and it is no surprise that a national survey found that 84% of Americans lack basic AI literacy.
In this crucial moment, when AI is poised to transform every aspect of our personal and professional lives, universal AI literacy is imperative.
To democratize access to AI and ensure safe and responsible use, three steps must be taken:
1. Foster universal AI literacy
With AI already transforming every aspect of our personal and professional lives, we need to be able to understand how AI systems might impact us — our jobs, education, healthcare — and use those tools in a safe and responsible way.
Building an AI-powered society that benefits all requires each of us to become literate about AI: to know when AI is being used and to evaluate its benefits and limitations in any particular use case that might affect us.
We cannot leave the burden of AI responsibility and fairness solely on the technologists who design it. These tools affect us all, so they should be shaped by us all: students, educators, non-profits, governments, parents and businesses. We need all hands on deck.
2. Prioritize diversity in AI development and deployment
AI is not perfect, and that is partly due to a lack of diversity and representation on the teams designing the technology. When development is guided by the needs, contexts, and values of a select few, the needs of many others are often excluded.
Further, AI systems are being deployed in real-world contexts that are not guided by the same ethical values, nor are they required to be. This could mean AI systems are used for discrimination and human rights abuses, or to undermine institutions.
For AI to be beneficial to everyone in our society, we must take deliberate steps to diversify the teams that are building it.
For example, AI4ALL reaches students at high-school age, when they are old enough to consider AI as a career path and young enough to start thinking about AI literacy and ethics as part of their overall AI education. NetHope is training nonprofits to understand the benefits, limitations, and risks of AI technology and learn how to design and use AI responsibly.
USAID has developed responsible AI training to ensure Agency staff are aware of not just the opportunities but also the risks involved when deploying AI in emerging markets around the world.
With the right data and more diverse teams, AI systems could even learn to advance equality.
3. Get started today
AI is no longer some futuristic idea; it's already being integrated into every aspect of our lives and every industry, from healthcare and education to finance and travel.
The steps we take today — in terms of where we apply AI, who participates in creating it, who can access it, and how informed we all are about its impact on our daily lives — will play an important part in shaping the future of our society. Now is the time for all of us to become AI literate.
How to build AI literacy
As you get started, it is worth considering the following.
First, when teaching about AI, meet people where they are in terms of their knowledge level, learning format, and access to infrastructure such as connectivity, power, and devices. Afterschool programs and community-based organizations can be a great environment for introducing students and young people to AI. For nonprofits, working groups, webinars and workshops are good ways to learn and stay up to date on the latest developments.
It is also important to make learning about AI accessible and relevant by contextualizing the technology's benefits, limitations, and risks with practical examples and case studies: what the benefits and opportunities of AI are, what can go wrong, how to mitigate risks and how to redress harm. All of this should be done while centering the needs and contexts of those most marginalized.
Build capacity for AI while solving real-world problems. Project-based learning, like that provided by NetHope's Africa chatbots, can be a powerful and efficient way to develop skills while creating tools and programs that address immediate needs.
Finally, catalyze conversations between governments and citizens, non-profits and businesses, researchers and communities. Engage everyone in envisioning the future that is representative of all of our needs and contexts.
To help you get started, we’ve curated a set of resources that our organizations have developed and open-sourced. We hope you will use it, share it, build on it.
AI4ALL: Get Involved
World Economic Forum: Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning
NetHope: AI Ethics for Nonprofits Toolkit
MIT D-Lab, with support from USAID: Exploring Fairness in Machine Learning for International Development