This is why AI has a gender problem

A humanoid robot named "Yangyang" shows a facial expression during its demonstration at the Global Mobile Internet Conference (GMIC) 2015 in Beijing, China, on April 29, 2015. The android was produced jointly by China's Shanghai Yangyang Intelligent Robot Science Service Center and Japanese professor Hiroshi Ishiguro, with the aim of popularising robotics among the young.

Male engineers often build humanoid robots to look like women. Image: REUTERS/Kim Kyung-Hoon

Pascale Fung
Director of the Centre for Artificial Intelligence Research (CAiRE) and Professor of Electrical & Computer Engineering, The Hong Kong University of Science and Technology
This article is part of: Annual Meeting of the New Champions

“Sorry, I don’t know that one.” Alexa, Cortana, even the automated announcements on public transport – they all have one thing in common: a female voice or female avatar.

I have been working on dialogue systems since the first generation of projects from the early 1990s. By the end of that decade, many American call centres were answered by robot “assistants” who would cheerfully greet customers with “How may I help you?” and handle their various requests for flight booking, movie ticketing and so on.

I proposed the concept of “empathetic machines”: dialogue systems that can detect human emotions in addition to handling content queries. In 2015, my students and I set out to showcase our work on interactive dialogue systems in a demonstration for the World Economic Forum.

To give our empathetic machine an avatar, we looked at cartoon robots like Wall-E or Big Hero 6, but they seemed inhuman and hard for users to relate to. I also remembered the “paper clip” assistant in Microsoft Office from years back, and how few users felt inclined to engage with that “virtual assistant”.

Little did I know that the creation of the “empathetic” avatar would plunge me and my students deep into the history of robots, virtual assistants, sci-fi, and gender roles in the AI world.

The virtual world

The very first “chatbot” was given a woman’s name, Eliza, even though it was a silent software programme with no voice or physical form. When I took the “Introduction to Artificial Intelligence” class at Columbia University, one of our first programming assignments was to create our own version of Eliza.

In 2016, Microsoft created a chatbot named Tay, with the tweeting style of a 16-year-old girl, who learned to tweet from millions of online Twitter users. Microsoft Xiaoice, who has more than 100 million users on WeChat, is also a female avatar.

Sophia by Hanson Robotics and Erica by Hiroshi Ishiguro are perhaps the most famous humanoid “celebrities”. Sophia spoke on 60 Minutes with Charlie Rose, appeared on the Tonight Show with Jimmy Fallon, was covered by major newspapers, and had an on-screen date with Will Smith. Saudi Arabia granted her citizenship.


Professor Ishiguro, the creator of Erica, famously said that he was trying to make Erica “the most beautiful woman”. There are different versions of Erica, ranging from “demurely conservative” to “sassy and stylish”. When Japanese men went on a “date” with Erica, they blushed while flirting with her, even though they were fully aware that “she” was actually a machine.


What Sophia and Erica have in common is that they are both conversation companions. Are virtual assistants and conversation companions female because of the social belief that women are better in these roles?

Meanwhile, physical robots like Hermes, the rescue robot from MIT, and Atlas, the parkour-capable robot from Boston Dynamics, look distinctly male in their shape and form. Is that because we associate physical strength with a male-looking shape?

The Team IHMC Robotics Atlas. Image: REUTERS/Patrick T. Fallon

Big tractors and vehicle lifters are certainly strong, but they assume no human gender form. The drones created by DJI – a company founded by one of our students, Frank Wang, and my colleague Prof Zexiang Li – are robots with autonomous navigation, but they do not resemble either gender.

In their paper on the gender stereotyping of robots, Florian Dufour and Céline Ehrwein Nihan state: “When no objective characteristics (congruent with a success on a task) of robots are known, social beliefs about gender will have a strong influence on the perceiver’s judgments”. Are search and rescue robots built in the male form because of social beliefs about the gender of human search-and-rescue workers? Do gender roles in AI and robotics simply reflect those of the physical world?

The physical world

Data from the American Psychological Association's Center for Workforce Studies show that women make up 76% of new psychology doctorates, 74% of early career psychologists and 53% of the psychology workforce. According to the US Department of Labor, 94.6% of secretaries and administrative assistants are women, while 96.5% of firefighters are men.

Taking a closer look at AI, the World Economic Forum identified the gender gap in the talent pool by surveying LinkedIn users who self-identified as possessing AI skills; 78% of them were male. Element.ai sampled AI researchers from three 2017 conferences in machine learning and machine vision and found that, across 23 countries, 88% of the researchers were men.

According to a WIRED article: “At Google, 21% of technical roles are filled by women, according to company figures released in June.” When WIRED reviewed Google’s AI research pages earlier this month, they listed 641 people working on “machine intelligence”, of whom only 10% were women.

Facebook said last month that 22% of its technical workers are women. Pages for the company’s AI research group listed 115 people earlier this month, of whom 15% were women.

The article also noted: “The scarcity of women among machine learning researchers is hardly surprising. The wider field of computer science is well documented as being dominated by men. Government figures show that the proportion of women awarded bachelor's degrees in computing in the US has slid significantly over the past 30 years, the opposite of the trend in physical and biological sciences.”

From boy genius to king-maker

This decline in female participation coincided with the rise of the computing industry – Apple went public in 1980, Microsoft in 1986. The internet era saw the rise of Amazon, eBay, Google, Facebook, PayPal, and more.

One of the important indicators of gender roles in tech over the last 30 years, and in computing and AI in particular, is the image of the “boy genius” who worked in a garage and changed the world. From Bill Gates and Steve Jobs to Jeff Bezos, Jack Ma of Alibaba and Pony Ma of Tencent, the “kings” of the digital and AI eras are male. But who are the king-makers?

Venture capital (VC) investment fuelled the growth of these companies. To make it big, all tech entrepreneurs need investment from venture capitalists and all the founders mentioned above took investment from these “king-makers” at a critical stage of growth.

However, only 1-2% of the startups that receive VC funding are led by female founders, even though female-led companies make 200% returns on investment. This is the most gender-biased segment in the tech industry.

The reasons for that are complicated. VCs are overwhelmingly male themselves, and investors trust people who are more like them.

In addition, research has shown that men are promoted for their potential, while women are promoted for their achievements. VC investment is all about betting on the potential of the founder.

The more male-led companies are funded, the more success stories emerge among them, and the more VCs choose to fund male-led companies. Gender roles are reinforced, and the representation of women continues to decrease.

Fiction reinforces stereotypes

The popular media further reinforces this image of the “boy genius” or “wunderkind”. TV shows from “Silicon Valley” to “The Big Bang Theory” and movies from “The Matrix” to “Iron Man” are rife with portrayals of male tech titans.

Nevertheless, one could argue that art is merely reflecting life. Elon Musk is widely touted as the real-life “Iron Man” – in fact he had a cameo role in the movie Iron Man 2.

It seems that the superstars of the internet and AI era are all male in the popular imagination. The fictional male techie characters are often portrayed as “adorable dorks”, socially awkward nerds who are smart and eventually get the girl.

In our recent research analysing the language of male characters in popular media, we found that male techie characters in these shows consistently make sexist comments, often cast as part of their “social clumsiness”.

This stereotype is so strong that investors or business customers who meet the rare female tech founder often blurt out their disbelief, and students who meet female AI researchers whose papers they have read express shock on discovering the author's gender.

We also see that AI is subject to gender stereotypes in science fiction. From the early sci-fi film “Metropolis” to the more recent “Ex Machina”, to popular TV series such as Humans and Westworld, we see female androids created by men in a sexualized form, starting in submissive servitude or becoming rebellious femme fatales.

We are used to sci-fi heroes and villains with physical power being male – the Terminator, Neo in The Matrix, Ultron, and so on. There have been few attempts at creating female robots with physical power, such as the T-X in the Terminator franchise.

Virtual assistants in sci-fi films, such as Samantha in “Her”, mostly have a female voice. The exceptions are Jarvis, Iron Man’s wise guru, who is given more than just an assistant role, and the menacing HAL in 2001: A Space Odyssey.


Virtual agents

As we designed our avatar for the dialogue system, we looked at the history of virtual agents, trying to avoid the stereotypes from science fiction.

Psychologists at one of the two leading US speech technology companies of the 1990s ascertained that consumers prefer the voice of a woman when they are trying to get assistance from a call centre operator.

Ever since, virtual assistants have been designed with a submissive personality and a calm, polite, and at times eager and cheerful female voice.

To design our system, we studied users’ preferences for different perceived personalities in a virtual agent, ranging from submissive to authoritative to neutral, and found that users did not have a clear preference for the submissive type.

Our team decided to choose a female avatar and create a feminist persona for her. She is Zara the Super Girl, who converses with users to detect their personality and offers empathetic reactions to their stories.


She also detects sexist language and responds with a comeback. Our team continues to work on making chatbots more personalized by automatically detecting and adapting to the human interlocutor. We endow our chatbots with better memory and a greater capability to detect human emotions and respond with empathy. More importantly, we avoid hard-coding these capabilities into chatbots by enabling them to learn from human-to-human conversations, while paying attention to personality, affect, and emotional traits.

Sexist and racist language is common on the internet. The chatbot Tay from Microsoft learned such language from online tweets. In 2018, our team worked on detecting both sexist and racist language in online tweets using a two-step classification scheme: first, each tweet is classified as either abusive or not; then, within the abusive set, our system decides whether it is sexist or racist.

This two-step scheme was an attempt to mitigate the problem of inadequately labelled training data. In fact, the lack of large amounts of labelled data leads to the common phenomenon of data and machine bias towards the dominant demographic group.
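For illustration, here is a minimal sketch of how such a two-step cascade might be wired together in Python, assuming a small labelled tweet corpus is available. The TF-IDF features, logistic regression classifiers and the placeholder tweets and labels below are hypothetical stand-ins chosen for brevity, not the actual models or data our team used.

```python
# A minimal sketch of a two-step abusive-language cascade (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled tweets: "none" means not abusive; abusive tweets
# carry a finer-grained "sexist" or "racist" label.
tweets = [
    "what a lovely day",
    "women cannot be engineers",        # placeholder abusive example
    "go back to where you came from",   # placeholder abusive example
    "looking forward to the conference",
]
fine_labels = ["none", "sexist", "racist", "none"]
abusive_labels = [0 if lab == "none" else 1 for lab in fine_labels]

# Step 1: abusive vs. non-abusive classifier, trained on all tweets.
step1 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
step1.fit(tweets, abusive_labels)

# Step 2: sexist vs. racist classifier, trained only on the abusive subset.
abusive_idx = [i for i, y in enumerate(abusive_labels) if y == 1]
step2 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
step2.fit([tweets[i] for i in abusive_idx], [fine_labels[i] for i in abusive_idx])

def classify(tweet: str) -> str:
    """Cascade: flag abuse first, then decide which kind it is."""
    if step1.predict([tweet])[0] == 0:
        return "not abusive"
    return step2.predict([tweet])[0]

print(classify("what a lovely day"))
```

The point of the cascade is that the coarse abusive/non-abusive model can be trained on all the labelled tweets, while the finer sexist/racist distinction only needs labels for the much smaller abusive subset, which eases the labelled-data problem described above.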

Research teams around the world are working to solve this problem. Most prominently, we have seen workshops specializing in abusive language detection and the detection of sexism at the top natural language processing conference. Facebook AI has a group working on “machine integrity” issues. Google has made efforts to reduce search bias, selection bias, data bias and latent bias in machine learning.

A partnership on AI

In 2017, American AI companies, including Google, Microsoft, Facebook, and Amazon, started the Partnership on AI to Benefit People and Society. The Partnership has since grown to 80 partners from 13 countries, including China’s Baidu. Over 50% of the partners are non-profits.

It is one of the biggest groups in which AI engineers work with non-experts to come up with best practices and guidelines in different areas of AI.

I am part of the group working on human-machine interactions, and our work includes looking at best practices for designing fair and accurate virtual agents. One of the group leaders is Hiroaki Kitano, the father of AIBO, a robot dog that endeared itself to tens of thousands of elderly Japanese people without resorting to specific gender stereotypes. This approach is gaining traction. Siri users, for example, can choose the gender of its voice.

To create a gender-equal world of AI, we need conscientious work from the AI research community to recruit and promote more female talent; we need technical solutions for fair and accountable AI; investors need to support more female founders; and we need to create more images of AI and robots in the media that are free of gender stereotypes.
