
Telling languages apart may begin in the womb


Utako Minai says that based on studies, it is clear that fetuses can hear speech whilst in the womb. Image: REUTERS/Mansi Thapliyal

Rick Hellman

A month before birth, fetuses can distinguish between someone speaking to them in English and in Japanese.

“Research suggests that human language development may start really early—a few days after birth,” says Utako Minai, associate professor of linguistics at the University of Kansas.

“Babies a few days old have been shown to be sensitive to the rhythmic differences between languages. Previous studies have demonstrated this by measuring changes in babies’ behavior; for example, by measuring whether babies change the rate of sucking on a pacifier when the speech changes from one language to a different language with different rhythmic properties.

“This early discrimination led us to wonder when children’s sensitivity to the rhythmic properties of language emerges, including whether it may, in fact, emerge before birth,” Minai says. “Fetuses can hear things, including speech, in the womb.”

An earlier study had suggested that fetuses can discriminate between different languages based on their rhythmic patterns, but the current work, published in the journal NeuroReport, used a more accurate non-invasive technology, the magnetocardiogram (MCG).

“The previous study used ultrasound to see whether fetuses recognized changes in language by measuring changes in fetal heart rate,” Minai says. “The speech sounds that were presented to the fetus in the two different languages were spoken by two different people in that study.

“They found that the fetuses were sensitive to the change in speech sounds, but it was not clear if the fetuses were sensitive to the differences in language or the differences in speaker, so we wanted to control for that factor by having the speech sounds in the two languages spoken by the same person.”

In the womb, speech is “muffled, like the adults talking in a Peanuts cartoon, but the rhythm of the language should be preserved and available for the fetus to hear.”


Two dozen women, roughly eight months pregnant on average, were examined using the MCG.

Fetal biomagnetometers fit over the maternal abdomen and detect tiny magnetic fields that surround electrical currents from the maternal and fetal bodies, including heartbeats, breathing, and other body movements.

“The biomagnetometer is more sensitive than ultrasound to the beat-to-beat changes in heart rate,” says Kathleen Gustafson, a research associate professor of neurology at the University of Kansas.

“Obviously, the heart doesn’t hear, so if the baby responds to the language change by altering heart rate, the response would be directed by the brain.”

Which is exactly what the study found.

“The fetal brain is developing rapidly and forming networks,” Gustafson says. “The intrauterine environment is a noisy place. The fetus is exposed to maternal gut sounds, her heartbeats and voice, as well as external sounds. Without exposure to sound, the auditory cortex wouldn’t get enough stimulation to develop properly. This study gives evidence that some of that development is linked to language.”

For the study, a bilingual speaker made two recordings, one each in English and Japanese, to be played in succession to the fetus. The two languages are considered rhythmically distinct: English has a dynamic rhythmic structure resembling Morse code signals, while Japanese has a more regular-paced rhythm.

Sure enough, fetal heart rates changed when the fetuses heard the unfamiliar, rhythmically distinct language (Japanese) after a passage of English speech; heart rates did not change when a second passage of English was presented instead.

“The results came out nicely, with strong statistical support,” Minai says. “These results suggest that language development may indeed start in utero. Fetuses are tuning their ears to the language they are going to acquire even before they are born, based on the speech signals available to them in utero.

“Prenatal sensitivity to the rhythmic properties of language may provide children with one of the very first building blocks in acquiring language.”

The National Institutes of Health funded the work.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


© 2024 World Economic Forum