Emerging Technologies

This mind-reading AI can see what you're thinking - and draw a picture of it

Chilean software engineer Jorge Alviarez, one of the creators of Lifeware's LifewareIntegra program, which allows people with disabilities to use computers, places head sensors on Jenifer Astorga, 26, who has quadriplegia, during a training session in the city of Valparaiso, about 75 miles (121 km) northwest of Santiago, on January 18, 2011. Jenifer is the first person to use the LifewareIntegra system, developed by a group of computer science students at the Federico Santa Maria Technical University, which lets quadriplegic users operate a computer through brain activity picked up by sensors on the head device. REUTERS/Eliseo Fernandez

'Functional magnetic resonance imaging' allows computers to visualize what people are thinking about. Image: REUTERS/Eliseo Fernandez

Adam Jezard
Senior Writer, Forum Agenda

Scientists around the world are racing to be the first to develop artificially intelligent algorithms that can see inside our minds.

The idea is not new: in the science fiction of the 1950s and 60s, crazed doctors were frequently seen putting weird contraptions on people’s heads to decipher their thoughts. British TV serial Quatermass and the Pit – in which such a machine is used to translate the thoughts of alien invaders – is a prime example.

Now reality is catching up with fantasy. In the past year, AI experts in China, the US and Japan have published research showing that computers can reconstruct what people are looking at or thinking about by pairing functional magnetic resonance imaging (or fMRI) machines – which measure brain activity – with deep neural networks, computing systems loosely modelled on the human brain.
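The core idea behind such work – learning a mapping from measured brain activity back to the stimulus – can be illustrated with a deliberately simplified sketch. The sizes, the simulated "voxel" responses and the ridge-regression decoder below are all assumptions for illustration; none of it reproduces the published pipelines:

```python
import numpy as np

# Toy sketch of fMRI-to-image decoding, NOT any published method.
# We simulate voxel responses as a noisy linear mixture of image pixels,
# then fit a ridge-regression "decoder" mapping voxels back to pixels.
rng = np.random.default_rng(0)

n_images, n_pixels, n_voxels = 200, 16, 64    # hypothetical sizes
images = rng.random((n_images, n_pixels))     # stand-in stimulus images
mixing = rng.normal(size=(n_pixels, n_voxels))
voxels = images @ mixing + 0.1 * rng.normal(size=(n_images, n_voxels))

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ images)

# Reconstruct a held-out "image" from its simulated brain activity
test_img = rng.random(n_pixels)
test_vox = test_img @ mixing + 0.1 * rng.normal(size=n_voxels)
recon = test_vox @ W

corr = np.corrcoef(test_img, recon)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

The real studies replace the linear decoder with deep neural networks and real fMRI recordings, but the train-on-pairs, decode-held-out-activity structure is the same.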

Is it telepathy?

While headlines around the world have screamed out that AI can now read minds, the reality seems to be more prosaic. Computers are not yet able to anticipate what we think, feel or desire. As science writer Anjana Ahuja remarked in the Financial Times, rather than telepathy, “a more accurate, though less catchy, description would be a ‘reconstruction of visual field’ algorithm”.

Most of the research so far has been aimed at deciphering images of what subjects are looking at or, in limited circumstances, what they are thinking about.

Earlier studies focused on programs that produced images of shapes or letters they had been trained to recognize from subjects' brain activity.

However, in one recent piece of research, from Japan’s ATR Computational Neuroscience Laboratories and Kyoto University, scientists said that not only was a program able to decipher images it had been trained to recognize when people looked at them but: “our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.”

Image: ATR Computational Neuroscience Laboratories/Kyoto University

In other words, it could decode and represent an image it had not been “trained” to see.

Think that sentence again?

Meanwhile, scientists at Carnegie Mellon University in the US claim to have gone a step closer to real “mind reading” by using algorithms to decode brain signals that identify deeper thoughts such as “the young author spoke to the editor” and “the flood damaged the hospital”.

The technology, the researchers say, can represent complex events expressed as sentences, along with semantic features such as people, places and actions, and use these to predict what types of thoughts are being contemplated.

Image: Human Brain Mapping/CMU

After being trained on the brain activation patterns associated with 239 sentences, the program was able to predict a 240th sentence with 87% accuracy.
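That hold-one-out style of evaluation can be sketched in miniature. Everything below is a toy simulation under assumed sizes and synthetic data – it shows the leave-one-out identification procedure, not the CMU model itself:

```python
import numpy as np

# Hypothetical leave-one-out sketch: learn a mapping from semantic
# features to (simulated) brain patterns on 239 sentences, then test
# whether the held-out 240th sentence's predicted pattern matches its
# actual pattern better than every alternative.
rng = np.random.default_rng(1)

n_sent, n_feat, n_vox = 240, 20, 50           # toy sizes, not from the paper
features = rng.normal(size=(n_sent, n_feat))  # stand-in semantic features
true_map = rng.normal(size=(n_feat, n_vox))
patterns = features @ true_map + 0.5 * rng.normal(size=(n_sent, n_vox))

hits = 0
for held_out in range(n_sent):
    train = np.delete(np.arange(n_sent), held_out)
    X, Y = features[train], patterns[train]
    W = np.linalg.solve(X.T @ X + np.eye(n_feat), X.T @ Y)  # ridge fit
    pred = features[held_out] @ W
    # Which sentence's actual pattern best matches the prediction?
    scores = [np.corrcoef(pred, patterns[i])[0, 1] for i in range(n_sent)]
    hits += int(np.argmax(scores) == held_out)

print(f"leave-one-out identification accuracy: {hits / n_sent:.2f}")
```

Because each test sentence is excluded from training, a high score means the model has genuinely generalized to unseen thoughts rather than memorized its training set.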

Marcel Just, who is leading the research, said: “Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence.

“This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”
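The "smearing" Just describes can be made concrete with a small simulation. The gamma-shaped response curve and timings below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustrative sketch of why fMRI "smears" nearby events: each brief
# neural event evokes a slow hemodynamic response lasting many seconds,
# so two words read half a second apart produce signals that overlap
# almost completely. The response shape here is a crude assumption.
t = np.arange(0, 20, 0.1)            # seconds, 0.1 s resolution
hrf = (t ** 5) * np.exp(-t)          # gamma-like hemodynamic response
hrf /= hrf.max()                     # peak response normalized to 1

word1 = np.zeros_like(t); word1[0] = 1.0   # first word read at t = 0.0 s
word2 = np.zeros_like(t); word2[5] = 1.0   # second word read at t = 0.5 s

r1 = np.convolve(word1, hrf)[: t.size]     # BOLD-like response to word 1
r2 = np.convolve(word2, hrf)[: t.size]     # BOLD-like response to word 2

overlap = np.corrcoef(r1, r2)[0, 1]
print(f"correlation between the two responses: {overlap:.2f}")
```

The two responses are nearly indistinguishable, which is why decoding multi-word thoughts requires disentangling heavily overlapping signals.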

Ethical challenges

The outcomes of such research promise much that could benefit humanity. The developments show we have come a long way since the fictional Professor Quatermass used a mind-reading machine to interpret the thoughts of Martians.

Yes, there are fears we could develop killing machines that operate at the speed of human thought, but equally such advances could help those without the powers of speech or movement, and speed up multilingual translations, without the need for electrodes to be implanted in people’s heads.

Many, including serial tech entrepreneur Elon Musk, are excited by the opportunities such technologies could bring to the lives of people with disabilities, but researchers and governments have yet to spell out how they can ensure these are used to benefit the human race rather than harm it.

And, despite rapid developments here and in related areas such as gene editing and merging humans with computers, we are no nearer a global agreement on what ethical and moral standards are needed in this brave new world.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
