How does the brain learn to make sense of the visual world?
Image: A woman walks past a display of a brain slice of patient “H.M.” REUTERS/Brian Snyder.
The question is straightforward enough: How does the brain learn to make sense of the visual world? The full answer is complicated by the fact that infants can’t talk about what they’re taking in. Pawan Sinha appears to have found a way around this obstacle. Through his research, and with the help of art-making, the professor of vision and computational neuroscience works with children who have gained sight after a lifetime of blindness, and from them he gathers data on how the brain starts learning the moment it begins to see.
The findings, he says, have multiple applications. They carry over into his work on autism, where he hopes for a breakthrough in how cognitive issues are diagnosed and treated. They also have the potential to shape learning technology, yielding machines that are both more adaptable and more efficient, whether the task is training vision systems on video, automating face recognition or ensuring industrial quality control.
No need for difficulty
One processing question Sinha examines is how a person can still recognize an object even though its appearance varies from one occasion to the next. A long-held scientific view was that extracting fine details is crucial for recognition, and much of machine vision was built on this idea, Sinha says.
The trouble with such systems has been their operational brittleness: they often lack robustness. In his research, Sinha discovered a more parsimonious encoding strategy; the brain appears to grab onto coarse information and discard smaller, unnecessary detail. Rather than trying to determine precisely where a fine edge sits in an image and exactly how strong it is, he says, many neurons seem to care only about the coarse placement of large regions and a similarly coarse assessment of their relative brightness. The finding shifted Sinha’s thinking. “The brain may adopt fairly simple strategies to answer seemingly complex questions,” he says.
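To make the strategy concrete, here is a minimal Python sketch of coarse, ordinal matching. It is a toy illustration of the idea as described above, not Sinha’s published method; the grid size, the pairwise brightness-sign signature and the agreement score are all assumptions made for the example.

```python
import numpy as np

def coarse_blocks(image, grid=4):
    """Average brightness of each cell in a grid x grid partition
    of a 2-D grayscale image."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    return np.array([[image[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
                      for c in range(grid)]
                     for r in range(grid)])

def ordinal_signature(blocks):
    """Keep only which blocks are brighter than which: a matrix of
    +1 / 0 / -1 pairwise brightness relationships, discarding the
    fine detail and exact magnitudes."""
    v = blocks.ravel()
    return np.sign(v[:, None] - v[None, :])

def ordinal_match(img_a, img_b, grid=4):
    """Fraction of block-pair relationships on which two images agree."""
    sig_a = ordinal_signature(coarse_blocks(img_a, grid))
    sig_b = ordinal_signature(coarse_blocks(img_b, grid))
    return (sig_a == sig_b).mean()

# Usage (hypothetical arrays): score = ordinal_match(template, frame)
# A score near 1.0 means the coarse brightness layout matches the template.
```

Because the signature records only which large regions are brighter than which, moderate changes in lighting and fine detail leave it largely intact, which is exactly the robustness the coarse strategy is after.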
With this approach, Sinha created vision systems for tasks such as face detection and industrial inspection that, he says, are lightweight yet robust in challenging real-world settings. A camera can be set up and trained on an object, and the technology then detects new instances of the object, or its flaws. The computational simplicity of the approach lets the system run in real time; in a production setting, anything slower wouldn’t be acceptable, he says.
Another possibility would let a person photograph a product with a cellphone camera and have a database identify similar-looking objects. This would, for instance, allow a person to search for products pictured in a magazine or find items like those on a store shelf. It still needs development, but, once completed, “there would be many interesting applications. Patterns and objects in the real world would, in effect, become ‘hyper-links’ to access a variety of related information,” Sinha says.
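As a sketch of how such a lookup might work, assuming a simple signature-based design rather than any system Sinha has described: each catalog image is indexed under a coarse color histogram, and a new photo is matched against the index. The `ProductIndex` class, the descriptor and the intersection score here are all hypothetical choices made for illustration.

```python
import numpy as np

def color_signature(image, bins=8):
    """Normalized joint histogram over coarsely quantized RGB values,
    for an H x W x 3 uint8 image."""
    hist, _ = np.histogramdd(image.reshape(-1, 3).astype(float),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return (hist / hist.sum()).ravel()

class ProductIndex:
    """Hypothetical catalog mapping product names to coarse signatures."""
    def __init__(self):
        self.items = []  # list of (name, signature) pairs

    def add(self, name, image):
        self.items.append((name, color_signature(image)))

    def query(self, photo, k=3):
        """Return the k catalog items whose signatures best overlap
        the photo's, ranked by histogram intersection."""
        q = color_signature(photo)
        ranked = sorted(self.items,
                        key=lambda item: -np.minimum(q, item[1]).sum())
        return [name for name, _ in ranked[:k]]
```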
Science on the receiving end
In 2005, as an outgrowth of his vision research, Sinha started Project Prakash, named for the Sanskrit word for “light.” The goal was to go into remote areas of India and treat children who had been blind since birth. More than 40,000 children have been screened; more than 450 have gained sight through surgery. The project began as a humanitarian effort, but Sinha says it fortuitously ended up producing valuable scientific information as well: because the children begin seeing as soon as their bandages are removed, he and his students can study how the brain develops with the onset of sight.
The only comparable situation is a newborn’s first exposure to the visual world, but babies can’t give complex feedback. A 10-year-old Prakash child can. “For neuroscience, this is like a goldmine of data,” Sinha says. The “gateway result” was that even after prolonged deprivation, the brain retains significant plasticity, enough to reorganize quickly and learn.
Following up on this result, Sinha says he discovered that a key element in the learning equation is dynamic information. The brain can and does learn from static images, but movement speeds up and simplifies the otherwise complex process by highlighting which aspects of the visual world belong together and which need to be segregated. “You put the world in motion and it’s as if a magical switch goes off,” he says.
The discovery opens up potential applications. Rather than the thousands of images that computer vision systems have often required, a few minutes of video may produce equivalent results, for machines and people alike. Not only is this more effective, Sinha says, but it also means that people and companies can achieve useful results without needing the luxury of time.
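A brief Python sketch of why motion helps, offered as an illustrative assumption rather than a description of Sinha’s experiments or systems: pixels that change together across a few video frames can be grouped as a single object, with no fine-grained edge analysis at all.

```python
import numpy as np

def motion_mask(frames, threshold=10.0):
    """Mark pixels whose brightness changes between consecutive
    grayscale frames; changing together is the grouping cue."""
    diffs = [np.abs(b.astype(float) - a.astype(float)) > threshold
             for a, b in zip(frames, frames[1:])]
    # A pixel belongs to the moving figure if it changed in any frame pair.
    return np.logical_or.reduce(diffs)

def figure_ground(frames):
    """Split the last frame into a moving 'figure' and a static
    'ground' using nothing but the motion mask."""
    mask = motion_mask(frames)
    last = frames[-1]
    return np.where(mask, last, 0), np.where(mask, 0, last)
```

The economy is the point: a handful of frames and one threshold stand in for the figure-ground segregation that static images make laborious.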
Better understanding autism
The Project Prakash findings overlap with and influence Sinha’s work on autism. At the root of both is a processing issue: newly sighted children and children with autism alike focus on the details of an object; their brains seem to over-fragment the visual field. Instead of perceiving overall gestalts, the children tend to fixate on local bits and pieces. The difference is that children with autism appear to retain this bias while the Prakash children grow out of it, Sinha says.
Again, motion may be key. Sinha’s hypothesis is that children with autism may have difficulty anticipating events in dynamic settings. The eventual findings, he says, could change how language and social-interaction processing in autism are approached and lead to better diagnosis and treatment. “If we do validate the theory, then we would have advanced our understanding of one of the great riddles in brain science,” he says.
While he gathers data, Sinha says he has already received a measure of validation from parents. Scientists can work with children, but the time for such interactions is limited, so the resulting picture is a “small vignette” at best, compromised by numerous practical constraints. Parents are 24-hour observers, and from that vantage point, Sinha says, they see merit in, and a basis for, his theory.
Painting in colors
Sinha has drawn and painted since he was a child, and he uses art in his work. The questions are the same: How does the brain recognize and then communicate an image? In India, Sinha observed that even after the Prakash children had gained sight, they remained shy and withdrawn, with limited opportunities to socialize. His group developed an activity called UnrulyArt, in which kids are free to play with colors and splatter paint at will.
The effects are manifold, he says. The children become more outwardly engaged. They also produce beautiful pictures, which raises their self-confidence. Adults admire their work, adding to the boost, and parents see the outside world viewing their children in a new light. Buoyed by that success, Sinha is also conducting UnrulyArt sessions with special-needs children in the United States.
Sinha says that he doesn’t exactly know why art helps children become more verbal. It could be the forced interaction of a project. He says that he might eventually prove his hypothesis, but this is one instance where formal data aren’t necessary. “Even if we never know how art has that beneficial impact, I think it’s an activity worth undertaking,” he says.