This is how humans have learned to use tools to solve problems
Human intuition and experience have told us a book can keep a table steady. Image: Unsplash/Cesar Carlevarino Aragon
- Researchers at MIT’s Center for Brains, Minds and Machines have found that humans rely on three critical capabilities to solve physical problems.
- These are prior knowledge from similar situations, the ability to imagine the effects of one's actions, and a way to rapidly update strategy after failure.
- The team designed a novel task, the Virtual Tools game, that taps into tool-use abilities - and tried it on humans as well as an AI model.
Human beings are naturally creative tool users. When we need to drive in a nail but don’t have a hammer, we easily realize that we can use a heavy, flat object like a rock in its place. When our table is shaky, we quickly find that we can put a stack of paper under the table leg to stabilize it. But while these actions seem so natural to us, they are believed to be a hallmark of great intelligence — only a few other species use objects in novel ways to solve their problems, and none can do so as flexibly as people. What provides us with these powerful capabilities for using objects in this way?
In a new paper published in the Proceedings of the National Academy of Sciences describing work conducted at MIT’s Center for Brains, Minds and Machines, researchers Kelsey Allen, Kevin Smith, and Joshua Tenenbaum study the cognitive components that underlie this sort of improvised tool use. They designed a novel task, the Virtual Tools game, that taps into tool-use abilities: People must select one object from a set of “tools” that they can place in a two-dimensional, computerized scene to accomplish a goal, such as getting a ball into a certain container. Solving the puzzles in this game requires reasoning about a number of physical principles, including launching, blocking, or supporting objects.
The team hypothesized that there are three capabilities that people rely on to solve these puzzles: a prior belief that guides people’s actions toward those that will make a difference in the scene, the ability to imagine the effect of their actions, and a mechanism to quickly update their beliefs about what actions are likely to provide a solution. They built a model that instantiated these principles, called the “Sample, Simulate, Update,” or “SSUP,” model, and had it play the same game as people. They found that SSUP solved each puzzle at similar rates and in similar ways as people did. On the other hand, a popular deep learning model that could play Atari games well but did not have the same object and physical structures was unable to generalize its knowledge to puzzles it was not directly trained on.
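The "Sample, Simulate, Update" loop described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the authors' implementation: `simulate`, `score`, and `sample_prior` are hypothetical stand-ins for a noisy mental physics engine, a goal-proximity measure, and an object-oriented prior over actions.

```python
import math
import random

def ssup_solve(simulate, score, sample_prior, n_iters=100,
               sigma=0.5, mix=0.3, threshold=0.95):
    """Toy Sample-Simulate-Update loop (illustrative only)."""
    best_action, best_score = sample_prior(), -1.0
    for _ in range(n_iters):
        if random.random() < mix or best_score < 0:
            action = sample_prior()                        # Sample: draw from the prior
        else:
            action = best_action + random.gauss(0, sigma)  # ...or refine a promising action
        outcome = simulate(action)                         # Simulate: imagine the result
        s = score(outcome)
        if s > best_score:                                 # Update: keep what works best
            best_action, best_score = action, s
        if best_score >= threshold:
            break                                          # confident enough to act
    return best_action, best_score
```

As a toy usage, the "puzzle" below is just landing a value near a target position; a score near 1.0 means the imagined outcome matches the goal:

```python
target = 3.0
action, confidence = ssup_solve(
    simulate=lambda x: x + random.gauss(0, 0.05),    # imperfect mental physics
    score=lambda y: math.exp(-(y - target) ** 2),    # 1.0 when the outcome hits the goal
    sample_prior=lambda: random.uniform(0.0, 10.0),  # where a tool could be placed
)
```

The key point the model captures is that only a handful of imagined or real attempts are needed, because each failure still narrows down where the next attempt should go.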
This research provides a new framework for studying and formalizing the cognition that supports human tool use. The team hopes to extend this framework to not just study tool use, but also how people can create innovative new tools for new problems, and how humans transmit this information to build from simple physical tools to complex objects like computers or airplanes that are now part of our daily lives.
Kelsey Allen, a PhD student in the Computational Cognitive Science Lab at MIT, is excited about how the Virtual Tools game might support other cognitive scientists interested in tool use: “There is just so much more to explore in this domain. We have already started collaborating with researchers across multiple different institutions on projects ranging from studying what it means for games to be fun, to studying how embodiment affects disembodied physical reasoning. I hope that others in the cognitive science community will use the game as a tool to better understand how physical models interact with decision-making and planning.”
Joshua Tenenbaum, professor of computational cognitive science at MIT, sees this work as a step toward understanding not only an important aspect of human cognition and culture, but also how to build more human-like forms of intelligence in machines. “Artificial Intelligence researchers have been very excited about the potential for reinforcement learning (RL) algorithms to learn from trial-and-error experience, as humans do, but the real trial-and-error learning that humans benefit from unfolds over just a handful of trials — not millions or billions of experiences, as in today’s RL systems,” Tenenbaum says. “The Virtual Tools game allows us to study this very rapid and much more natural form of trial-and-error learning in humans, and the fact that the SSUP model is able to capture the fast learning dynamics we see in humans suggests it may also point the way towards new AI approaches to RL that can learn from their successes, their failures, and their near misses as quickly and as flexibly as people do.”
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.