This computer turns text into animation
Scientists have made tremendous leaps in getting computers to understand natural language, as well as in generating a series of physical poses to create realistic animations. These capabilities might as well exist in separate worlds, however, because the link between natural language and physical poses has been missing.
The researchers are working to bring those worlds together using a neural architecture they call Joint Language-to-Pose, or JL2P. The JL2P model jointly embeds sentences and physical motions, so it can learn how language relates to action, gestures, and movement.
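To make the idea of a joint embedding concrete, here is a minimal sketch in PyTorch. It is not the JL2P architecture itself; it only illustrates the general technique the article describes: encoding a sentence and a pose sequence into the same vector space and training so that matching pairs land close together. All class names, dimensions, and the choice of GRU encoders are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    """Sketch of a language/pose joint embedding (hypothetical,
    simplified; the actual JL2P model differs in its details)."""
    def __init__(self, vocab_size, pose_dim, hidden_dim=128, embed_dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden_dim)
        # Sentence encoder: GRU over word embeddings.
        self.text_enc = nn.GRU(hidden_dim, embed_dim, batch_first=True)
        # Pose encoder: GRU over per-frame joint coordinates.
        self.pose_enc = nn.GRU(pose_dim, embed_dim, batch_first=True)

    def forward(self, token_ids, poses):
        # token_ids: (batch, words); poses: (batch, frames, pose_dim)
        _, h_text = self.text_enc(self.word_emb(token_ids))
        _, h_pose = self.pose_enc(poses)
        # Final hidden states serve as the shared embeddings.
        return h_text.squeeze(0), h_pose.squeeze(0)

model = JointEmbedding(vocab_size=5000, pose_dim=63)  # e.g. 21 joints x 3D
z_text, z_pose = model(torch.randint(0, 5000, (8, 12)),
                       torch.randn(8, 30, 63))
# Train so paired sentences and motions end up nearby in the
# shared space, e.g. with an MSE or contrastive loss.
loss = nn.functional.mse_loss(z_text, z_pose)
```

Once such a space is learned, a sentence embedding can be decoded back into a pose sequence, which is what lets text drive animation.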
“I think we’re in an early stage of this research, but from a modeling, artificial intelligence and theory perspective, it’s a very exciting moment,” says Louis-Philippe Morency, associate professor in the Language Technologies Institute at Carnegie Mellon University. “Right now, we’re talking about animating virtual characters. Eventually, this link between language and gestures could be applied to robots; we might be able to simply tell a personal assistant robot what we want it to do.
“We also could eventually go the other way—using this link between language and animation so a computer could describe what is happening in a video,” he adds.
To create JL2P, LTI PhD student Chaitanya Ahuja used a curriculum-learning approach in which the model first learns short, easy sequences ("A person walks forward") and then longer, harder ones ("A person steps forward, then turns around and steps forward again," or "A person jumps over an obstacle while running").
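Curriculum learning here simply means ordering the training data from easy to hard. As a loose illustration (the paper's actual difficulty measure and training schedule may differ), one simple proxy for difficulty is the length of the motion sequence:

```python
def curriculum_stages(dataset, num_stages=3):
    """Yield training subsets easy-to-hard, using sequence length as a
    hypothetical difficulty proxy (the real criterion may differ).

    dataset: list of (sentence, pose_sequence) pairs.
    """
    # Sort by motion length: short, simple clips first.
    ordered = sorted(dataset, key=lambda pair: len(pair[1]))
    stage_size = max(1, len(ordered) // num_stages)
    for stage in range(1, num_stages + 1):
        # Each stage trains on all data seen so far, so the model
        # keeps revisiting easy examples as harder ones are added.
        yield ordered[: stage * stage_size]

# Usage sketch: train for a few epochs per stage.
# for stage_data in curriculum_stages(pairs):
#     train(model, stage_data)
```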
Verbs and adverbs describe the action and its speed or acceleration, while nouns and adjectives describe locations and directions. The ultimate goal is to animate complex sequences with multiple actions happening either simultaneously or in sequence, Ahuja says. For now, the animations are for stick figures.
Making it more complicated is the fact that lots of things are happening at the same time, even in simple sequences, Morency explains.
“Synchrony between body parts is very important,” Morency says. “Every time you move your legs, you also move your arms, your torso, and possibly your head. The body animations need to coordinate these different components, while at the same time achieving complex actions. Bringing language narrative within this complex animation environment is both challenging and exciting. This is a path toward better understanding of speech and gestures.”
Ahuja will present the work at the International Conference on 3D Vision in Quebec City, Canada.