UW Roboticists Learn to Teach Robots from Babies
December 2, 2015 | University of Washington | Estimated reading time: 5 minutes
“Babies engage in what looks like mindless play, but this enables future learning. It’s a baby’s secret sauce for innovation,” Meltzoff said. “If they’re trying to figure out how to work a new toy, they’re actually using knowledge they gained by playing with other toys. During play they’re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else’s intentions.”
Rao’s team used that infant research to develop machine learning algorithms that let a robot explore how its own actions produce different outcomes. The robot then uses that learned probabilistic model to infer what a human wants it to do, to complete the task, and even to “ask” for help if it is not confident it can succeed.
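In rough terms, that pipeline can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the UW team’s code: the robot tallies which outcome each exploratory action produced, treats the tallies as P(outcome | action), picks the action most likely to achieve what the human wants, and “asks” for help when its best option falls below a confidence threshold (0.7 here, chosen arbitrarily).

```python
# Illustrative sketch only: a robot tallies which outcome each action
# produced during self-exploration and turns the counts into
# P(outcome | action). The actions, outcomes, and 0.7 threshold are
# invented for this example, not taken from the UW system.
from collections import Counter, defaultdict

class ActionOutcomeModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # action -> Counter of outcomes

    def record(self, action, outcome):
        """Log one exploration trial: this action produced this outcome."""
        self.counts[action][outcome] += 1

    def prob(self, action, outcome):
        """Estimate P(outcome | action) from the exploration counts."""
        total = sum(self.counts[action].values())
        return self.counts[action][outcome] / total if total else 0.0

    def act_or_ask(self, desired_outcome, threshold=0.7):
        """Do the task, or 'ask' for help when the model is too uncertain."""
        action = max(self.counts, key=lambda a: self.prob(a, desired_outcome))
        confidence = self.prob(action, desired_outcome)
        if confidence < threshold:
            return f"ask for help ('{action}' is only {confidence:.0%} likely to work)"
        return f"do '{action}' ({confidence:.0%} likely to yield '{desired_outcome}')"

# Self-exploration phase: try actions, observe what happens.
model = ActionOutcomeModel()
for action, outcome in [("push", "slides"), ("push", "slides"), ("push", "topples"),
                        ("grasp", "lifted"), ("grasp", "lifted"),
                        ("grasp", "lifted"), ("grasp", "dropped")]:
    model.record(action, outcome)

print(model.act_or_ask("lifted"))   # confident enough: acts on its own
print(model.act_or_ask("topples"))  # low confidence: asks for help
```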
The team tested its robotic model in two different scenarios: a computer simulation experiment in which a robot learns to follow a human’s gaze, and another experiment in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.
This robot used the new UW model to imitate a human moving toy food objects around a tabletop. By learning which actions worked best with its own geometry, the robot could use different means to achieve the same goal — a key to enabling robots to learn through imitation. (Image: University of Washington)
In the gaze experiment, the robot learns a model of its own head movements and assumes that the human’s head is governed by the same rules. The robot tracks the beginning and ending points of a human’s head movements as the human looks across the room and uses that information to figure out where the person is looking. The robot then uses its learned model of head movements to fixate on the same location as the human.
The team also recreated one of Meltzoff’s tests that showed infants who had experience with visual barriers and blindfolds weren’t interested in looking where a blindfolded adult was looking, because they understood the person couldn’t actually see. Once the team enabled the robot to “learn” what the consequences of being blindfolded were, it no longer followed the human’s head movement to look at the same spot.
“Babies use their own self-experience to interpret the behavior of others — and so did our robot,” said Meltzoff.
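Both gaze behaviors fit a similarly small sketch. Again, every name and number below is an assumption made for illustration, not the published model: the robot sweeps its own head to learn which head angle lands its gaze where, reuses that self-model to infer a human’s gaze target from the human’s head angle, and, once it has learned what a blindfold does, declines to follow a blindfolded head.

```python
# Illustrative sketch only: the robot learns a mapping from its own head
# angle to where its gaze lands, assumes the human's head obeys the same
# rule, and reuses the mapping to find the human's fixation point. The
# one-dimensional setup and the blindfold flag are invented for this example.
import math

class GazeFollower:
    def __init__(self):
        self.samples = []            # (own head angle, observed gaze x)
        self.knows_blindfolds = False

    def explore_own_head(self, angle, gaze_x):
        """Self-experience: turn the head, record where the eyes land."""
        self.samples.append((angle, gaze_x))

    def learn_blindfold_consequence(self):
        """Self-experience with barriers: blindfold means no sight."""
        self.knows_blindfolds = True

    def follow(self, human_head_angle, blindfolded=False):
        """Project the self-model onto the human to infer their gaze."""
        if blindfolded and self.knows_blindfolds:
            return None  # the person can't see, so there is nothing to follow
        # Nearest-neighbor lookup in the robot's own exploration data.
        return min(self.samples, key=lambda s: abs(s[0] - human_head_angle))[1]

robot = GazeFollower()
for angle in range(-60, 61, 10):     # exploration: sweep the head
    robot.explore_own_head(angle, math.tan(math.radians(angle)))

print(robot.follow(30))                    # fixates where the human looks
robot.learn_blindfold_consequence()
print(robot.follow(30, blindfolded=True))  # None: no longer follows
```

The detail doing the work here is that nothing about the human is modeled directly: the robot projects its own learned experience onto the person it is watching, which is the self-to-other mapping Meltzoff describes in infants.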
In the second experiment, the team let a robot experiment with pushing or picking up different objects and moving them around a tabletop. Using the model it learned from that exploration, the robot then imitated a human who moved objects around or cleared everything off the tabletop. Rather than rigidly mimicking the human’s action each time, the robot sometimes used different means to achieve the same end.
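That ends-over-means behavior is the crux, and a hedged sketch makes it concrete. The zone names, action names, and success rates below are invented stand-ins for whatever the robot learned while experimenting with its own body: the robot summarizes the human’s demonstration by its end state, then chooses whichever of its own actions most reliably reproduces that state.

```python
# Illustrative sketch only: goal-based imitation. The robot summarizes the
# human's demonstration by its end state and then picks whichever of its
# own actions most reliably reproduces that state. The zones, actions, and
# success rates are invented stand-ins for what a robot might learn while
# experimenting with its own geometry.

# Learned from self-experimentation: P(object ends in zone | action),
# which differs by zone because of the robot's reach and gripper.
success = {
    "near_zone": {"push": 0.90, "pick_and_place": 0.80},
    "far_zone":  {"push": 0.20, "pick_and_place": 0.85},
}

def infer_goal(human_demo):
    """Goal inference at its simplest: where did the object end up?"""
    return human_demo["end_zone"]

def imitate(human_demo):
    """Reproduce the inferred end state by the robot's own best means."""
    zone = infer_goal(human_demo)
    action = max(success[zone], key=success[zone].get)
    return f"goal: object in {zone} -> robot chooses '{action}'"

# The human carried the object both times; the robot may push instead.
print(imitate({"action": "carry", "end_zone": "near_zone"}))  # push
print(imitate({"action": "carry", "end_zone": "far_zone"}))   # pick_and_place
```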