Developmental Robotics: From Babies to Robots (Intelligent Robotics and Autonomous Agents series) by Angelo Cangelosi & Matthew Schlesinger & Linda B. Smith



Figure 6.4

Experimental setup for the Nagai, Hosoda, and Asada (2003) experiment. Figure courtesy of Yukie Nagai.

The experimental setup consists of a robot head with two cameras, which rotate on the pan and tilt axes, and a human caregiver with various salient objects (figure 6.4). In each trial the objects are randomly placed and the caregiver gazes at one of them, changing the gazed-at object from trial to trial. The robot first has to look at the caregiver by detecting her face through template matching and extracting a face image. It then locates the salient, bright-colored objects by thresholding in color space.
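As a concrete illustration of these two perceptual steps, the sketch below uses OpenCV's template matching for the face and HSV color thresholding for the objects. It is a minimal sketch, assuming OpenCV is available; the face template, color bounds, and minimum blob area are illustrative choices, not parameters from the original study.

    import cv2
    import numpy as np

    def find_caregiver_face(frame_gray, face_template):
        # Normalized template matching; the best-scoring location is the face.
        scores = cv2.matchTemplate(frame_gray, face_template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, top_left = cv2.minMaxLoc(scores)
        h, w = face_template.shape
        return top_left, (w, h), best_score

    def find_salient_objects(frame_bgr, lower_hsv, upper_hsv, min_area=200):
        # Bright-colored objects are segmented by thresholding in HSV space.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centers  # image-plane centroids of candidate objects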

Through the cognitive architecture shown in figure 6.5, the robot uses its camera image and the angle of the camera position as inputs to produce as output a motor command that rotates the camera eyes. The architecture includes a visual attention module, which uses the salient feature detectors (color, edge, motion, and face detectors) and a visual feedback controller to move the head toward the salient object in the robot's view. The self-evaluation module contains a learning module based on a feedforward neural network, and an internal evaluator. The internal evaluator gauges the success of the gaze behavior (i.e., whether there is an object at the center of the image, regardless of the success or failure of joint attention), while the neural network learns the sensorimotor mapping from the face image and the current head position to the desired motor rotation signal. The gate module selects between the outputs of the visual feedback controller and the learning module. It uses a selection rate designed to choose mainly the attention module's output at the beginning of training and then, as learning advances, to gradually favor the learning module's output. The selection rate follows a sigmoid function, modeling the nonlinear developmental transition from bottom-up visual attention to top-down learned behavior.
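The self-evaluation module can be sketched as a small feedforward network trained online, together with an internal evaluator that simply checks whether any detected object sits near the image center. In this minimal Python sketch the layer sizes, learning rate, and centering tolerance are illustrative assumptions, not values from the original model.

    import numpy as np

    class SensorimotorNet:
        # Maps (face-image features, current head angles) to a gaze motor command.
        def __init__(self, n_in, n_hidden, n_out, lr=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
            self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
            self.lr = lr

        def forward(self, x):
            self.h = np.tanh(self.W1 @ x)
            return self.W2 @ self.h

        def train_step(self, x, target):
            # One online backpropagation step on squared error.
            err = self.forward(x) - target
            grad_W2 = np.outer(err, self.h)
            grad_h = (self.W2.T @ err) * (1.0 - self.h ** 2)
            grad_W1 = np.outer(grad_h, x)
            self.W2 -= self.lr * grad_W2
            self.W1 -= self.lr * grad_W1
            return float(0.5 * err @ err)

    def internal_evaluator(object_centers, image_size, tol=0.1):
        # Success if any object lies near the image center, regardless of
        # whether it is the one the caregiver is actually looking at.
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        return any(abs(x - cx) < tol * image_size[0] and
                   abs(y - cy) < tol * image_size[1]
                   for x, y in object_centers)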
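Likewise, the gate module can be modeled as a stochastic switch whose probability of choosing the learned command follows a sigmoid over training time; the midpoint and slope of the schedule below are illustrative assumptions.

    import numpy as np

    def selection_rate(step, midpoint=5000.0, slope=0.001):
        # Sigmoid schedule: near 0 early in training, near 1 late in training.
        return 1.0 / (1.0 + np.exp(-slope * (step - midpoint)))

    def gate(step, attention_cmd, learned_cmd, rng):
        # Choose the learning module's output with probability selection_rate(step);
        # otherwise fall back on the bottom-up visual feedback controller.
        if rng.random() < selection_rate(step):
            return learned_cmd
        return attention_cmd

Early in training the rate is close to zero, so gaze is driven almost entirely by bottom-up saliency; as the network's predictions improve, the rate approaches one and the learned, top-down behavior takes over, which is the nonlinear developmental trajectory the sigmoid is meant to capture.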


