THESIS
2016
xvi, 101 pages : illustrations ; 30 cm
Abstract
Sensorimotor integration is the coupling between the sensory and motor systems. In
robotics, it traditionally relies upon manual calibration. Research in neuroscience and psychology
suggests that sensorimotor integration is mediated by the central nervous system and develops
starting in infancy. Inspired by these findings, this thesis investigates a developmental approach
to robotic sensorimotor integration. Specifically, we propose algorithms that enable a robot to
learn visuomotor coordination during the action-perception cycle.
First, we studied the problem of learning robot self-identification. A set of local linear
predictors is trained to predict visual motion features from the robot's motor commands. By
measuring the predictability of these predictors, the robot is able to visually identify its own
body in a cluttered environment with motion noise.
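As a rough illustration of the idea (a minimal sketch, not the thesis implementation; the online SGD update, the error threshold, and the running-average form are assumptions), each image region could hold a small linear model that predicts local visual motion from the motor command, with the running prediction error serving as the predictability score:

    import numpy as np

    class LocalLinearPredictor:
        """Linear model predicting a local visual motion feature
        from a motor command, trained online by plain SGD."""

        def __init__(self, motor_dim, feature_dim, lr=0.01):
            self.W = np.zeros((feature_dim, motor_dim))  # linear map, learned online
            self.lr = lr
            self.err_avg = 1.0  # running average of prediction error

        def update(self, motor_cmd, observed_motion):
            pred = self.W @ motor_cmd
            err = observed_motion - pred
            self.W += self.lr * np.outer(err, motor_cmd)  # SGD step on squared error
            self.err_avg = 0.9 * self.err_avg + 0.1 * np.linalg.norm(err)
            return self.err_avg

    # Regions whose motion is well predicted by the motor command are
    # labeled "self"; clutter and motion noise remain unpredictable.
    def self_mask(predictors, threshold=0.1):
        return np.array([p.err_avg < threshold for p in predictors])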
Second, we investigated the problem of learning visuomotor transformations. We first
proposed an efficient neural population coding algorithm to improve the performance of robot
head-to-arm coordinate transformation learning. We then extended the point-marker-based
end-effector representation to a planar-region-based representation, where the visuomotor
transformation is learned by maximizing spatiotemporally local image consistency. Experiments
demonstrate that the shape of the region can adapt to the actual shape of the robot arm's end
effector.
Finally, we integrated robot self-body identification, structure inference, and kinematic
model learning into a unified framework. Raw kinematic models are represented by a set of rigid
transformations and are learned by a stochastic-gradient-descent-based iterative closest point
algorithm using point cloud data collected by depth sensors. The structure is segmented
according to a minimum-matching-error criterion with spatial continuity constraints. The points
corresponding to the robot body and the relationships between the joints can be inferred from
the learned transformations. Our experiments show that the learned kinematic model can be
used to guide the robot arm to reach a selected target.
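A single update step of an SGD-based iterative closest point fit, sketched under simplifying assumptions (a 2D rotation plus translation rather than a full 3D rigid transform; nearest-neighbor correspondences recomputed per minibatch; the batch size and learning rate are illustrative):

    import numpy as np
    from scipy.spatial import cKDTree

    def sgd_icp_step(theta, t, source, target, target_tree, batch=64, lr=1e-3):
        """One minibatch update of a rigid 2D transform by SGD on the
        point-to-point ICP objective: minimize ||R(theta) p + t - q||^2."""
        idx = np.random.choice(len(source), batch, replace=False)
        P = source[idx]
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        Q = target[target_tree.query(P @ R.T + t)[1]]  # nearest-neighbor matches
        r = P @ R.T + t - Q                            # per-point residuals
        dR = np.array([[-s, -c], [c, -s]])             # dR/dtheta
        grad_theta = 2 * np.sum(r * (P @ dR.T)) / batch
        grad_t = 2 * r.mean(axis=0)
        return theta - lr * grad_theta, t - lr * grad_t

    # target_tree = cKDTree(target); iterate sgd_icp_step over depth-sensor frames.

In the full framework, one such transformation would be fit per rigid segment, and the minimum-matching-error segmentation would assign each point to the transformation that predicts its motion best, subject to spatial continuity.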