THESIS
2014
xvii, 125 pages : illustrations ; 30 cm
Abstract
The past few years have witnessed rapidly growing interest in combining visual and inertial sensors. This popularity stems not only from the strong complementarity between inertial and visual sensors in fundamental motion state estimation, but also from their advantageous characteristics: low cost, low power consumption, light weight, and small size. Together, these features make the combination well suited to low-cost, compact, portable, and intelligent robot platforms. Micro drones in particular are successful representatives of such platforms, taking full advantage of the visual-inertial sensing system on board.
In this thesis, we contribute both a theoretical analysis and a practical implementation of an inertial-assisted visual sensing system. At the algorithm level, unlike the well-studied visual-inertial sensor fusion, we use inertial sensor readings to simplify the fundamental motion estimation and outlier rejection procedures. At the system level, we develop an embedded solution for real-time localization and mapping on consumer drones, built on these fundamental algorithms. More specifically, the proposed combined inertial-visual system significantly improves the computational efficiency of fundamental pixel-level algorithms, including undistortion, stereo rectification, feature detection, and feature description. In addition, based on our 2-point motion estimation algorithm, which simultaneously estimates translation and yaw given 2D-3D feature correspondences together with reliable pitch and roll from an inertial sensing suite, we design a simple inlier selection scheme that picks the longest sequence of successive correspondences consistent with a common motion.
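The abstract only outlines these two ideas. The NumPy sketch below illustrates one way such a scheme could look, under assumptions that are not spelled out here: the camera model is taken as lambda * x_i = R_rp * Rz(yaw) * X_i + t, with R_rp the known roll/pitch rotation reported by the IMU and x_i a normalized bearing vector; the function names (solve_yaw_translation, longest_consistent_run) and the reprojection threshold are hypothetical and not taken from the thesis.

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_yaw_translation(bearings, points, R_rp):
    """Estimate yaw and translation from >= 2 correspondences of 3D world
    points with camera bearing vectors, given the roll/pitch rotation R_rp
    from the IMU.  Assumed model: lambda * x_i = R_rp @ Rz(yaw) @ X_i + t.
    Returns a list of (yaw, t) candidates."""
    A_blocks, b_blocks = [], []
    for x, X in zip(bearings, points):
        y = R_rp.T @ x                      # de-rotate bearing by known roll/pitch
        S = skew(y)
        B = np.array([[X[0], -X[1]],        # Rz(yaw) @ X depends linearly on
                      [X[1],  X[0]],        # (cos yaw, sin yaw) ...
                      [0.0,   0.0]])
        A_blocks.append(np.hstack([S @ B, S]))            # unknowns: c, s, t'
        b_blocks.append(-S @ np.array([0.0, 0.0, X[2]]))  # ... plus a constant part
    A = np.vstack(A_blocks)
    b = np.concatenate(b_blocks)

    # With exactly two correspondences, A has rank 4 but 5 unknowns, so the
    # solutions form a line u = u0 + lam * n; the constraint c^2 + s^2 = 1
    # then yields (up to) two candidates -- hence a "2-point" solver.
    u0 = np.linalg.lstsq(A, b, rcond=None)[0]
    _, sv, Vt = np.linalg.svd(A)
    n = Vt[-1] if sv[-1] < 1e-9 * sv[0] else np.zeros(5)

    a2 = n[0] ** 2 + n[1] ** 2
    a1 = 2.0 * (u0[0] * n[0] + u0[1] * n[1])
    a0 = u0[0] ** 2 + u0[1] ** 2 - 1.0
    if a2 < 1e-12:                          # over-determined case: no free direction
        lams = [0.0]
    else:
        disc = a1 * a1 - 4.0 * a2 * a0
        if disc < 0.0:
            return []
        lams = [(-a1 + s_ * np.sqrt(disc)) / (2.0 * a2) for s_ in (1.0, -1.0)]

    candidates = []
    for lam in lams:
        c, s, *t_prime = u0 + lam * n
        yaw = np.arctan2(s, c)
        t = R_rp @ np.asarray(t_prime)      # back to the original camera frame
        candidates.append((yaw, t))
    return candidates                       # keep the candidate with positive depths

def longest_consistent_run(bearings, points, yaw, t, R_rp, thresh=0.01):
    """Index range (start, end) of the longest run of successive correspondences
    whose reprojection under (yaw, t, R_rp) stays within `thresh`."""
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
    best, start = (0, -1), None
    for i, (x, X) in enumerate(zip(bearings, points)):
        p = R_rp @ (Rz @ X) + t                          # predicted camera-frame point
        ok = p[2] > 0 and np.linalg.norm(p[:2] / p[2] - x[:2] / x[2]) < thresh
        if ok and start is None:
            start = i
        if start is not None and (not ok or i == len(points) - 1):
            end = i if ok else i - 1
            if end - start > best[1] - best[0]:
                best = (start, end)
            start = None
    return best
```

Parametrizing yaw by (cos yaw, sin yaw) keeps the system linear in the unknowns; with exactly two correspondences the stacked system is rank-deficient by one, and the unit-circle constraint supplies the missing degree of freedom, which is why two points suffice once pitch and roll are known.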
Finally, several experiments were conducted to demonstrate the feasibility and robustness of our system in complex indoor and outdoor environments.