THESIS
2017
xvi, 1, 171 pages : illustrations ; 30 cm
Abstract
The foundation of mobile robotic systems is accurate localization and dense mapping of the perceived environment, which serve as the perception input for path planning and obstacle avoidance. Much progress has been made on the problems of localization and mapping over the last 30 years. These works mainly address theoretical aspects, such as problem definition, probabilistic formulation, observability, sparsity, convergence, and consistency. The focus of my work, however, is application-oriented: I aim to bridge the gap between theory, research, and practice. Key issues in real applications include resource awareness, system robustness, and task-driven perception. In all my work, I balance system performance against available sensing and computational resources, as this is a basic requirement for online applications. To address tracking robustness, I first propose a dense visual-inertial fusion method that achieves stable performance even under aggressive motion. I then relax the photo-consistency assumption with an edge-alignment-based visual-inertial approach, and relax the rigid-baseline assumption of stereo cameras with an online markerless camera extrinsic calibration. For task-driven perception, I study the problem of real-time, large-scale, long-term mapping for autonomy, and propose two mapping approaches. One is based on volumetric maps, in which I interpret the world as connected tetrahedra whose vertices are selected from sparse features in the environment. The other is based on topological maps, whose nodes are robot poses and whose edges are the transformations between those poses. All the proposed methods are validated on various datasets and through online real-world experiments. For the benefit of the community, I release my implementations as open-source packages.
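
To make the two map representations in the abstract concrete, the sketches below illustrate them in rough form; they are assumptions for illustration only and do not reproduce the thesis implementations or its open-source packages. The first sketch shows a topological map as a pose graph: nodes hold robot poses and edges hold relative transformations between them (all class, field, and function names are hypothetical).

```python
# Minimal pose-graph sketch of a topological map: nodes are robot poses,
# edges store the relative transformation between two poses.
# All names and types here are hypothetical, not taken from the thesis code.
import math
from dataclasses import dataclass, field

@dataclass
class Pose2D:
    x: float      # position (m)
    y: float
    theta: float  # heading (rad)

@dataclass
class PoseGraph:
    nodes: dict = field(default_factory=dict)   # node id -> Pose2D
    edges: list = field(default_factory=list)   # (i, j, relative Pose2D)

    def add_node(self, nid, pose):
        self.nodes[nid] = pose

    def add_edge(self, i, j, rel):
        # Edge i -> j carrying the pose of j expressed in the frame of node i.
        self.edges.append((i, j, rel))

    def compose(self, i, rel):
        # Predict a neighbour's pose by composing node i's pose with an edge transform.
        p = self.nodes[i]
        c, s = math.cos(p.theta), math.sin(p.theta)
        return Pose2D(p.x + c * rel.x - s * rel.y,
                      p.y + s * rel.x + c * rel.y,
                      p.theta + rel.theta)

# Usage: two poses linked by an odometry-style edge.
g = PoseGraph()
g.add_node(0, Pose2D(0.0, 0.0, 0.0))
rel = Pose2D(1.0, 0.0, math.pi / 2)
g.add_edge(0, 1, rel)
g.add_node(1, g.compose(0, rel))
```

The second sketch stands in for the volumetric representation: sparse 3D feature points are connected into tetrahedra, here via SciPy's Delaunay tetrahedralization, which is only a stand-in for whatever construction the thesis actually uses.

```python
# Connect sparse 3D feature points into tetrahedra (a Delaunay
# tetrahedralization here, purely for illustration).
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sparse feature positions (N x 3).
features = np.random.default_rng(0).uniform(0.0, 5.0, size=(50, 3))
tetra = Delaunay(features)

print(tetra.simplices.shape)                   # (num_tetrahedra, 4) vertex indices
print(tetra.find_simplex([[2.5, 2.5, 2.5]]))   # tetrahedron containing a query point
```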