THESIS
2018
xi, 62 pages : illustrations ; 30 cm
Abstract
Simultaneous localization and mapping (SLAM), a fundamental technology in areas such as
robotics, autonomous driving, and augmented reality (AR), has been investigated over the
past decades, yet it remains challenging in terms of robustness. While the recent trend of
fusing visual and inertial information via nonlinear optimization has demonstrated impressive
performance, monocular optimization-based Visual-Inertial Navigation Systems (VINS) still
suffer from failure cases, especially with consumer-grade sensors, as well as from high
computational complexity.
In this thesis, we start by implementing a monocular VINS based on the Multi-State
Constraint Kalman Filter (MSCKF), followed by various extensions covering extrinsic
calibration, observability constraints, handling of degraded motion, etc., leading to a more
robust and practical solution. We further extend the monocular MSCKF-based VINS to
stereo and RGB-D cameras, which improves robustness through multi-sensor fusion.
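The MSCKF keeps a sliding window of cloned camera poses alongside the IMU state and
marginalizes feature positions out of every update. The following minimal Python sketch
illustrates that bookkeeping using the standard literature formulation only; the state
ordering (orientation and position assumed to occupy the first six IMU error components),
the dimensions, and names such as `augment_camera_state` and `msckf_update` are
illustrative assumptions and are not taken from the thesis itself.

```python
# Illustrative MSCKF sliding-window bookkeeping: covariance augmentation when a
# camera pose is cloned, and a generic EKF update with a stacked feature Jacobian.
# This is a simplified sketch, not the thesis's implementation.
import numpy as np

IMU_ERR_DIM = 15   # IMU error state: orientation, position, velocity, gyro/accel biases
CAM_ERR_DIM = 6    # per-clone error state: orientation, position
# Assumption for this sketch: orientation and position are the first 6 IMU components.


class MsckfState:
    def __init__(self):
        self.P = np.eye(IMU_ERR_DIM) * 1e-3  # error-state covariance (IMU block only)
        self.num_clones = 0

    def augment_camera_state(self):
        """Clone the current camera pose into the state when an image arrives.

        The clone's error is a linear function of the IMU error, so the covariance
        grows by appending J P and J P J^T blocks, with J the copy Jacobian.
        """
        n = IMU_ERR_DIM + CAM_ERR_DIM * self.num_clones
        J = np.zeros((CAM_ERR_DIM, n))
        J[:, :CAM_ERR_DIM] = np.eye(CAM_ERR_DIM)  # clone copies orientation/position error
        P_new = np.zeros((n + CAM_ERR_DIM, n + CAM_ERR_DIM))
        P_new[:n, :n] = self.P
        P_new[n:, :n] = J @ self.P
        P_new[:n, n:] = self.P @ J.T
        P_new[n:, n:] = J @ self.P @ J.T
        self.P = P_new
        self.num_clones += 1

    def msckf_update(self, H, r, R):
        """Standard EKF update; H is assumed already projected onto the left null
        space of the feature-position Jacobian, the step that lets the MSCKF avoid
        keeping feature positions in the state. Returns the error-state correction."""
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.P = (np.eye(self.P.shape[0]) - K @ H) @ self.P
        return K @ r


if __name__ == "__main__":
    state = MsckfState()
    for _ in range(3):
        state.augment_camera_state()
    print(state.P.shape)  # (33, 33): 15 IMU + 3 clones x 6

    # Toy measurement constraining only the newest clone.
    H = np.zeros((2, state.P.shape[0]))
    H[:, -CAM_ERR_DIM:] = np.random.randn(2, CAM_ERR_DIM)
    dx = state.msckf_update(H, r=np.array([0.01, -0.02]), R=0.1 * np.eye(2))
    print(dx.shape)  # (33,) correction applied to the nominal state
```

The key design point this sketch highlights is that the state grows only with cloned poses,
never with landmark positions, which is why the filter's cost stays low compared with
optimization-based VINS.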
Extensive experiments show that the MSCKF-based VINS achieves performance and robustness
competitive with state-of-the-art optimization-based VINS, while maintaining much lower
computational complexity. We conclude by discussing the potential of the proposed system to
fuse additional sensor sources such as laser, wheel odometry, and sonar, and to incorporate
functional modules such as pose graph optimization, loop closure, and an ultra-robust visual
front end built on a deep learning framework.