THESIS
2020
1 online resource (xii, 105 pages) : illustrations (chiefly color)
Abstract
Localization and mapping play important roles for mobile robots. In an initially
unknown environment, simultaneous localization and mapping (SLAM) tracks the poses
of the robot while constructing the surrounding structure. When a pre-built
map is available, localization can be enhanced by the prior information from the
map, avoiding estimation degeneracy. Based on these observations,
in this thesis I present a novel lidar mapping method aided by heterogeneous sensors,
followed by a monocular camera localization method utilizing the geometric information
from a surfel map, and a relocalization method that estimates the initial camera pose in
the surfel map.
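As a minimal illustration of why a map prior helps, consider a scalar example (my own sketch, not from the thesis): fusing an uncertain odometry estimate with an absolute observation against a pre-built map always tightens the estimate, which is the effect exploited when prior map information prevents estimation degeneracy.

```python
# Illustrative sketch (assumed values, not from the thesis): a single
# Kalman-style scalar update, fusing an odometry-only pose estimate with
# an absolute measurement derived from a prior map.

def fuse(estimate, var, measurement, meas_var):
    """Inverse-variance weighted fusion of two scalar estimates."""
    k = var / (var + meas_var)          # gain: trust the measurement more when it is precise
    fused = estimate + k * (measurement - estimate)
    fused_var = (1.0 - k) * var         # fused variance is always smaller than the prior variance
    return fused, fused_var

# Odometry-only 1-D position estimate: drifting, high variance.
x, p = 10.0, 4.0
# Absolute observation against the pre-built map: low variance.
x_new, p_new = fuse(x, p, 10.8, 1.0)
assert p_new < p  # the map prior tightens the estimate
```

The same inverse-variance weighting underlies the higher-dimensional estimators used in practice; the map observation constrains the otherwise unbounded drift of pure odometry.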
In the first part, a lidar-IMU fusion algorithm is proposed, comprising an
optimization-based, tightly coupled lidar-IMU odometry and a rotationally constrained
mapping method. The algorithm combines the advantages of the two sensors to achieve
robust performance even in challenging environments. In the second part, I consider
monocular camera localization in a pre-built lidar surfel map. The geometric information
from the map is leveraged to formulate global homography constraints in a direct
photometric fashion. These global constraints make the monocular camera system aware
of the absolute scale and its global pose in the map. Lastly, I present a visual
relocalization approach connecting the lidar-based mapping and camera-based localization
above. The geometric information from the lidar surfel map is used to build a learned,
descriptor-based visual database that provides metric relocalization estimates consistent
with the 3D environment. Overall, this thesis presents a complete pipeline, from efficient
sensor-fusion-based lidar mapping to cross-modality visual localization with map reuse.
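The descriptor-based visual database can be pictured as a nearest-neighbor lookup over image descriptors, each paired with a pose in the map frame. The descriptors, poses, and cosine-similarity retrieval below are illustrative assumptions for the sketch, not the learned pipeline described in the thesis:

```python
# Hedged sketch: relocalization as nearest-neighbor retrieval over a
# database of (descriptor, pose) pairs. Descriptors and poses are made up
# for illustration; a real system would use learned descriptors and poses
# anchored in the lidar surfel map.

import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical database: descriptor -> pose (x, y, yaw) in the map frame.
database = [
    ([1.0, 0.0, 0.0], (0.0, 0.0, 0.0)),
    ([0.0, 1.0, 0.0], (5.0, 2.0, 1.57)),
    ([0.0, 0.0, 1.0], (9.0, -1.0, 3.14)),
]

def relocalize(query):
    """Return the pose of the database entry most similar to the query descriptor."""
    return max(database, key=lambda entry: cosine(query, entry[0]))[1]

print(relocalize([0.1, 0.9, 0.05]))  # → (5.0, 2.0, 1.57)
```

Because each database entry is tied to a metric pose in the map frame, the retrieved result is directly usable as an initial camera pose, which is what makes the relocalization estimate consistent with the 3D environment.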