THESIS
2021
1 online resource (xiv, 101 pages) : illustrations (some color), color maps
Abstract
Accurate localization is one of the most fundamental capabilities of fully autonomous robots, including unmanned aerial vehicles (UAVs) [90, 46, 22], unmanned ground vehicles (UGVs) [121, 116, 98], and unmanned surface vehicles (USVs) [122, 109, 10]. Although numerous approaches achieve attractive performance on Simultaneous Localization and Mapping (SLAM) tasks in static and indoor scenarios, performing these tasks robustly in large-scale, appearance-changing environments remains challenging. For example, practical applications of mobile robots usually suffer from ineffective observations due to appearance changes, sensor limitations, insufficient computational resources in large-scale environments, and accumulated drift after long-term operation. To address these challenges, multi-sensor systems with sensor fusion can provide denser, higher-frequency, and higher-dimensional measurements. Cameras, light detection and ranging (LiDAR) sensors, inertial measurement units (IMUs), and wheel encoders are common sensors for autonomous systems, especially for UGVs.
In this thesis, I propose sensor fusion-based state estimators and localization systems, targeting large-scale outdoor environments in particular. The thesis is divided into a visual-based chapter and a LiDAR-based chapter.

In the visual-based localization chapter, an omnidirectional visual-inertial state estimator is first proposed. It adopts panoramic images and inertial measurements to achieve not only robust robot pose estimation but also online calibration of the multiple sensors, the robot velocity, and the sensor biases. I then propose a complete visual localization system for a large-scale outdoor port scene. It combines learning-based semantic segmentation results with a prior map to achieve robust, high-accuracy localization, and I utilize the proposed visual state estimator to compensate for drift in the wheel odometry.

In the multi-LiDAR localization chapter, I start with an automatic multi-LiDAR calibration method. Motion-based and appearance-based calibration are combined to calibrate the sensors without any extra sensors, calibration targets, or prior knowledge of the surroundings. Building on this, I introduce a LiDAR-based localization approach that requires no prebuilt 3D map and is therefore well suited to challenging port scenes. Within this approach, I also introduce a LiDAR-wheel-encoder odometry based on a four-wheel steering model. Through a series of experiments in both simulated and real-world large-scale challenging environments, I show that the proposed approaches achieve robust and accurate performance in indoor and outdoor mobile robot localization scenarios.
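As a rough illustration of the four-wheel steering model mentioned for the LiDAR-wheel-encoder odometry, the sketch below dead-reckons a planar pose from encoder speed and front/rear steering angles using a generic four-wheel-steering (4WS) kinematic bicycle model. The axle offsets `L_F` and `L_R`, the function name, and all numeric values are illustrative assumptions, not the thesis's actual formulation.

```python
import math

# Hypothetical parameters: distances from the vehicle reference point to the
# front and rear axles (meters); values are illustrative only.
L_F, L_R = 1.2, 1.2

def fws_odometry_step(x, y, yaw, v, delta_f, delta_r, dt):
    """One dead-reckoning step of a 4WS kinematic bicycle model.

    x, y, yaw : current planar pose (m, m, rad)
    v         : body speed from the wheel encoders (m/s)
    delta_f   : front steering angle (rad)
    delta_r   : rear steering angle (rad)
    dt        : encoder sampling period (s)
    """
    # Sideslip angle of the velocity vector induced by steering both axles.
    beta = math.atan((L_F * math.tan(delta_r) + L_R * math.tan(delta_f))
                     / (L_F + L_R))
    # Yaw rate driven by the difference between front and rear steering.
    yaw_rate = (v * math.cos(beta)
                * (math.tan(delta_f) - math.tan(delta_r)) / (L_F + L_R))
    x += v * math.cos(yaw + beta) * dt
    y += v * math.sin(yaw + beta) * dt
    yaw += yaw_rate * dt
    return x, y, yaw

# Example: drive 1 s at 2 m/s with counter-phase front/rear steering
# (a tight 4WS turn), integrating encoder readings at 100 Hz.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = fws_odometry_step(*pose, v=2.0, delta_f=0.1, delta_r=-0.1, dt=0.01)
print(pose)
```

Note that with `delta_r = -delta_f` the yaw rate is roughly double that of front-only steering, which is why four-wheel steering suits port vehicles that must turn tightly; in a pipeline like the one the thesis describes, a kinematic prediction of this kind can serve as the motion prior that LiDAR scan matching then refines.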