THESIS
2019
xiii, 94 pages : illustrations ; 30 cm
Abstract
3D motion correlation analysis can be used for state estimation in robotics. Owing to a robust 3D correlation quantification and analysis mechanism, the measurement of 3D motion correlation is normalized, scale-invariant, and transformation-invariant. Widely used optimization-based solutions to state estimation rely on initial state values for accurate convergence. In contrast, our correlation analysis-based solution is particularly suitable for state estimation cases where no initial guess is available. Two typical applications of 3D motion correlation analysis are demonstrated in this thesis: 1) tracking the 3D motion of dynamic objects with monocular visual-inertial sensing by minimizing motion correlation; 2) online temporal and spatial calibration of heterogeneous sensors by maximizing motion correlation.
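To make the invariance claims concrete, the sketch below shows one plausible way such a measure can be realized: scoring two 3D motion streams by the Pearson correlation of their angular-velocity magnitudes, since |omega| is unchanged by rigid-body rotation of the sensor frame and by trajectory scale. This is an illustrative assumption, not the thesis' actual quantification mechanism, and the name motion_correlation is hypothetical.

    import numpy as np

    def motion_correlation(omega_a, omega_b):
        # omega_a, omega_b: (N, 3) angular-velocity samples of two motion
        # streams on a common time grid. |omega| is invariant to rigid
        # rotation and to trajectory scale, so the Pearson score is
        # normalized to [-1, 1], scale-invariant, and
        # transformation-invariant.
        a = np.linalg.norm(omega_a, axis=1)
        b = np.linalg.norm(omega_b, axis=1)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

Because the score is already normalized, it can be minimized or maximized directly, without the initial guesses that optimization-based estimators require.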
On the one hand, we propose a method for 3D motion tracking of dynamic objects based on monocular visual-inertial sensing by minimizing motion correlation. Six-degree-of-freedom (6-DoF) visual tracking of dynamic objects is fundamental to a large variety of robotics and augmented reality (AR) applications. A key to this problem is accurate distance measurement of dynamic objects, which is usually obtained via stereo cameras, RGB-D sensors, or LiDARs. We address the problem using only a monocular camera rigidly mounted with a low-cost inertial measurement unit (IMU). This is a lightweight, small-size, and low-cost solution, particularly suitable for tracking dynamic objects from drones or mobile phones. Starting from a generic image-based 2D tracker, we propose a novel method that resolves the object scale ambiguity of monocular vision in a geometric manner by minimizing motion correlation. This enables accurate metric 3D tracking of arbitrary objects without requiring any prior knowledge of the object's shape or size. We establish the method's applicability by analyzing the observability conditions and degenerate cases of object scale recovery. Simulation and real-world experimental results with ground-truth comparison, along with AR application examples, demonstrate the feasibility of the proposed 6-DoF tracking method.
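The sketch below follows one plausible reading of this scale-recovery idea: at the wrong object scale, camera ego-motion leaks into the reconstructed object trajectory, so the scale is chosen where the correlation between object and camera velocities is smallest. The single-scale trajectory model, the grid search, and names such as recover_object_scale are illustrative assumptions, not the thesis' exact formulation.

    import numpy as np

    def correlation(x, y):
        # Normalized correlation of two (N, 3) velocity streams, in [-1, 1].
        x = x - x.mean(axis=0)
        y = y - y.mean(axis=0)
        return float(np.sum(x * y) /
                     (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    def recover_object_scale(obj_pos_cam, cam_pos_w, cam_rot_w,
                             scales=np.linspace(0.05, 20.0, 400)):
        # obj_pos_cam: (N, 3) object positions in the camera frame from a
        #   2D tracker plus intrinsics, correct only up to one unknown
        #   scale s (hypothetical input model).
        # cam_pos_w:   (N, 3) metric camera positions from visual-inertial
        #   odometry; cam_rot_w: (N, 3, 3) camera-to-world rotations.
        cam_vel = np.diff(cam_pos_w, axis=0)
        best_s, best_c = scales[0], np.inf
        for s in scales:
            # Candidate world trajectory of the object at scale s.
            obj_w = cam_pos_w + s * np.einsum('nij,nj->ni',
                                              cam_rot_w, obj_pos_cam)
            c = abs(correlation(np.diff(obj_w, axis=0), cam_vel))
            if c < best_c:
                best_s, best_c = s, c
        return best_s

The geometric intuition is that a truly independent dynamic object should show minimal motion correlation with the camera, which is why no prior on object shape or size is needed.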
On the other hand, we propose a unified temporal offset and extrinsic rotation estimation method for heterogeneous sensor calibration by maximizing motion correlation. Accurate temporal and spatial calibration is crucial to a multi-sensor fusion-based system. Many sensor calibration approaches ignore temporal calibration, which is in fact as important as spatial calibration. Since fusion quality is more sensitive to extrinsic rotation than to extrinsic translation, we focus on temporal offset and extrinsic rotation estimation in this thesis. Most existing calibration methods are specialized for a certain sensor combination, such as an IMU-camera or a camera-LiDAR system. However, heterogeneous multi-sensor fusion is a growing trend in robotics, so a unified calibration method is desirable. To this end, we use ego-motion as the calibration feature; auxiliary calibration boards are not needed, since multiple odometry methods are available to capture sensor ego-motion. Using a high-frequency inertial measurement unit (IMU) as the calibration reference, an IMU-centric scheme is designed to achieve a unified framework that adapts to various target sensors. By combining independent IMU-centric calibration pairs, any two sensors sharing the same reference IMU can also be calibrated. The temporal offset is first estimated in real time, with a much larger estimation range than optimization-based methods. Given temporally aligned sensor motion, the extrinsic rotation can then be derived in closed form within the same 3D motion correlation mechanism. Experimental results on several sensor combinations show the accuracy and robustness of the proposed method in comparison with state-of-the-art calibration approaches, and the calibration result of a heterogeneous multi-sensor set demonstrates the scalability and versatility of our method.
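One common realization of such an IMU-centric pipeline, sketched below under stated assumptions, scans the cross-correlation of angular-rate magnitudes to find the temporal offset, then solves the extrinsic rotation in closed form by a Kabsch/SVD alignment of the temporally aligned angular-velocity vectors. The function names and the choice of Kabsch alignment are assumptions for illustration, not necessarily the thesis' exact estimator.

    import numpy as np

    def temporal_offset(omega_imu, omega_target, dt):
        # Offset (seconds) that maximizes the correlation of |omega|
        # traces, both resampled to the same period dt. A positive value
        # means the IMU trace lags the target trace. The target's angular
        # velocity can be differentiated from its odometry rotations.
        a = np.linalg.norm(omega_imu, axis=1)
        b = np.linalg.norm(omega_target, axis=1)
        n = min(len(a), len(b))
        a = (a[:n] - a[:n].mean()) / (a[:n].std() + 1e-12)
        b = (b[:n] - b[:n].mean()) / (b[:n].std() + 1e-12)
        lag = int(np.argmax(np.correlate(a, b, mode='full'))) - (n - 1)
        return lag * dt

    def extrinsic_rotation(omega_imu, omega_target):
        # Closed-form R such that omega_imu ~ R @ omega_target
        # (Kabsch/SVD), given temporally aligned (N, 3) angular-velocity
        # pairs. The determinant correction guards against reflections.
        H = omega_target.T @ omega_imu
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

Because the magnitude trace needs no knowledge of the extrinsic rotation, the offset can be searched over an arbitrarily wide window in real time, which is consistent with the larger estimation range claimed above.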