THESIS
2018
xii, 91, that is, xiv, 91 pages : illustrations ; 30 cm
Abstract
Using computer vision data to guide a robot's movement is fundamentally interesting to a roboticist.
Two approaches exist for using camera input to guide a robot's actions: either developing the
localization, tracking, and planning pipelines separately, or coupling the computer vision data
directly into the servo loop.
In this thesis we focus on the latter approach, directly using features from the detection
algorithm to guide the control of a dual-axis gimbal, and then extending the scheme to the complete
platform. Related estimation, frequency analysis, and localization techniques are also explored to
improve visual servo control performance.
In this thesis, we explore the path to building up the complete visual servo system. The challenges
and their solutions are presented in chronological order: one-degree-of-freedom, two-degree-of-freedom,
and four-degree-of-freedom schemes are introduced in turn. Two archetypal visual servo
schemes, position-based visual servo control (PBVS) and image-based visual servo control (IBVS),
are tested and compared. The basic methods are introduced upfront, while advanced topics such as
the velocity estimator, frequency-domain analysis, and other fine-tuning methods are covered in
the later chapters.
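To make the IBVS scheme concrete, the following is a minimal sketch of the classic image-based control law (in the style of Chaumette and Hutchinson's tutorial formulation), not the thesis's exact implementation: for each normalized image point feature (x, y) at estimated depth Z, one stacks the 2x6 interaction matrices and commands a camera twist v = -lambda * pinv(L) * (s - s*). The function names and the fixed gain are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix for one normalized image point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Camera twist [vx, vy, vz, wx, wy, wz] driving features s toward s_star.

    s, s_star: (N, 2) arrays of current / desired normalized image points.
    depths:    (N,) estimated depths of the features.
    """
    # Stack one 2x6 block per feature, then solve in a least-squares sense.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    error = (s - s_star).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error
```

When the features already coincide with their desired positions the error is zero, so the commanded twist is zero; any residual feature error produces a proportional corrective camera velocity.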
We show that image-based visual servo control performs much more robustly under a weak visual
frontend. Multiple fine-tuning methods are applied, and all algorithms run onboard in real time
in an indoor environment.