Autonomous robots have garnered increasing attention from the architecture, engineering,
construction, and facility management (AEC/FM) industries for their potential to enhance
construction monitoring and inspection. Traditionally, these tasks are performed by
construction workers using human eyes or measurement devices such as cameras and laser
scanners. However, these methods can be time-consuming and prone to insufficient or
redundant coverage due to the worker's experience-based decision-making. Mobile
construction robots, such as Unmanned Aerial Vehicles (UAVs), wheeled Unmanned Ground
Vehicles (UGVs), and quadruped robots, can autonomously navigate through dangerous or
inaccessible areas, and carry intelligent sensors to locate themselves and collect high-quality
data for monitoring and inspection tasks.
Several research gaps remain in the interoperability, performance, and feasibility of robotics within the construction domain: (1) little construction-related prior knowledge, such as the Building Information Model (BIM), has been utilized in the coverage planning of construction robots to fully capture a construction site; (2) traditional robot localization techniques rely purely on geometric features and do not exploit the object-level semantic information offered by deep learning (DL) approaches; (3) existing robot localization and planning systems suffer from drift and failures, and their robustness, accuracy, and efficiency need further improvement for construction monitoring and inspection tasks. Therefore, this thesis proposes three sub-topics, each addressing one of these key challenges.
The first part develops a BIM-supported framework to facilitate scan planning and motion
planning of autonomous LiDAR-carrying UAVs. The proposed framework selectively
integrates the geometry and semantics from BIM to construct a probabilistic 3D voxel map.
Then, a greedy algorithm is developed to iteratively generate waypoints with optimized
coverage. After that, a collision-free guiding path is computed through all the waypoints and then transformed into a high-degree polynomial trajectory. The proposed
framework was validated in a simulated construction scenario of water treatment facilities using
MATLAB and Unreal Engine 4 (UE4). The planned trajectory demonstrated smoothness,
energy efficiency, and sufficient coverage.
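The greedy waypoint generation described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the voxel sets, viewpoint names, and the `min_gain` cutoff are all invented for the example, and a real planner would compute visibility from the BIM-derived probabilistic voxel map.

```python
# Hypothetical sketch of greedy, coverage-driven waypoint selection over a
# voxel map. Voxel ids and candidate viewpoints are toy data.

def greedy_waypoints(candidates, target_voxels, min_gain=1):
    """Iteratively pick the viewpoint that covers the most uncovered voxels.

    candidates: dict mapping waypoint id -> set of voxel ids it observes
    target_voxels: set of voxel ids that must be covered
    Returns the ordered list of selected waypoint ids.
    """
    uncovered = set(target_voxels)
    remaining = dict(candidates)
    plan = []
    while uncovered and remaining:
        # Greedy step: maximize newly covered voxels at each iteration.
        best = max(remaining, key=lambda w: len(remaining[w] & uncovered))
        gain = len(remaining[best] & uncovered)
        if gain < min_gain:
            break  # no candidate adds meaningful coverage
        plan.append(best)
        uncovered -= remaining.pop(best)
    return plan

# Toy example: 6 target voxels, 3 candidate viewpoints.
candidates = {
    "w1": {1, 2, 3},
    "w2": {3, 4},
    "w3": {4, 5, 6},
}
plan = greedy_waypoints(candidates, target_voxels={1, 2, 3, 4, 5, 6})
```

Here the planner skips `w2` entirely, since `w1` and `w3` already cover every target voxel; this redundancy pruning is what distinguishes optimized coverage from an exhaustive sweep.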
The second part proposes a pose graph re-localization framework that utilizes object-level
landmarks to enhance a traditional visual localization system. The proposed framework builds
an object landmark dictionary from BIM as prior knowledge. Then, a multi-modal Deep Neural
Network (DNN) is proposed to realize 3D object detection in real-time, followed by instance-level
object association with false positive rejection, and relative pose estimation with outlier
removal. Finally, a keyframe-based graph optimization is performed to rectify the drifts of
traditional visual localization. The proposed framework was validated using a mobile platform with RGB-D and inertial sensors in an indoor office environment with furnishing elements.
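The drift rectification in this part can be illustrated with a deliberately simplified pose graph. The sketch below is a linear 1D stand-in for the keyframe-based graph optimization: the pose values, drift magnitude, and landmark position are invented, and a real system would optimize SE(3) poses with nonlinear edges and robust kernels.

```python
import numpy as np

# Toy 1D pose graph: four keyframe poses along a corridor, true spacing 1.0 m.
# Odometry drifts (+0.1 m per step); one object landmark at a known
# BIM position provides an absolute constraint, as in the framework above.

# Rows of A: a prior on x0, three odometry edges, one landmark observation.
A = np.array([
    [1.0, 0.0, 0.0, 0.0],   # prior:    x0 = 0
    [-1.0, 1.0, 0.0, 0.0],  # odometry: x1 - x0 = 1.1 (drifted)
    [0.0, -1.0, 1.0, 0.0],  # odometry: x2 - x1 = 1.1
    [0.0, 0.0, -1.0, 1.0],  # odometry: x3 - x2 = 1.1
    [0.0, 0.0, 0.0, 1.0],   # landmark: x3 = 3.0 (BIM object at 3.0 m)
])
b = np.array([0.0, 1.1, 1.1, 1.1, 3.0])

# Least-squares graph optimization redistributes the accumulated drift
# across all keyframes instead of correcting only the latest pose.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With odometry alone the final pose would drift to 3.3 m; the landmark edge pulls the optimized estimate back toward the true 3.0 m while spreading the residual error over the whole trajectory.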
The third part proposes a navigation and localization framework specifically for
quadruped robots. The proposed approach first constructs synthetic maps from BIM and
converts them into robot-compatible formats for path planning. Then, the navigation maps are combined with multi-sensor Simultaneous Localization And Mapping (SLAM) to enhance
robustness and accuracy in localization and point cloud reconstruction. Further, a DL-based object recognition model was trained to inspect building elements. The proposed framework
was validated on a quadruped robot with image, laser, and inertial sensors. The experiment was
carried out in an academic building environment.
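The BIM-to-navigation-map conversion in this part can be sketched as rasterizing building elements into an occupancy grid, the robot-compatible format most planners consume. This is a toy illustration: the wall layout, grid size, and use of breadth-first search are all assumptions for the example, not the thesis pipeline.

```python
# Hypothetical sketch: rasterize BIM wall footprints into a 2D occupancy
# grid, then plan a path with BFS. Map contents are invented.
from collections import deque

def rasterize_walls(walls, width, height):
    """walls: list of grid cells (x, y) occupied by building elements."""
    grid = [[0] * width for _ in range(height)]
    for x, y in walls:
        grid[y][x] = 1  # 1 = occupied
    return grid

def bfs_path(grid, start, goal):
    """4-connected breadth-first search; returns a list of cells or None."""
    width, height = len(grid[0]), len(grid)
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and grid[ny][nx] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None

# Toy map: a wall in column 2 with a doorway at (2, 1).
walls = [(2, 0), (2, 2), (2, 3)]
grid = rasterize_walls(walls, width=5, height=4)
path = bfs_path(grid, start=(0, 0), goal=(4, 0))
```

The planned path detours through the doorway cell rather than crossing the wall, which is exactly the behavior a BIM-derived synthetic map enables before any on-site sensing has occurred.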
This thesis conducted three case studies on various platforms, including a digital-twin-based simulation environment, a handheld sensor platform, and a mobile quadruped robot. The experimental scenarios range from complex industrial facilities to everyday indoor environments. The validation results exhibited significant improvements in planning and localization accuracy, scene understanding capability, robustness, and efficiency. Overall,
this thesis demonstrated the potential of BIM and artificial intelligence (AI) in mobile
construction robots for monitoring and inspection of building facilities.