THESIS
2023
1 online resource (x, 53 pages) : illustrations (chiefly color)
Abstract
It has always been difficult to recreate effective, human-like locomotion and other legged motions for bipedal robots. State-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) capability is a suitable framework for tackling this challenge, since the available sources for imitation are mainly state-only demonstrations. However, the common data sources for these frameworks are either expensive to set up or too inaccurate to deliver adequate results without computationally expensive preprocessing, which makes new or intricate movements difficult to learn. Inspired by the human learning process of acquiring advanced skills after mastering the basics, we propose a Motion Capture (MoCap) assisted video imitation learning framework based on Adversarial Motion Priors (AMP) and Motion capture-aided Video Imitation (MoVI). This framework produces smooth and natural imitation results by combining MoCap data of a base motion, such as walking, with 2D and 3D pose data from in-the-wild monocular videos of a target motion, such as running. By combining MoCap data of simple actions with any motion video, the system can generate a variety of life-like motions without the need for expensive datasets.
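For illustration only, and not taken from the thesis itself: a minimal PyTorch sketch of the AMP-style mechanism the abstract relies on, a discriminator that scores state transitions (state, next state) rather than state-action pairs, which is what lets action-free sources such as MoCap clips and per-frame video pose estimates serve as demonstrations. The class and function names here are hypothetical.

import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    # Scores (state, next_state) pairs; no actions are required, so MoCap
    # data and pose sequences extracted from video can both be used as
    # reference transitions.
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def style_reward(disc, s, s_next):
    # Least-squares GAN reward used in the AMP paper:
    # r = max(0, 1 - 0.25 * (D(s, s') - 1)^2).
    with torch.no_grad():
        d = disc(s, s_next)
        return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0).squeeze(-1)

In the AMP formulation, the discriminator is trained to output 1 on reference transitions and -1 on policy transitions, and the resulting style reward is added to whatever task reward drives the target motion.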