THESIS
2020
xiv, 89 pages : illustrations ; 30 cm
Abstract
Living with paralysis is not a rare phenomenon. About 15% of the world's population
lives with some form of disability, and of those, 2-4% experience significant functional
impediments that leave their daily activities highly dependent on caregivers. Thanks
to advances in electronic and mechanical assistive devices, people with disabilities now
have tools to become more independent. Among the different control signals available,
gaze is becoming increasingly popular due to its high efficiency, ease of use, applicability
to different types of activities, and robustness.
In this thesis, we describe systems that improve gaze-based interaction in both screen-based
and 3D environments. First, we describe an improved method for dwell-based gaze
typing using the user's past input history. We propose a probabilistic generative model
for gaze, which enables us to incorporate gaze behavior and past input history through
Bayes' rule. We evaluate our model on both able-bodied subjects and subjects with a
spinal cord injury. Compared to the standard method, we achieved 41.8% and 49.5%
faster speed, in each population respectively.
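As a rough illustration of this Bayesian combination, the Python sketch below fuses a Gaussian gaze likelihood with a prior derived from past input. The Gaussian model, the key layout, and all names are our own illustrative assumptions, not the generative model actually proposed in the thesis:

    import numpy as np

    def key_posterior(gaze_xy, key_centers, history_prior, sigma=0.5):
        # Bayes' rule: P(key | gaze) is proportional to
        # P(gaze | key) * P(key | history).
        # P(gaze | key): isotropic Gaussian around each key center.
        d2 = np.sum((key_centers - gaze_xy) ** 2, axis=1)
        likelihood = np.exp(-d2 / (2 * sigma ** 2))
        posterior = likelihood * history_prior
        return posterior / posterior.sum()

    # Example: three keys; past input makes the middle key likely a priori,
    # so an ambiguous fixation between keys resolves toward it.
    keys = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
    prior = np.array([0.2, 0.6, 0.2])
    print(key_posterior(np.array([0.55, 0.0]), keys, prior))

In a typing interface, the history prior would come from a language model conditioned on the text entered so far.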
Second, we describe a method that increases the robustness of a remote eye tracker to
head movement by incorporating interaction history. We use estimated gaze targets inferred
from the user's interaction to adjust the calibration parameters of the eye tracker
online. We reduce errors by up to 43% as the head moves over a 20 cm range.
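One simple way to picture such online recalibration (a minimal sketch under our own assumptions; the thesis's actual parameterization may differ) is an affine correction refit by least squares from pairs of raw gaze estimates and the targets inferred from interaction:

    import numpy as np

    def fit_affine_correction(raw_gaze, inferred_targets):
        # Fit corrected = [x, y, 1] @ A by least squares, where each row of
        # raw_gaze is the tracker's estimate and each row of inferred_targets
        # is the gaze target inferred from the user's interaction.
        X = np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))])
        A, *_ = np.linalg.lstsq(X, inferred_targets, rcond=None)
        return A  # 3x2 affine map

    def apply_correction(A, raw_gaze):
        return np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))]) @ A

    # Example: undo a constant offset such as one caused by a head shift.
    raw = np.random.rand(50, 2)
    shifted = raw + np.array([0.03, -0.02])
    A = fit_affine_correction(raw, shifted)
    print(np.abs(apply_correction(A, raw) - shifted).max())  # close to 0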
Third, we describe a method for controlling a feeding robot that combines gaze with
push-button input. This combination not only speeds up selection but, more importantly,
reduces the number of movements required of the user.
Finally, to enable gaze-based interaction while people move about the environment,
we propose a 3D gaze estimation algorithm that uses a mobile eye tracker. The algorithm
estimates 3D gaze in world coordinates by combining the human pose estimated through
SLAM, 2D gaze vectors estimated by a head-mounted eye tracker, and the 3D environment
context. We achieve 2.8° accuracy over a 3.4 m × 4 m region.
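The geometry behind such an estimate can be sketched as follows: a gaze direction from the head-mounted tracker is lifted into world coordinates with the SLAM pose, and the resulting ray is intersected with a plane standing in for the 3D environment context. This assumes the tracker's 2D output has already been converted to a 3D direction in the head frame (e.g., via the scene camera intrinsics); the pose, plane, and numbers are illustrative assumptions, not values from the thesis:

    import numpy as np

    def gaze_ray_world(R_wh, t_wh, gaze_dir_head):
        # Rotate the head-frame gaze direction into the world frame using
        # the SLAM pose (R_wh, t_wh); the ray starts at the head position.
        d = gaze_dir_head / np.linalg.norm(gaze_dir_head)
        return t_wh, R_wh @ d

    def intersect_plane(origin, direction, n, d):
        # Intersect the ray origin + s * direction with the plane n . x = d.
        s = (d - n @ origin) / (n @ direction)
        return origin + s * direction

    # Example: head 1.6 m above the floor (z-up), looking slightly downward.
    origin, direction = gaze_ray_world(
        np.eye(3), np.array([0.0, 0.0, 1.6]), np.array([1.0, 0.0, -0.4])
    )
    print(intersect_plane(origin, direction, np.array([0.0, 0.0, 1.0]), 0.0))
    # -> [4. 0. 0.]: the gaze lands 4 m ahead on the floor plane (z = 0)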