THESIS
2020
xii, 123 pages : illustrations ; 30 cm
Abstract
To interact effectively with robots, human-robot interfaces must decode human intent reliably. There are multiple possible communication channels for such interfaces. In this thesis, we develop hybrid interfaces for human-robot interaction, focusing on integrating cues from electroencephalography (EEG), eye gaze, force, and the environment.
First, we describe a hybrid EEG/gaze-based brain-computer interaction system. Past work has shown that motor imagery can be used to decode a subject's voluntary intent, but system accuracy is limited by the low signal-to-noise ratio of EEG signals. We investigated combining motor imagery with eye gaze to improve system performance, and demonstrated a hybrid interface for a robot arm that enables subjects to perform a pick-and-place task. We found that integrating EEG with eye gaze significantly improved system performance over either cue in isolation.
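As a rough sketch of how two noisy cues might be combined for target selection, the Python snippet below fuses per-target probabilities from a motor-imagery decoder with a gaze-derived prior under a conditional-independence assumption. All names (fuse_eeg_and_gaze, p_eeg, p_gaze) are illustrative and do not come from the thesis.

import numpy as np

def fuse_eeg_and_gaze(p_eeg, p_gaze):
    """Combine per-target probabilities from a motor-imagery decoder with a
    gaze-derived prior (e.g., dwell time near each object), assuming the two
    cues are conditionally independent given the intended target."""
    posterior = p_eeg * p_gaze
    posterior /= posterior.sum()          # renormalize over candidate targets
    return int(posterior.argmax())        # index of the selected object

# Example: three candidate objects; gaze disambiguates a noisy EEG decision.
p_eeg = np.array([0.40, 0.35, 0.25])
p_gaze = np.array([0.10, 0.80, 0.10])
print(fuse_eeg_and_gaze(p_eeg, p_gaze))   # -> 1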
Second, we addressed the problem of estimating the 3D gaze location in a world-centric coordinate system. The key challenge in 3D gaze tracking is estimating depth along the line of sight. We solved this problem by integrating gaze estimates with the structure of the environment, represented as a point cloud from an RGB-D camera. We implemented the algorithm on both remote and head-mounted eye trackers and tested the proposed system in a human-human collaborative assembly task.
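One simple way to realize such a depth estimate is to pick, from the point cloud, the surface point closest to the gaze ray. The sketch below implements that generic nearest-point-to-ray idea with NumPy; the function name and tolerance parameter are assumptions for illustration, not necessarily the algorithm used in the thesis.

import numpy as np

def gaze_point_from_cloud(origin, direction, cloud, max_perp_dist=0.02):
    """origin: eye position (3,); direction: gaze vector (3,);
    cloud: (N, 3) points in the same world frame.
    Returns the closest cloud point lying near the gaze ray, or None."""
    d = direction / np.linalg.norm(direction)
    rel = cloud - origin                                  # eye-to-point vectors
    t = rel @ d                                           # depth along the ray
    perp = np.linalg.norm(rel - np.outer(t, d), axis=1)   # distance off the ray
    valid = (t > 0) & (perp < max_perp_dist)              # in front of the eye
    if not valid.any():
        return None
    return cloud[np.argmin(np.where(valid, t, np.inf))]   # nearest valid surface point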
Next, we developed a gaze-based upper-limb rehabilitation system. Unlike previous work, which focused mainly on mechanical design, we proposed a gaze-modulated admittance control strategy that integrates gaze estimates with force estimates. The system can be configured into different working modes to fit patient needs at different stages of injury: a passive mode for the acute phase, an assistive mode for the sub-acute phase, and an active mode for the chronic phase. To keep the patient engaged, we designed a fishing game and implemented the algorithm on a planar robot. We also developed an immersive gaze-controlled painting task in virtual reality.
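A minimal sketch of what a gaze-modulated admittance law could look like is given below, assuming a planar mass-damper admittance model in which a gaze-confidence signal scales an assistive force toward the gazed target; the parameter names and values (M, D, k_assist) are illustrative, not the thesis controller.

import numpy as np

def admittance_step(x, v, f_human, gaze_target, gaze_conf, dt,
                    M=2.0, D=8.0, k_assist=5.0):
    """One integration step of a mass-damper admittance model.
    x, v: end-effector position/velocity (2-vectors for a planar robot);
    f_human: measured interaction force; gaze_conf in [0, 1]."""
    f_assist = k_assist * gaze_conf * (gaze_target - x)   # pull toward the gazed point
    a = (f_human + f_assist - D * v) / M                  # M*a + D*v = total force
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# With gaze_conf = 0 the robot simply complies with the measured force;
# larger values add assistance toward the gazed target.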
Finally, we investigated using single-trial EEG signals to distinguish between target and non-target faces presented in a rapid serial visual presentation paradigm. Unlike past work on event-related potential (ERP) detection, which averages over multiple trials, we focused on the more challenging task of detecting the target from a single-trial EEG signal. We used a convolutional neural network for classification, achieving performance that surpasses the support vector machine (SVM) algorithm commonly used in ERP detection tasks.
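For concreteness, the sketch below shows a small convolutional network for single-trial epochs in PyTorch, with a temporal convolution followed by a spatial convolution across channels. The layer sizes and the (channels x samples) input shape are assumptions for illustration, not the architecture reported in the thesis.

import torch
import torch.nn as nn

class ERPNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=128, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),   # temporal filtering
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),           # spatial filtering across channels
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * (n_samples // 4), n_classes)

    def forward(self, x):              # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a random batch of single-trial epochs.
logits = ERPNet()(torch.randn(4, 1, 32, 128))   # -> shape (4, 2)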