THESIS
2017
xii, 128 pages : illustrations ; 30 cm
Abstract
In this thesis, we propose a novel framework, the Generative Adaptive Subspace Self
Organizing Map (GASSOM), which utilizes sparsity and temporal slowness in learning
invariant feature detectors. Sparsity and temporal slowness have been identified as two critical
components in shaping visual receptive fields of neurons in the primary visual cortex of
animals with a developed vision processing system, such as primates. Sparsity is inspired by
Barlow's efficient coding hypothesis, which posits that neural population responses represent
sensory data using as few active neurons as possible. The principle of temporal slowness
assumes that neurons adapt to encode information about the environment, which is relatively
stable in comparison to the raw sensory signals. Using the GASSOM framework, we show that
temporal slowness can emerge in the model as it learns a better representation of sensory
signals, and that incorporating slowness results in representations that exhibit better invariance.
We validate the applicability of the GASSOM framework in tasks that require learning
invariant visual representations. We incorporate the GASSOM into a framework that jointly
learns a neural representation and a behavior, and use it to analyze the functional utility of
sparsity. We also use this joint learning framework to explain neurophysiological findings
about binocular neurons and coordinated eye movements in rodents. We propose the GASSOM
as a generic learning algorithm that can be used to form hierarchical organizations of feature
extractors that model the information flow in the visual cortex. Specifically, we study the
development of motion-in-depth-sensitive units. Finally, we extend the GASSOM to the event
domain by constructing a framework for learning invariant feature detectors from stimuli
generated by event-driven neuromorphic vision sensors.
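Below is a minimal illustrative sketch, not code from the thesis, of how the two principles described in the abstract might be combined in an adaptive-subspace self-organizing map: sparsity enters through winner-take-all selection over subspace projection energies, and temporal slowness through a bias toward the previous winner. The node count, patch dimension, subspace dimension, and bias weight are assumed values chosen only for illustration.

# Illustrative sketch (not the thesis implementation): winner selection in an
# adaptive-subspace SOM with a temporal-slowness bias. Each node holds an
# orthonormal basis; an input is encoded by the node whose subspace captures
# the most projection energy, biased toward the previous winner so that the
# code changes slowly over consecutive inputs.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, sub_dim = 16, 64, 2          # assumed sizes, for illustration only
slowness = 2.0                              # assumed bias toward the previous winner

# random orthonormal basis per node (columns span the node's subspace)
bases = [np.linalg.qr(rng.standard_normal((dim, sub_dim)))[0] for _ in range(n_nodes)]

def encode(x, prev_winner=None):
    """Return the index of the winning subspace for input x."""
    energy = np.array([np.sum((B.T @ x) ** 2) for B in bases])   # projection energy
    if prev_winner is not None:
        energy[prev_winner] *= slowness                          # slowness bias
    return int(np.argmax(energy))                                # sparse: one winner

# usage: encode two consecutive patches of a slowly changing stimulus
x_t = rng.standard_normal(dim)
x_t1 = x_t + 0.1 * rng.standard_normal(dim)
w_t = encode(x_t)
w_t1 = encode(x_t1, prev_winner=w_t)        # tends to reuse the same detector

In this toy setting, the winner-take-all step makes the population code sparse (one active subspace per input), and the bias term encourages the winning detector to persist while the raw input changes, which is the sense of temporal slowness the abstract refers to.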