THESIS
2021
1 online resource (ix, 49 pages) : illustrations (chiefly color)
Abstract
Grid cells in the entorhinal cortex exhibit hexagonal spatial firing patterns that are critical
to mammalian navigation. The renaissance of deep learning has revived the study of grid
pattern formation by training recurrent neural networks (RNNs), yet the underlying
mechanism remains unclear. In this thesis, we aim to connect the RNN with a classical
model, the continuous attractor neural network (CANN). By simplifying the RNN
architecture and comparing it with the CANN, we show that the two models are unified
from a band-pass filter perspective. Applying this theory, we build a minimal model of
grid pattern formation.
On the experimental side, we first train the RNN under different settings to verify our
claim. We discover an error stabilization phenomenon and a generalization failure in
the RNN model. Using a joystick visualization, we identify that both phenomena arise
from persistent grid activity near the border regions, revealing the distinct boundary
dynamics of attractor versus recurrent neural networks.