THESIS
2020
xiv, 170 pages : color illustrations ; 30 cm
Abstract
In this thesis, we study non-convex analysis for low-rank matrix problems through the lens of Riemannian optimization. In Chapter 2, we develop tools that provide theoretical guarantees for the asymptotic escape from strict saddle points and strict saddle sets under Riemannian gradient descent. In Chapter 3, we establish a fast and near-optimal convergence theory for a class of low-rank matrix recovery problems solved by Riemannian gradient descent with random initialization. In Chapter 4, we show how
to shape the landscape of a non-convex loss function using an activation function. This improves concentration and pushes the sampling requirement down to the optimal order, while preserving the well-behaved property that all local minima are global minima and all saddle points are strict.
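For context, a standard definition (not quoted from the thesis): a critical point $x$ of $f$, i.e. one with $\operatorname{grad} f(x) = 0$, is a strict saddle if the Riemannian Hessian has a strictly negative eigenvalue, $\lambda_{\min}\big(\operatorname{Hess} f(x)\big) < 0$, so that gradient descent has a descent direction along which to escape.

Below is a minimal, self-contained sketch of the kind of iteration the abstract refers to: gradient descent over the manifold of rank-r matrices, with truncated SVD as the retraction. This is an illustrative simplification, not the thesis's algorithm; the toy loss, step size, and problem sizes are assumptions for the example, and the tangent-space projection used by a full Riemannian method is omitted for brevity.

```python
import numpy as np

def retract_rank_r(X, r):
    """Retract a matrix onto the rank-r manifold via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def manifold_gd(grad_f, X0, r, step=0.5, iters=200):
    """Gradient step in the ambient space, then retraction back to rank r."""
    X = retract_rank_r(X0, r)
    for _ in range(iters):
        X = retract_rank_r(X - step * grad_f(X), r)
    return X

# Toy low-rank recovery: minimize f(X) = 0.5 * ||X - M||_F^2 over rank-r X,
# starting from a random initialization (cf. the setting of Chapter 3).
rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
X_hat = manifold_gd(lambda X: X - M, rng.standard_normal((n, n)), r)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

In this toy problem each iterate contracts toward the best rank-r approximation of M, and the random initialization keeps the trajectory away from low-dimensional sets of spurious critical points; making that kind of avoidance rigorous for genuinely non-convex recovery problems is what Chapters 2 and 3 address.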