THESIS
2020
xiii, 118 pages : illustrations ; 30 cm
Abstract
In this thesis, several algorithms are developed and analyzed for tensor-related problems, including tensor decomposition and tensor recovery.
A tensor is a multidimensional array that generalizes a matrix. On the one hand, real-world data are naturally high-order, for example, multichannel images, video, and streaming data. On the other hand, tensor methods have a wide range of applications, including latent variable models, neural network compression, and high-order moment estimation. Although residing in very high-dimensional spaces, tensors of interest frequently have low-complexity structure. Two commonly used structures are low CP rank and low multilinear rank. We develop tools to analyze tensors efficiently, with theoretical guarantees, by exploiting the low-rank structure. Three representative problems are studied, as summarized below:
● Two-stage algorithm for CP decomposition, which first simplifies the original high-dimensional nonconvex problem and then solves the CP decomposition of the tensor efficiently. For the symmetric case in the presence of noise, we design an alternating method with warm initialization (see the first sketch after this list). Furthermore, we empirically show its efficiency and robustness compared to a greedy method.
● Tucker-decomposition-based Robust PCA, which studies how to decompose a tensor into a low-Tucker-rank tensor and a sparse tensor. We develop a nonconvex algorithm that is both computationally efficient and theoretically guaranteed. Unlike existing algorithms, which design convex relaxations of tensor rank and sparsity and solve the problem by convex optimization similar to matrix RPCA, our method updates the low-rank tensor and the sparse tensor alternately (see the second sketch after this list). Acceleration is achieved by first projecting the tensor onto a low-dimensional subspace before obtaining a new estimate of the low-rank tensor via HOSVD.
● Optimal tensor recovery, which aims to recover a low-multilinear-rank tensor from a small number of linear measurements. Based on the tensor RIP, we propose a provable Riemannian gradient algorithm that achieves near-optimal sample complexity when initialized by one step of iterative hard thresholding (see the third sketch after this list).
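The three sketches below only illustrate the flavor of each bullet; they are generic, simplified loops, not the exact algorithms analyzed in the thesis. First, a plain alternating least squares (ALS) loop for CP decomposition of a symmetric third-order tensor, warm-started from the singular vectors of an unfolding. The function names, the rank argument R, and the SVD-based warm start are illustrative assumptions and do not reproduce the thesis's two-stage construction.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: (m, R) and (n, R) -> (m*n, R)."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als_symmetric(T, R, n_iter=50):
    """Rank-R CP approximation of a symmetric (n, n, n) tensor T via plain ALS."""
    n = T.shape[0]
    T1 = T.reshape(n, n * n)                      # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(n, n * n)   # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(n, n * n)   # mode-3 unfolding
    # Warm initialization: top-R left singular vectors of an unfolding.
    A = B = C = np.linalg.svd(T1, full_matrices=False)[0][:, :R]
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)   # fix B, C; solve for A
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C   # for symmetric input the factors typically agree up to scaling
```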
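Second, a minimal alternating loop in the spirit of the Tucker-based Robust PCA bullet: the sparse part is estimated by entrywise hard thresholding with a fixed threshold zeta, and the low-rank part by a truncated HOSVD projection. The threshold rule, the ranks argument, and all function names are illustrative placeholders, and the subspace-projection acceleration described above is omitted.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a third-order tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_truncate(X, ranks):
    """Project X onto multilinear rank <= ranks via truncated HOSVD."""
    Us = [np.linalg.svd(unfold(X, mode), full_matrices=False)[0][:, :r]
          for mode, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', X, *Us)     # compress to the core
    return np.einsum('abc,ia,jb,kc->ijk', core, *Us)  # re-expand

def tucker_rpca(Y, ranks, zeta=0.1, n_iter=30):
    """Split Y into low-Tucker-rank L and sparse S by alternating updates."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        resid = Y - L
        S = resid * (np.abs(resid) > zeta)   # sparse part: entrywise hard threshold
        L = hosvd_truncate(Y - S, ranks)     # low-rank part: truncated HOSVD
    return L, S
```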
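Third, a gradient-plus-retraction loop for recovering a low-multilinear-rank tensor from linear measurements y[i] = &lt;A[i], X&gt;, initialized by one hard-thresholded gradient step from zero (one step of iterative hard thresholding). For simplicity this sketch takes the full Euclidean gradient and retracts via truncated HOSVD; the tangent-space projection of the Riemannian gradient method analyzed in the thesis is omitted, and the step size and measurement format are assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a third-order tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_truncate(X, ranks):
    """Retract X back to multilinear rank <= ranks via truncated HOSVD."""
    Us = [np.linalg.svd(unfold(X, mode), full_matrices=False)[0][:, :r]
          for mode, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', X, *Us)
    return np.einsum('abc,ia,jb,kc->ijk', core, *Us)

def recover(A, y, ranks, step=1.0, n_iter=200):
    """A: (m, n1, n2, n3) sensing tensors; y: (m,) with y[i] = <A[i], X_true>."""
    m = y.shape[0]
    # Initialization: one step of iterative hard thresholding from zero.
    X = hosvd_truncate(np.einsum('m,mjkl->jkl', y, A) / m, ranks)
    for _ in range(n_iter):
        resid = np.einsum('mjkl,jkl->m', A, X) - y       # A(X) - y
        grad = np.einsum('m,mjkl->jkl', resid, A) / m    # Euclidean gradient
        # Gradient step followed by HOSVD retraction (tangent projection omitted).
        X = hosvd_truncate(X - step * grad, ranks)
    return X
```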