THESIS
2016
xvi, 136 pages : illustrations ; 30 cm
Abstract
Covariance estimation is a fundamental and long-standing problem, closely related to fields including multi-antenna communication systems, social networks, bioinformatics, and financial engineering. Classical estimators, although simple to construct, produce inaccurate estimates when the number of samples is small compared to the variable dimension. In this thesis, we study the problem of improving estimation accuracy by regularizing a covariance matrix based on prior information. Two types of regularization methods are considered: shrinking the raw estimator toward a known target, and imposing a structural constraint on it.
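The first type of regularization can be illustrated with the simplest case: linear shrinkage of the sample covariance toward a scaled identity target. The sketch below is a minimal, generic example of the idea, not the specific estimators developed in the thesis; the function name and the default target choice are illustrative assumptions.

```python
import numpy as np

def linear_shrinkage(X, rho, target=None):
    """Shrink the sample covariance of X (n samples x p variables, assumed
    zero-mean) toward a target matrix. Default target: identity scaled by
    the average variance, a common generic choice."""
    n, p = X.shape
    S = X.T @ X / n  # raw sample covariance; singular whenever n < p
    if target is None:
        target = (np.trace(S) / p) * np.eye(p)
    return (1.0 - rho) * S + rho * target

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))       # n = 10 << p = 50
S_shrunk = linear_shrinkage(X, rho=0.3)  # full rank despite n < p
```

Even a small shrinkage weight makes the estimate positive definite and hence invertible, which is exactly the regime (few samples, high dimension) the abstract describes.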
Our study first focuses on shrinkage covariance estimators for zero-mean data. For a family of estimators defined as maximizers of a penalized likelihood function, sufficient conditions for their existence are established; these conditions quantitatively reveal how the number of samples required for estimation is reduced. The condition is then particularized to two specific estimators, for which we show that it is also necessary. To compute the two estimators, numerical algorithms are devised within the majorization-minimization (MM) framework, under which their convergence is analyzed systematically. The problem is then extended to the joint estimation of the mean and the covariance matrix.
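One well-known instance of a penalized-likelihood shrinkage estimator for zero-mean, heavy-tailed data is the shrinkage Tyler estimator, computed by a fixed-point iteration in which each update can be read as an MM step. The sketch below implements that standard iteration as a stand-in for the flavor of algorithm described above; it is not claimed to be the thesis's exact estimator, and the trace normalization is one common convention among several.

```python
import numpy as np

def shrinkage_tyler(X, rho, n_iter=200, tol=1e-8):
    """Fixed-point iteration for a shrinkage Tyler-type scatter estimator:
        Sigma <- (1 - rho) * (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i)
                 + rho * I,
    followed by trace normalization to fix the scale ambiguity."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        # quadratic forms x_i^T Sigma^{-1} x_i for all samples at once
        w = np.einsum('ij,jk,ik->i', X, inv, X)
        S = (X.T * (1.0 / w)) @ X * (p / n)   # weighted sample covariance
        Sigma_new = (1.0 - rho) * S + rho * np.eye(p)
        Sigma_new *= p / np.trace(Sigma_new)  # normalize: trace(Sigma) = p
        if np.linalg.norm(Sigma_new - Sigma, 'fro') < tol:
            Sigma = Sigma_new
            break
        Sigma = Sigma_new
    return Sigma

rng = np.random.default_rng(0)
X_heavy = rng.standard_t(df=3, size=(40, 8))  # heavy-tailed samples
Sigma_hat = shrinkage_tyler(X_heavy, rho=0.2)
```

With rho > 0 the identity term keeps every iterate positive definite, which is how shrinkage relaxes the sample-size requirement mentioned in the abstract.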
For applications where the covariance matrix possesses a certain structure, we propose estimating it by maximizing the data likelihood function under the prior structural constraint. First, estimation under a general convex constraint is introduced, along with an efficient MM-based algorithm for computing the estimator. The algorithm is then tailored to several special structures that enjoy a wide range of applications in signal processing and related fields. In addition, two types of non-convex structures are discussed. The algorithms are proved to converge to stationary points of the corresponding problems. Numerical results show that the proposed estimators outperform state-of-the-art methods, achieving a smaller estimation error at a lower computational cost.
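To make the structural-constraint idea concrete, a simple example of a structure common in signal processing is a Toeplitz covariance (stationary process). The sketch below shows only the crudest structured estimate, the Frobenius-norm projection of the sample covariance onto symmetric Toeplitz matrices by diagonal averaging; the constrained maximum-likelihood estimators in the thesis are more sophisticated, and the function name here is illustrative.

```python
import numpy as np

def toeplitz_project(S):
    """Project a symmetric matrix S onto the set of symmetric Toeplitz
    matrices in Frobenius norm: average each diagonal, then rebuild the
    matrix so entry (i, j) depends only on |i - j|."""
    p = S.shape[0]
    c = np.array([np.diag(S, k).mean() for k in range(p)])
    idx = np.arange(p)
    return c[np.abs(idx[:, None] - idx[None, :])]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
S = X.T @ X / 200          # raw sample covariance
T = toeplitz_project(S)    # structured (Toeplitz) estimate
```

Imposing the structure reduces the number of free parameters from p(p+1)/2 to p, which is one intuition for why structured estimators can achieve smaller estimation error from few samples.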