THESIS
2017
xiv, 105 pages : illustrations ; 30 cm
Abstract
This thesis studies testing and scoring high-dimensional covariance matrices when data exhibit heteroskedasticity. The observations are modeled as Y_i = ω_i Z_i, where the Z_i's are i.i.d. p-dimensional random vectors with mean 0 and covariance Σ, and the ω_i's are positive random scalars reflecting heteroskedasticity.
The model is an extension of the elliptical distribution and can capture several stylized facts of financial data, including heteroskedasticity, heavy tails, and asymmetry.
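As a minimal, hypothetical sketch of this data-generating model (not taken from the thesis: the sample size, dimension, Gamma distribution for the ω_i, and AR(1) covariance below are assumptions chosen only for illustration), one could simulate heteroskedastic observations as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 100                      # sample size and dimension (assumed values)

# Assumed population covariance: AR(1) structure, Sigma[j, k] = 0.5 ** |j - k|
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])

# Z_i: i.i.d. p-dimensional vectors with mean 0 and covariance Sigma
Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)      # shape (n, p)

# omega_i: positive random scalars reflecting heteroskedasticity (assumed Gamma mixing)
omega = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observations Y_i = omega_i * Z_i
Y = omega[:, None] * Z
```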
Firstly, we aim to test H_0 : Σ ∝ Σ_0 in the high-dimensional setting where both the dimension p and the sample size n grow to infinity proportionally. We remove the heteroskedasticity by self-normalizing the observations, and establish a CLT for the linear spectral statistic (LSS) of S̃_n := (p/n) ∑_{i=1}^n Y_i Y_iᵀ / |Y_i|² = (p/n) ∑_{i=1}^n Z_i Z_iᵀ / |Z_i|². The CLT is different from the existing ones for the LSS of the usual sample covariance matrix S_n := (1/n) ∑_{i=1}^n Z_i Z_iᵀ
([1], [2]). Our tests based
on the new CLT neither assume a specific parametric distribution nor involve the fourth moment of Z_i. Numerical studies show that our tests work well even when the Z_i's are heavy-tailed.
Secondly, to evaluate the performance of different covariance matrix predictors,
we propose a scoring method in the heteroskedastic setting. Empirically, we use
our proposed scores to evaluate different covariance matrix predictors for the
returns of 72 stocks in the S&P 500 financials sector. The results show that: 1) self-normalizing the observations can improve prediction; and 2) compared with a sparsity assumption, an approximate factor model is more suitable for stock returns. These
results can be used to build better minimum-variance portfolios.