THESIS
2015
xx, 212 pages : illustrations ; 30 cm
Abstract
Efficient resource allocation plays a critical role in wireless communication systems. Many
resource allocation problems can be formulated as constrained optimization problems, and
when the problem is convex, there are many efficient iterative algorithms that find the optimal
solution yielding the desired control policy. In the literature, most convergence
results for these algorithms were established under the assumption that all the problem-specific
parameters are time-invariant. However, such an assumption is unrealistic in wireless systems,
because communication systems usually operate in a time-varying environment, where
the system parameters may change on a very short timescale. Moreover, each iteration usually
involves message passing among wireless nodes. As a result, the iterative algorithms
may not be able to converge fast enough to track the parameter variations.
Since the algorithm convergence error may significantly affect the system performance, it is
important to understand the convergence behavior of iterative algorithms under time-varying
parameters.
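
A minimal sketch (not taken from the thesis) of the phenomenon described above: gradient descent tracking the minimizer of a hypothetical objective f_t(x) = 0.5 * (x - theta_t)^2, where theta_t is a time-varying parameter (e.g., a channel state). Because theta_t keeps moving between iterations, the convergence error never fully vanishes.

import numpy as np

rng = np.random.default_rng(0)

step = 0.3    # gradient step size
theta = 0.0   # time-varying problem parameter (hypothetical)
x = 5.0       # iterate maintained by the algorithm

for t in range(50):
    x -= step * (x - theta)               # one gradient iteration at the current parameter
    theta += 0.1 * rng.standard_normal()  # parameter drifts before the next iteration
    if t % 10 == 0:
        print(f"iter {t:2d}  parameter {theta:+.3f}  tracking error {x - theta:+.3f}")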
In this thesis, a general convergence analysis framework for gradient-based iterative algorithms
under time-varying parameters is developed. The dynamics of the convergence errors
are modeled by a family of nonlinear virtual dynamical systems (VDS), and each VDS can
be viewed as an asymptotically stable dynamical system disturbed by an exogenous force
due to parameter variations. It is shown that studying the algorithm convergence is equivalent
to studying the stability of the VDS. Following this result, various techniques from
control theory are applied to study the VDS, and the algorithm convergence errors under time-varying
parameters are characterized. With the insight that stabilizing the VDS improves the
convergence performance, a family of adaptive compensation algorithms is derived. Both analytical
and numerical results demonstrate that the compensation algorithms offer a significant performance
advantage.
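
A minimal, illustrative sketch of the compensation idea (this is not the thesis's algorithm): the tracking error of a plain gradient iteration behaves like a stable system driven by the parameter drift, so feeding forward an estimate of that drift shrinks the steady-state error. The drift model and its estimate below are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

step = 0.3
drift = 0.05                      # hypothetical per-iteration parameter drift
theta, x_plain, x_comp = 0.0, 5.0, 5.0

for t in range(60):
    x_plain -= step * (x_plain - theta)             # uncompensated gradient iteration
    x_comp += -step * (x_comp - theta) + drift      # drift-compensated iteration
    theta += drift + 0.01 * rng.standard_normal()   # parameter drifts before the next step
    if t % 15 == 0:
        print(f"iter {t:2d}  plain error {x_plain - theta:+.4f}"
              f"  compensated error {x_comp - theta:+.4f}")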