THESIS
2015
xix, 203 pages : illustrations ; 30 cm
Abstract
There is a surge of interest in efficient resource control for future wireless communication
and control systems. Different applications, such as video streaming, energy harvesting,
and plant stabilization and optimization, pose different levels of challenge to
the resource control problem. There is a large body of existing literature on cross-layer resource
control for improving physical layer performance, such as maximizing throughput
and SINR, or minimizing bit error rate and mean square error. However, these
solution frameworks cannot be extended to solve the above problems, because their optimization
objectives are physical layer metrics that may not be directly
related to the application-level performance metrics in different application scenarios.
Furthermore, the resulting control policy is adaptive only to the channel state information
(CSI), which exploits good transmission opportunities in the time-varying physical
channels. For real-time multimedia streaming, a dynamic control policy that also adapts to
the instantaneous data queue state information (DQSI) is very important, because the queue
length conveys the urgency of the data flows. For energy harvesting, the dynamic control
policy should adapt to the instantaneous energy queue state information (EQSI), which
conveys the availability of the renewable energy. For plant optimization,
the dynamic control policy should adapt to the instantaneous plant state
information, which conveys the urgency of delivering the plant state to
the remote controller.
In this thesis, we consider the following four problems: 1) multi-antenna transceiver
optimization for multimedia streaming in interference networks, 2) optimal beamforming
for video streaming in multi-antenna interference networks via diffusion limit, 3)
power control for energy harvesting wireless systems with finite energy storage, and 4) power management for networked control systems over correlated wireless fading channels.
We formulate the associated stochastic optimization problems as infinite horizon
average cost Markov decision process (MDP) problems. It is well known that conventional
solutions, such as value iteration and policy iteration algorithms, have complexity that
grows exponentially with the dimension of the state space and give no design insights.
To obtain low complexity and insightful
solutions, we propose a continuous-time perturbation (CTP) approach by exploiting the
theories of dynamic programming and differential equations. Briefly, the CTP approach
offers a continuous-time approximation to the original discrete-time system, and
we solve an approximately equivalent problem in the continuous-time domain to obtain
low complexity solutions. However, the CTP approach only provides a framework for transforming
the original discrete-time problem into a continuous-time problem; solving the
associated multi-dimensional partial differential
equation (PDE), and hence the continuous-time problem itself, remains a case-by-case task.
In this thesis, we apply the
CTP approach to problems 1), 3), and 4) above to obtain low complexity
and insightful solutions. Note that these three problems pose different challenges in
solving the PDE under the CTP approach. In addition, we apply a diffusion approximation
to directly formulate the stochastic optimization in a continuous-time system
whose dynamics take the form of a stochastic differential equation. We apply this approach
to problem 2) above to obtain a low complexity and insightful solution. For
each of the four problems, we compare the performance of the proposed solution
with state-of-the-art baseline policies and show that significant performance gains can
be achieved with low complexity.
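To illustrate the conventional solution whose poor scaling motivates the CTP approach, the sketch below runs relative value iteration on a toy two-state, two-action average-cost MDP. The transition matrices and stage costs are invented for illustration and do not come from the thesis; the point is that each sweep costs O(|S|^2 |A|), and |S| itself grows exponentially with the number of queues or plant states, which is the curse of dimensionality.

```python
import numpy as np

# Toy average-cost MDP (illustrative values only, not from the thesis).
# P[a, s, s'] = transition probability from state s to s' under action a.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
# c[s, a] = per-stage cost of taking action a in state s.
c = np.array([[1.0, 2.0],
              [4.0, 0.5]])

V = np.zeros(2)                            # relative value function
for _ in range(1000):
    # Bellman backup: Q(s, a) = c(s, a) + sum_{s'} P(s' | s, a) V(s')
    Q = c + np.einsum('asp,p->sa', P, V)
    TV = Q.min(axis=1)
    lam = TV[0]                            # average-cost estimate (reference state 0)
    V = TV - lam                           # subtract reference value to keep V bounded

policy = Q.argmin(axis=1)                  # greedy policy w.r.t. converged V
```

After convergence, `lam` approximates the optimal average cost per stage and `V` the relative value function; the same structure, with the state enumerating every (CSI, DQSI) or (CSI, EQSI) pair, is what becomes intractable in the multi-dimensional problems treated in the thesis.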