THESIS
2013
xvi, 125 pages : illustrations ; 30 cm
Abstract
Network coding has emerged as an important technique for communication networks, as it can not only increase the achievable throughput but also make the network more robust. As a decentralized way to realize the benefits of network coding, random linear network coding has attracted considerable research interest, since its implementation requires no knowledge of the network topology. However, because of the randomness it introduces, decoding failure becomes an important concern in the implementation of random linear network coding, as it leads to inefficient use of network resources. Moreover, the encoding and decoding complexity of random linear network coding has also been a major issue in practice.
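
As a rough illustration of the mechanism described above (not a construction from the thesis itself), the Python sketch below mixes source packets with random coefficients over a small prime field and decodes by Gaussian elimination; decoding fails exactly when the random coefficient matrix is rank deficient. The field size, packet parameters, and helper names are illustrative assumptions.

import random

P = 257  # prime field GF(257); a hand-computable stand-in for the GF(2^8) typically used in practice

def rlnc_encode(packets, num_coded):
    # Each coded packet carries its random coefficient vector plus the mixed payload.
    k, length = len(packets), len(packets[0])
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randrange(P) for _ in range(k)]
        payload = [sum(c * pkt[j] for c, pkt in zip(coeffs, packets)) % P
                   for j in range(length)]
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    # Gaussian elimination on the augmented matrix [coefficients | payload].
    rows = [list(c) + list(p) for c, p in coded]
    rank = 0
    for col in range(k):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                              # rank deficient: decoding failure
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)         # field inverse via Fermat's little theorem
        rows[rank] = [x * inv % P for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return [row[k:] for row in rows[:k]]             # recovered source packets, in order

# Toy run: 4 source packets of 8 symbols, 5 coded packets received.
source = [[random.randrange(P) for _ in range(8)] for _ in range(4)]
decoded = rlnc_decode(rlnc_encode(source, 5), 4)
assert decoded is None or decoded == source          # None only in the rare rank-deficient case

GF(257) is used here only because prime-field arithmetic is easy to write by hand; practical implementations typically work over GF(2^8).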
This thesis is devoted to analyzing the decoding failure probability of sparse random linear network coding, which maximizes the sparsity of the network transfer matrix as an efficient way to reduce the implementation complexity. This problem is closely related to the rank distribution of random matrices over finite fields. The main contribution of this work is a quantitative characterization of how the sparsity of the transfer matrix affects the decoding failure probability. Our asymptotic analysis shows that the maximum achievable sparsity (the proportion of zero entries in the matrix) can approach 1 as the dimension of the matrix grows, under the requirement that the decoding failure probability vanishes. Based on the rank distribution results, we propose an inner bound on the subspace channel capacity together with a simple scheme that achieves it. Furthermore, we show that this scheme closely approaches the subspace channel capacity when the channel transfer matrix is large.
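
The qualitative dependence of the failure probability on the sparsity can be checked numerically. The following Monte Carlo sketch (a generic experiment, not the thesis's analytical result) draws n-by-n matrices whose entries are zero with probability zero_prob and uniform over the nonzero field elements otherwise, and estimates the probability that the matrix is singular; the field size, dimension, and sparsity grid are arbitrary choices.

import random

P = 257  # illustrative field size

def rank_gf(matrix):
    # Rank over GF(P) by Gaussian elimination.
    rows = [row[:] for row in matrix]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)
        rows[rank] = [x * inv % P for x in rows[rank]]
        for r in range(rank + 1, len(rows)):
            f = rows[r][col]
            if f:
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def singular_prob(n, zero_prob, trials=500):
    # Coarse Monte Carlo estimate of P(rank < n); increase trials for smoother numbers.
    fails = 0
    for _ in range(trials):
        m = [[0 if random.random() < zero_prob else random.randrange(1, P)
              for _ in range(n)] for _ in range(n)]
        if rank_gf(m) < n:
            fails += 1
    return fails / trials

n = 24
for zero_prob in (0.5, 0.8, 0.9, 0.95):
    print(f"sparsity {zero_prob}: estimated failure probability {singular_prob(n, zero_prob):.3f}")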
We then consider the effect of injecting additional packets into the network, which we formulate as a precoding problem. Under two different channel models for sparse random linear network coding, namely the sparse random matrix channel and the random acyclic network channel, we propose precoding methods that minimize the decoding failure probability when the field size is sufficiently large. For both channel models, we quantitatively show that the proposed precoding methods are optimal with high probability.
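
The precoding idea can be illustrated with a toy experiment (the dense random precode below is chosen only for simplicity and is not the method proposed in the thesis): k source packets are precoded into k + delta injected packets before passing through a sparse random transfer matrix, and the empirical failure rates with and without precoding are compared.

import random

P = 257  # illustrative field size

def rank_gf(matrix):
    # Rank over GF(P) by Gaussian elimination.
    rows = [row[:] for row in matrix]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)
        rows[rank] = [x * inv % P for x in rows[rank]]
        for r in range(rank + 1, len(rows)):
            f = rows[r][col]
            if f:
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def failure_rate(k, delta, n_recv, zero_prob, trials=1000):
    # Decoding fails when the k x n_recv end-to-end matrix has rank below k.
    fails = 0
    for _ in range(trials):
        if delta == 0:
            # No precoding: the k source packets are injected as they are.
            precode = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
        else:
            # Dense random precode mapping k source packets to k + delta injected packets.
            precode = [[random.randrange(P) for _ in range(k + delta)] for _ in range(k)]
        # Sparse random transfer matrix of the channel: injected packets -> received packets.
        transfer = [[0 if random.random() < zero_prob else random.randrange(1, P)
                     for _ in range(n_recv)] for _ in range(k + delta)]
        eff = [[sum(precode[i][t] * transfer[t][j] for t in range(k + delta)) % P
                for j in range(n_recv)] for i in range(k)]
        if rank_gf(eff) < k:
            fails += 1
    return fails / trials

k = 12
print("no precoding  :", failure_rate(k, 0, n_recv=15, zero_prob=0.8))
print("with precoding:", failure_rate(k, 4, n_recv=15, zero_prob=0.8))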
Finally, we present an application of the tools developed in our rank analysis to classical coding theory. We give a detailed characterization of the tradeoff among the minimum distance, the locality, and the sparsity of linear codes generated from a sparse random parity-check matrix over large finite fields.
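
For very small parameters the quantities in this tradeoff can be checked by brute force. The sketch below (illustrative only; the field size, code length, and sparsity level are arbitrary) builds a code from a sparse random parity-check matrix, computes its minimum distance by exhaustive enumeration, and bounds the locality of each symbol by the lightest parity check covering it.

import itertools, random

Q = 5  # tiny field so the whole code can be enumerated exhaustively

def sparse_parity_check(rows, n, zero_prob):
    # Sparse random parity-check matrix: zero w.p. zero_prob, otherwise uniform nonzero.
    return [[0 if random.random() < zero_prob else random.randrange(1, Q)
             for _ in range(n)] for _ in range(rows)]

def in_code(H, x):
    return all(sum(h * xi for h, xi in zip(row, x)) % Q == 0 for row in H)

def min_distance(H, n):
    # Minimum Hamming weight of a nonzero codeword (brute force over GF(Q)^n; a few seconds).
    best = n + 1
    for x in itertools.product(range(Q), repeat=n):
        if any(x) and in_code(H, x):
            best = min(best, sum(1 for xi in x if xi))
    return best

def locality_bound(H, i):
    # A parity check of weight w containing symbol i recovers it from w - 1 other symbols,
    # so the lightest such check upper-bounds the locality of symbol i.
    weights = [sum(1 for h in row if h) for row in H if row[i]]
    return min(weights) - 1 if weights else None  # None: symbol i appears in no check

n = 8
H = sparse_parity_check(4, n, zero_prob=0.6)
print("minimum distance:", min_distance(H, n))
print("locality bounds :", [locality_bound(H, i) for i in range(n)])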