With the widespread use of multimedia content, multimedia security and digital rights management (DRM) have become increasingly important. A straightforward way to protect multimedia data is to encrypt the whole bit stream with a traditional cryptographic algorithm such as AES or RC4. Nevertheless, the encryption of multimedia files has to be carried out judiciously. For one thing, ciphering the complete compressed file may impose an excessive computational burden and power consumption on the decoder, and perhaps even on the server/encoder/transcoder. While this may not be critical for PC-based systems, it could lead to excessive battery drain on a mobile device. More importantly, compressed multimedia files typically exhibit a well-defined hierarchical structure (e.g., in the H.264, MPEG-4, H.263, MPEG-2, AVS, MP3, AAC, JPEG, and JPEG2000 formats) that can be exploited in several useful ways, e.g., for scalability, transcoding, and rate shaping. These structures, however, are not recognizable in the ciphertext and are therefore wasted.
In this thesis, we address the problem of joint compression and encryption by introducing randomness into the entropy coder, including the Huffman coder, the arithmetic coder, and the Lempel-Ziv-Welch coder. These entropy coders are widely used in state-of-the-art text/image/video coding standards, e.g., TIFF, JPEG, and H.264.
First, we study the error recovery of variable length codes (VLCs), which lays the theoretical foundation for many multimedia encryption schemes based on Huffman-like coding. More specifically, we extend the problem of precisely evaluating the error-recovery capability of VLCs to the binary symmetric channel (BSC) case. We propose two criteria for evaluating the error-recovery performance of VLCs in this setting, together with their explicit expressions. In addition, we establish a link between these results and previous results that assume a single bit-inversion error.
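To make the error-recovery notion concrete, the following small simulation estimates how far a single bit inversion propagates in a VLC-coded bit stream. This is a sketch only: the prefix code, the source distribution, and the boundary-realignment criterion used as a resynchronization proxy are illustrative choices, not the exact criteria derived in the thesis.

```python
import random

# Hypothetical binary prefix code and source pmf, used only for illustration.
CODE = {'a': '0', 'b': '10', 'c': '11'}
PMF  = {'a': 0.5, 'b': 0.25, 'c': 0.25}

def encode(symbols):
    return ''.join(CODE[s] for s in symbols)

def decode_boundaries(bits):
    """Greedy prefix decode; return the set of positions where codewords end."""
    inv = {v: k for k, v in CODE.items()}
    boundaries, word = set(), ''
    for i, b in enumerate(bits):
        word += b
        if word in inv:
            boundaries.add(i + 1)   # a codeword ends just after bit i
            word = ''
    return boundaries

def propagation_length(bits, flip):
    """Bits from the flipped position until the decoder's codeword boundaries
    realign with the encoder's (a proxy for error-recovery speed)."""
    corrupted = bits[:flip] + ('1' if bits[flip] == '0' else '0') + bits[flip + 1:]
    common = decode_boundaries(bits) & decode_boundaries(corrupted)
    resync = min((p for p in common if p > flip), default=len(bits))
    return resync - flip

random.seed(0)
symbols = random.choices(list(PMF), weights=PMF.values(), k=2000)
bits = encode(symbols)
mean = sum(propagation_length(bits, f)
           for f in random.sample(range(len(bits)), 200)) / 200
print(f"mean error propagation: {mean:.2f} bits")
```

Once the two decodings share a codeword boundary after the flip, both continue over an identical bit suffix from an identical position, so the decoded symbols agree from that point on.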
We also address a conjecture that has been open in the information theory community for over twenty years. It states that a certain class of codes achieves the optimal error-recovery performance among all codes with the same coding efficiency, provided that a constraint on the probability mass function is satisfied. We settle this conjecture with a rigorous proof. To the best of our knowledge, this is the first proof of the optimality of VLCs in terms of error-recovery performance. In addition, the proof offers insight into the working mechanism of the suffix condition widely used in heuristic methods for finding error-resilient codes.
Next, we investigate practical multimedia encryption schemes based on Huffman-like coding. We present a chosen-plaintext attack that breaks the Multiple Huffman Table (MHT) schemes. Based on the cryptanalysis results, we suggest an improved version of MHT that incorporates a stream cipher. We also propose two criteria for selecting Huffman tables so as to achieve a high level of security with a small number of tables. These results extend readily to Exp-Golomb coding, where the alphabet size is conceptually infinite.
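The core MHT idea, encoding successive symbols with different keyed Huffman tables, can be sketched as follows. The two tables and the short periodic key are toy choices for illustration; the actual scheme derives many tables by Huffman-tree mutation, and the improved version drives the table selection with a stream cipher rather than a fixed repeating key.

```python
import itertools

# Two hypothetical prefix tables over the same alphabet (toy stand-ins for
# mutated Huffman tables).
TABLES = [
    {'a': '0',  'b': '10', 'c': '11'},
    {'a': '11', 'b': '0',  'c': '10'},
]

def mht_encode(symbols, key):
    ks = itertools.cycle(key)                 # toy periodic key stream
    return ''.join(TABLES[next(ks)][s] for s in symbols)

def mht_decode(bits, key):
    ks, out, word = itertools.cycle(key), [], ''
    inv = [{v: k for k, v in t.items()} for t in TABLES]
    table = inv[next(ks)]                     # table for the first symbol
    for b in bits:
        word += b
        if word in table:                     # codeword complete: emit symbol
            out.append(table[word])
            word, table = '', inv[next(ks)]   # switch to the next keyed table
    return out

msg = list('abacba')
ct = mht_encode(msg, key=[0, 1, 1, 0])
assert mht_decode(ct, key=[0, 1, 1, 0]) == msg
```

Because each individual table is prefix-free and the decoder knows which table is active at every step, decoding with the correct key is unambiguous; without the key, the same bit string parses differently under different table sequences.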
We also investigate lightweight multimedia encryption schemes based on arithmetic coding, which offers higher coding efficiency than Huffman coding. We give an adaptive chosen-ciphertext attack of linear complexity that breaks the recently proposed secure arithmetic coding, an advanced version of interval splitting arithmetic coding. We then design a system that resists all existing attacks and can be conveniently integrated with context-based coding.
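As a rough illustration of randomizing an arithmetic coder, the sketch below permutes the order of the symbol subintervals at every coding step under a keyed PRNG. Note that this is a related but deliberately simpler randomization than the interval splitting scheme discussed in the thesis; the fixed model, the key handling, and the exact-fraction arithmetic are all illustrative assumptions.

```python
from fractions import Fraction
import random

# Toy fixed source model (exact fractions avoid precision issues).
PROBS = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}

def layout(rng):
    """Keyed subinterval layout: shuffle the order of symbols within [0, 1)."""
    order = list(PROBS)
    rng.shuffle(order)
    cum, lo = {}, Fraction(0)
    for s in order:
        cum[s] = (lo, lo + PROBS[s])
        lo += PROBS[s]
    return cum

def encode(symbols, key):
    rng = random.Random(key)
    lo, hi = Fraction(0), Fraction(1)
    for s in symbols:
        a, b = layout(rng)[s]                 # keyed layout at each step
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2                      # any point in [lo, hi) works

def decode(x, n, key):
    rng = random.Random(key)
    out = []
    for _ in range(n):
        cum = layout(rng)                     # same keyed layout sequence
        s = next(t for t, (a, b) in cum.items() if a <= x < b)
        a, b = cum[s]
        x = (x - a) / (b - a)                 # rescale into the subinterval
        out.append(s)
    return out

code = encode(list('abcab'), key=42)
assert decode(code, 5, key=42) == list('abcab')
```

A decoder holding the wrong key reconstructs the subinterval layouts in a different order and parses the same number into a different symbol sequence, which is the basic source of confidentiality in such keyed coders.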
Finally, we address the problem of designing a secure joint compression-encryption scheme based on Lempel-Ziv-Welch (LZW) coding, a universal source coding technique. We show that the proposed secure LZW algorithm, which uses random dictionary permutation and insertion, provides a high level of security without loss of coding efficiency.
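A minimal way to see how a keyed dictionary changes LZW output is to permute the initial dictionary entries under a secret key. This is a toy sketch over a three-letter alphabet: the actual scheme applies random permutation and insertion throughout the dictionary's growth, not only at initialization.

```python
import random

def _initial_dict(key, alphabet):
    # Keyed permutation of the codes assigned to the initial entries.
    rng = random.Random(key)
    codes = list(range(len(alphabet)))
    rng.shuffle(codes)
    return dict(zip(alphabet, codes))

def lzw_encode(text, key, alphabet='abc'):
    dic = _initial_dict(key, alphabet)
    nxt, out, w = len(alphabet), [], ''
    for ch in text:
        if w + ch in dic:
            w += ch                           # extend the current phrase
        else:
            out.append(dic[w])                # emit code for longest match
            dic[w + ch] = nxt                 # grow the dictionary
            nxt += 1
            w = ch
    if w:
        out.append(dic[w])
    return out

def lzw_decode(codes_in, key, alphabet='abc'):
    dic = {c: ch for ch, c in _initial_dict(key, alphabet).items()}
    nxt = len(alphabet)
    w = dic[codes_in[0]]
    out = [w]
    for c in codes_in[1:]:
        entry = dic[c] if c in dic else w + w[0]  # standard LZW corner case
        out.append(entry)
        dic[nxt] = w + entry[0]
        nxt += 1
        w = entry
    return ''.join(out)

msg = 'abababcabc'
assert lzw_decode(lzw_encode(msg, key=7), key=7) == msg
```

Since the compressed stream is just a sequence of dictionary indices, permuting the index assignment leaves the phrase-matching process, and hence the compression ratio, unchanged while making the indices key-dependent.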