Complexity related aspects of image compression

Digital signal processing (DSP) and, in particular, image processing have been studied for many years. However, only recent advances in computing technology have made it possible to use DSP in day-to-day applications. Images are now commonly used in many applications, and their increasingly ubiquitous use raises new challenges: users expect images to be transmitted in a minimum of time and to occupy as little storage space as possible. These requirements call for efficient image compression algorithms. Users also want compression and decompression to be very fast, so that they do not have to wait for an image to become usable. The complexities of compression algorithms therefore need to be studied.

In this thesis the term complexity is linked to the execution time of an algorithm: the lower the complexity of an algorithm, the faster it runs. The complexity of an algorithm can be analyzed in two ways. One way involves only an intuitive understanding of the complexity; such techniques are classified as qualitative analysis techniques. With qualitative analysis it is not possible to measure or quantify the gain when an algorithm is optimized. The other approach involves building a model of the complexity, which allows an objective comparison of different algorithms. This thesis addresses both kinds of complexity analysis.

In the first part of the thesis, image compression algorithms are optimized on the basis of qualitative analysis. In the second part, a methodology to measure complexity is presented and applied. The optimizations focus on the Discrete Wavelet Transform (DWT) and on one particularly efficient implementation of the DWT: the lifting scheme (LS). This thesis demonstrates that the LS can lead to a four-fold gain in terms of memory operations and that efficient implementations can minimize the required memory bandwidth. A row-based algorithm for computing the DWT is also presented.
This algorithm requires only a fraction of the memory used by conventional algorithms. The LS can also be used to compute the Integer Wavelet Transform (IWT). This non-linear transform is analyzed using signal-processing techniques, and the compression performance degradation expected from using the IWT in place of the DWT is predicted theoretically. This makes it possible to understand all the consequences of optimizing through the use of the IWT.

A new measure of the complexity of signal-processing algorithms is presented in the second part of the thesis. The proposed measure takes into account arithmetic operations, tests (or branches) and memory operations. It works in two steps: one depends on the algorithm, the other on the architecture on which the algorithm is implemented. The complexity of the algorithm is then expressed as a weighted sum of the algorithm-dependent counters, where the weights are determined by the architecture-dependent step. Even with this well-defined methodology, however, complexity analysis remains a long and difficult process. One way to simplify the problem is to exploit the fact that most algorithms can be divided into a succession of small tasks (or blocks); this is especially true of image compression algorithms. The complexities of the most common processing blocks for image compression are therefore studied separately, and the complexity analysis of a new algorithm then reduces to summing the complexities of its building blocks.

The complexities of four well-known compression schemes are analyzed using the proposed measure. The first algorithm studied is based on vector quantization, the second is the baseline codec of the JPEG standard, and the remaining two are wavelet-based codecs (SPIHT and LZC). In each case the complexity prediction is verified against the measured execution time on a Pentium processor. The complexities of all the codecs depend on the compression ratio.
Therefore, a rate-complexity curve is constructed for each encoder and decoder. This is a logical complement to the rate-distortion curve normally presented with each algorithm. Compression algorithms can now be compared using both their rate-distortion and their rate-complexity performances.
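As an illustration of the lifting scheme and the Integer Wavelet Transform discussed above, the sketch below implements one level of the reversible CDF 5/3 lifting transform (the integer wavelet used in JPEG 2000). This is not the thesis's own code; the function names, the even-length restriction, and the boundary handling are simplifying assumptions.

```python
def dwt53_forward(x):
    """One level of the reversible CDF 5/3 lifting transform
    (an integer wavelet transform), for an even-length signal."""
    n = len(x)
    assert n % 2 == 0 and n >= 4
    # Symmetric (mirror) extension at the signal boundaries.
    xe = lambda i: x[-i if i < 0 else (2 * (n - 1) - i if i >= n else i)]
    h = n // 2
    # Predict step: each odd sample is predicted from its even neighbours;
    # the detail coefficient d[i] is the integer prediction error.
    d = [x[2 * i + 1] - (xe(2 * i) + xe(2 * i + 2)) // 2 for i in range(h)]
    de = lambda i: d[0] if i < 0 else d[i]  # mirrored detail at the boundary
    # Update step: even samples become the low-pass approximation s[i].
    s = [x[2 * i] + (de(i - 1) + d[i] + 2) // 4 for i in range(h)]
    return s, d

def dwt53_inverse(s, d):
    """Exactly undo dwt53_forward by running the lifting steps in reverse."""
    h = len(s)
    de = lambda i: d[0] if i < 0 else d[i]
    even = [s[i] - (de(i - 1) + d[i] + 2) // 4 for i in range(h)]
    ee = lambda i: even[h - 1] if i >= h else even[i]
    odd = [d[i] + (even[i] + ee(i + 1)) // 2 for i in range(h)]
    x = [0] * (2 * h)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step only adds or subtracts an integer-rounded prediction, every step can be undone exactly; this perfect invertibility on integers is what makes the IWT attractive for lossless coding, at the cost of the rounding non-linearity analyzed in the thesis.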

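The two-step complexity measure summarized in the abstract can be sketched as follows. The counter categories match those named above (arithmetic operations, tests, memory operations), but the weights and per-block counts are purely hypothetical, not figures from the thesis.

```python
from dataclasses import dataclass

@dataclass
class OpCounts:
    """Algorithm-dependent step: operation counters for one processing block."""
    arithmetic: int = 0  # additions, multiplications, ...
    tests: int = 0       # comparisons / branches
    memory: int = 0      # loads and stores

def complexity(counts, weights):
    """Architecture-dependent step: weighted sum of the counters,
    with weights calibrated for a given processor."""
    return (weights["arithmetic"] * counts.arithmetic
            + weights["tests"] * counts.tests
            + weights["memory"] * counts.memory)

# A codec viewed as a succession of building blocks: its complexity is
# the sum of the complexities of the blocks (hypothetical counts).
blocks = [
    OpCounts(arithmetic=1200, tests=150, memory=800),  # e.g. a transform block
    OpCounts(arithmetic=300, tests=400, memory=250),   # e.g. an entropy-coding block
]
cpu_weights = {"arithmetic": 1.0, "tests": 3.0, "memory": 2.5}
total = sum(complexity(b, cpu_weights) for b in blocks)
```

Re-targeting the analysis to another architecture then only requires a new weight table; the per-block counters are reused unchanged, which is what makes the block-wise decomposition economical.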
File Type: pdf
File Size: 5 KB
Publication Year: 2001
Author: Reichel, Julien
Supervisor: Murat Kunt
Institution: Swiss Federal Institute of Technology
Keywords: