Hierarchical Lattice Vector Quantisation Of Wavelet Transformed Images

The objectives of the research were to develop embedded and non-embedded lossy image coding algorithms based on lattice vector quantisation and the discrete wavelet transform. We also wanted to develop context-based entropy coding methods (as opposed to simple first-order entropy coding). The main objectives can therefore be summarised as follows: (1) to develop algorithms for intra-band and inter-band formed vectors (vectors with coefficients from the same sub-band or across different sub-bands) which compare favourably with current high-performance wavelet-based coders, both in terms of the rate/distortion performance of the decoded image and in subjective quality; (2) to develop new context-based coding methods based on vector quantisation.

The algorithms we have developed fall into two categories: (a) entropy-coded and binary (uncoded) successive approximation lattice vector quantisation (SA-LVQ-E and SA-LVQ-B), which quantise vectors formed intra-band. These are embedded coding algorithms: truncating the received bit-stream at any point produces reconstructed images at a series of lower bit-rates. (b) An entropy-coded pyramid vector quantisation (ECPVQ) algorithm based on forming vectors inter-band. This is a non-embedded coding algorithm in which the bit-rate, and therefore the distortion of the decoded image, can be adjusted only at the encoder (by adjusting the resolution of the lattice quantiser).

The category (a) algorithms gave mixed results. The binary (uncoded) version, with no entropy coding, outperforms an equivalent scalar quantisation based method, but the reverse is true for the respective entropy-coded versions. This leads us to conclude that optimal scalar quantisation based embedded coding algorithms are likely to be superior to their vector quantisation based equivalents, and this finding led us to concentrate on non-embedded lattice VQ algorithms (category (b)).
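The embedded property described in category (a) — that truncating the bit-stream anywhere yields a valid, coarser reconstruction — comes from successive approximation. The thesis applies this to lattice vectors; the following is only a minimal scalar sketch of the refinement idea, with illustrative names and parameters not taken from the thesis.

```python
def sa_encode(x, t0, n_passes):
    """Successive-approximation encode of a value x in [0, 2*t0).
    Emits one refinement bit per pass by bisecting the current
    uncertainty interval; a prefix of the bit list is itself a
    valid (coarser) code -- the embedded property."""
    bits, low, high = [], 0.0, 2 * t0
    for _ in range(n_passes):
        mid = (low + high) / 2
        if x >= mid:
            bits.append(1)
            low = mid
        else:
            bits.append(0)
            high = mid
    return bits

def sa_decode(bits, t0):
    """Decode any prefix of the bit-stream to the midpoint of the
    remaining uncertainty interval."""
    low, high = 0.0, 2 * t0
    for b in bits:
        mid = (low + high) / 2
        if b:
            low = mid
        else:
            high = mid
    return (low + high) / 2

# Truncating the received bits gives progressively coarser estimates.
bits = sa_encode(0.71, t0=0.5, n_passes=8)
for k in (2, 4, 8):
    print(k, sa_decode(bits[:k], t0=0.5))
```

Each extra bit halves the reconstruction uncertainty, which is why the decoder can stop at any prefix and still produce an image at a correspondingly lower quality.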
We developed a new algorithm called ECPVQ which codes vectors formed hierarchically, by grouping coefficients from different scales. The main original contribution was in grouping large numbers of equiprobable lattice code-vectors on pyramid-shaped shells into a small number of groups known as sub-classes and super-classes. This enabled efficient codes, designed from training data, to entropy code the class indices. We found that the optimal quantiser in terms of maximising rate vs. PSNR performance, which we have called the Zn/Dn augmented lattice, was in fact a combination of the well-known Zn and Dn lattices. This involves using finer quantisation near the origin, contrary to the approach adopted by many researchers using scalar quantisation based coders. We also developed an efficient context-based code for one of the three entropy codes which we needed to design for the ECPVQ algorithm. Our results, using a Huffman and an arithmetic coder, show that ECPVQ is comparable in rate vs. PSNR performance, particularly at low bit-rates, with the very best current state-of-the-art wavelet-based coders [55], and is, as far as the author is aware, superior to all lattice quantisation based coders. More importantly, we found that the decoded images from ECPVQ tend to preserve subtle texture detail better than the state-of-the-art coders against which we compared our results visually (the methods of Said and Pearlman [50] and the context-based scalar quantisation algorithm of Chrysafis and Ortega [52]). This appears to be due to the finer quantisation of low-energy wavelet coefficients that occurs with the augmented lattice.
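The building blocks named above are standard lattices: Zn (all integer vectors) and Dn (integer vectors with even coordinate sum), with code-vectors grouped by the pyramid shell they lie on (their L1 norm). The sketch below shows the textbook nearest-point rules for these two lattices and the shell index; it is not the thesis's Zn/Dn augmented construction, and the test vector is illustrative.

```python
def quantize_Zn(x):
    """Nearest point in the Z^n lattice: round each coordinate."""
    return [round(xi) for xi in x]

def quantize_Dn(x):
    """Nearest point in D^n = {z in Z^n : sum(z) even}.
    Standard rule: round every coordinate; if the coordinate sum
    comes out odd, re-round the coordinate with the largest
    rounding error in the opposite direction."""
    z = [round(xi) for xi in x]
    if sum(z) % 2 != 0:
        k = max(range(len(x)), key=lambda i: abs(x[i] - z[i]))
        z[k] += 1 if x[k] > z[k] else -1
    return z

def pyramid_shell(z):
    """L1 norm of a lattice point: the index of the pyramid-shaped
    shell it lies on, used to group code-vectors into classes."""
    return sum(abs(zi) for zi in z)

x = [0.6, -1.2, 0.4, 0.8]
print(quantize_Zn(x))            # -> [1, -1, 0, 1]
print(quantize_Dn(x))            # coordinate sum forced even
print(pyramid_shell(quantize_Zn(x)))  # -> 3
```

Because every point on a given pyramid shell of Zn has the same probability under a Laplacian-like source, grouping by shell (and then into sub-/super-classes) is what makes the entropy codes for the class indices compact.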

File Type: pdf
File Size: 2 MB
Publication Year: 1999
Author: Vij, Madhav
Supervisor: Nick Kingsbury
Institution: University of Cambridge, Department of Engineering, Signal Processing Group
Keywords: