Hierarchical Lattice Vector Quantisation Of Wavelet Transformed Images (1999)
Lossless and nearly lossless digital video coding
In lossless coding, compression and decompression of source data result in the exact recovery of the individual elements of the original source data. Lossless image/video coding is necessary in applications where no loss of pixel values is tolerable. Examples include medical imaging, remote sensing, image/video archives and studio applications where tandem- and trans-coding are used in editing, which can lead to accumulating errors. Nearly-lossless coding is used in applications where a small error, defined as a maximum error or as a root mean square (rms) error, is tolerable. In lossless embedded coding, a losslessly coded bit stream can be decoded at any bit rate lower than the lossless bit rate. In this thesis, research on embedded lossless video coding based on a motion-compensated framework, similar to that of MPEG-2, is presented. Transforms that map integers into ...
Abhayaratne, Charith — University of Bath
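The "transforms that map integers into" integers mentioned in the abstract can be illustrated with the S-transform, a reversible integer approximation of the Haar average/difference pair. This is a generic sketch of the idea, not the specific transform developed in the thesis:

```python
def s_forward(a: int, b: int):
    """Forward S-transform: reversible integer average / difference."""
    h = a - b              # high-pass (difference)
    l = b + (h >> 1)       # low-pass: equals floor((a + b) / 2)
    return l, h

def s_inverse(l: int, h: int):
    """Exact inverse: recovers (a, b) with no rounding loss."""
    b = l - (h >> 1)
    a = b + h
    return a, b
```

Because both directions use only integer shifts, the inverse recovers the original pair exactly, which is the property lossless coding relies on.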
ROBUST WATERMARKING TECHNIQUES FOR SCALABLE CODED IMAGE AND VIDEO
In scalable image/video coding, high-resolution content is encoded to the highest visual quality and the bit-streams are adapted to cater for various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content protection data such as watermarks, and are considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and the improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. Spread-spectrum, and particularly wavelet-based, image watermarking schemes often provide better robustness to compression attacks due to their multi-resolution decomposition, and this domain was hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
Bhowmik, Deepayan — University of Sheffield
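The spread-spectrum principle referred to above can be sketched in a few lines: one watermark bit is spread over many samples by a pseudo-random ±1 pattern and recovered by correlation, which survives moderate distortion. The function names, the strength `alpha` and the key/seed are illustrative, not taken from the thesis:

```python
import random

def embed(signal, bit, alpha=2.0, seed=42):
    """Add a pseudo-random +/-1 pattern, sign-modulated by the bit."""
    rng = random.Random(seed)
    pattern = [rng.choice((-1.0, 1.0)) for _ in signal]
    s = 1.0 if bit else -1.0
    return [x + alpha * s * p for x, p in zip(signal, pattern)]

def detect(watermarked, seed=42):
    """Correlate with the same keyed pattern; the sign recovers the bit."""
    rng = random.Random(seed)
    pattern = [rng.choice((-1.0, 1.0)) for _ in watermarked]
    corr = sum(y * p for y, p in zip(watermarked, pattern))
    return corr > 0
```

Compression-like noise shifts the correlation only slightly, so the bit survives as long as the correlation keeps its sign; wavelet-domain variants apply the same idea to selected subband coefficients.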
Scalable Single and Multiple Description Scalar Quantization
Scalable representation of a source (e.g., image/video/3D mesh) enables decoding of the encoded bit-stream on a variety of end-user terminals with varying display, storage and processing capabilities. Furthermore, it allows for source communication via channels with different transmission bandwidths, as the source rate can be easily adapted to match the available channel bandwidth. From a different perspective, error-resilience against channel losses is also very important when transmitting scalable source streams over lossy transmission channels. Driven by the aforementioned requirements of scalable representation and error-resilience, this dissertation focuses on the analysis and design of scalable single and multiple description scalar quantizers. In the first part of this dissertation, we consider the design of scalable wavelet-based semi-regular 3D mesh compression systems. In this context, our design methodology thoroughly analyzes the different modules of the mesh coding system in order to single out appropriate design ...
Satti, Shahid Mahmood — Vrije Universiteit Brussel
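The scalable ("embedded") quantization studied above has a simple prototype: successive-approximation quantization, where every additional bit halves the uncertainty interval, so truncating the bit stream at any point still yields a valid, coarser reconstruction. A minimal sketch of the principle, not the quantizer designs from the dissertation:

```python
def bitplane_encode(x: float, n_planes: int, xmax: float = 1.0):
    """Successive-approximation quantizer: each bit halves the interval."""
    bits, lo, hi = [], -xmax, xmax
    for _ in range(n_planes):
        mid = (lo + hi) / 2
        if x >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

def bitplane_decode(bits, xmax: float = 1.0):
    """Reconstruct at the midpoint of the remaining interval."""
    lo, hi = -xmax, xmax
    for b in bits:
        mid = (lo + hi) / 2
        if b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Decoding any prefix of `bits` works, with error at most xmax / 2**k after k bits — the embedded property that lets the rate match the available channel bandwidth.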
Traditional and Scalable Coding Techniques for Video Compression
In recent years, the usage of digital video has steadily been increasing. Since the amount of data needed to represent uncompressed digital video is very high, lossy source coding techniques are usually employed in digital video systems to compress that information and make it more suitable for storage and transmission. Source coding algorithms for video compression can be grouped into two broad classes: traditional and scalable techniques. The goal of traditional video coders is to maximize compression efficiency for a given amount of compressed data. The goal of scalable video coding is instead to provide a scalable representation of the source, such that subsets of it can describe the same video source, in an optimal way, at reduced temporal, spatial and/or quality resolution. This thesis is focused on the ...
Cappellari, Lorenzo — University of Padova
Dynamic Scheme Selection in Image Coding
This thesis deals with the coding of images with multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication takes an ever larger place in our lives, and the number of transmitted images is also increasing every day. Research on image compression therefore remains an active area. However, the current trend is to add several functionalities to the compression scheme, such as progressiveness for more comfortable browsing of web sites or databases. Classical image coding schemes have a rigid structure. They usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second-generation schemes use the concept of objects in an image, and introduce a model of the human visual system into the design of the coding scheme. Dynamic coding schemes, as their name suggests, make ...
Fleury, Pascal — Swiss Federal Institute of Technology
Nonlinear rate control techniques for constant bit rate MPEG video coders
Digital visual communication has been increasingly adopted as an efficient new medium in a variety of different fields: multimedia computers, digital television, telecommunications, etc. Exchange of visual information between remote sites requires that digital video is encoded by compressing the amount of data and transmitting it through specified network connections. The compression and transmission of digital video is an amalgamation of statistical data coding processes, which aims at efficient exchange of visual information without technical barriers due to different standards, services, media, etc. It is associated with a series of different disciplines of digital signal processing, each of which can be applied independently, and rests on a few different technical principles: rate-distortion theory, prediction techniques and control theory. The MPEG (Moving Picture Experts Group) video compression standard is based on this paradigm; thus, it contains a variety of different coding ...
Saw, Yoo-Sok — University Of Edinburgh
Content Scalability in Multiple Description Image and Video Coding
High compression ratio, scalability and reliability are the main issues in transmitting multimedia content over best-effort networks. Scalable image and video coding meets user requirements by truncating the scalable bitstream at different quality, resolution and frame-rate points. However, the performance of scalable coding deteriorates rapidly over packet networks if the base-layer packets are lost during transmission. Multiple description coding (MDC) has emerged as an effective source coding technique for robust image and video transmission over lossy networks. In this research, the problem of incorporating scalability into MDC for robust image and video transmission over best-effort networks is addressed. The first contribution of this thesis is a strategy for generating more than two descriptions using a multiple description scalar quantizer (MDSQ), with the objective of jointly decoding any number of descriptions in both balanced and unbalanced configurations. The ...
Majid, Muhammad — University of Sheffield
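The two-description case can be sketched with a pair of staggered coarse quantizers — a deliberately simplified stand-in for the index-assignment MDSQ used in the thesis: either description alone gives a coarse reconstruction, while both together refine each other (here, exactly, for integer samples):

```python
def encode(x: int):
    """Two staggered step-2 quantizers of an integer sample x."""
    d1 = x // 2          # description 1
    d2 = (x + 1) // 2    # description 2: same step, offset by one
    return d1, d2

def decode_central(d1: int, d2: int):
    """Both descriptions received: exact reconstruction."""
    return d1 + d2

def decode_side1(d1: int):
    """Only description 1 received: coarse estimate, error at most 1."""
    return 2 * d1

def decode_side2(d2: int):
    """Only description 2 received: coarse estimate, error at most 1."""
    return 2 * d2 - 1
```

Generating more than two descriptions, and trading off balanced against unbalanced side distortions, is exactly where the thesis's MDSQ design goes beyond this toy construction.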
A flexible scalable video coding framework with adaptive spatio-temporal decompositions
The work presented in this thesis covers topics that extend the scalability functionalities in video coding and improve compression performance. Two main novel approaches are presented, each targeting a different part of the scalable video coding (SVC) architecture: a motion-adaptive wavelet transform based on the lifting implementation of the wavelet transform, and the design of a flexible framework for generalised spatio-temporal decomposition. The motion-adaptive wavelet transform is based on the newly introduced concept of the connectivity-map, which describes the underlying irregular structure of regularly sampled data. To enable a scalable representation of the connectivity-map, the corresponding analysis and synthesis operations have been derived. These are then employed to define a joint wavelet connectivity-map decomposition that serves as an adaptive alternative to the conventional wavelet decomposition. To demonstrate its applicability, the presented decomposition scheme is used in the proposed SVC framework, ...
Sprljan, Nikola — Queen Mary University of London
Advances in Perceptual Stereo Audio Coding Using Linear Prediction Techniques
A wide range of techniques for coding single-channel speech and audio signals has been developed over the last few decades. In addition to pure redundancy reduction, sophisticated source and receiver models have been considered for reducing the bit-rate. Traditionally, speech and audio coders are based on different principles and thus each of them offers certain advantages. With the advent of high-capacity channels, networks, and storage systems, the bit-rate versus quality compromise will no longer be the major issue; instead, attributes like low delay, scalability, computational complexity, and error concealment in packet-oriented networks are expected to be the major selling factors. Typical audio coders such as MP3 and AAC are based on subband or transform coding techniques that are not easily reconcilable with a low-delay requirement. The reasons for their inherently longer delay are the relatively long band-splitting filters ...
Biswas, Arijit — Technische Universiteit Eindhoven
Efficient Perceptual Audio Coding Using Cosine and Sine Modulated Lapped Transforms
The increasing number of simultaneous input and output channels utilized in immersive audio configurations primarily in broadcasting applications has renewed industrial requirements for efficient audio coding schemes with low bit-rate and complexity. This thesis presents a comprehensive review and extension of conventional approaches for perceptual coding of arbitrary multichannel audio signals. Particular emphasis is given to use cases ranging from two-channel stereophonic to six-channel 5.1-surround setups with or without the application-specific constraint of low algorithmic coding latency. Conventional perceptual audio codecs share six common algorithmic components, all of which are examined extensively in this thesis. The first is a signal-adaptive filterbank, constructed using instances of the real-valued modified discrete cosine transform (MDCT), to obtain spectral representations of successive portions of the incoming discrete time signal. Within this MDCT spectral domain, various intra- and inter-channel optimizations, most of which are of ...
Helmrich, Christian R. — Friedrich-Alexander-Universität Erlangen-Nürnberg
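A key property behind the MDCT filterbank mentioned above is the Princen–Bradley condition on the window, w[n]² + w[n+N]² = 1, which makes the 50%-overlap-add of adjacent blocks alias-cancelling; the common sine window satisfies it exactly. A quick numeric check (illustrative, not code from the thesis):

```python
import math

def sine_window(N: int):
    """Sine window of length 2N, used with an N-band MDCT."""
    return [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

def princen_bradley_ok(w, tol=1e-12):
    """Check w[n]^2 + w[n+N]^2 == 1 for all n (perfect-reconstruction condition)."""
    N = len(w) // 2
    return all(abs(w[n] ** 2 + w[n + N] ** 2 - 1.0) < tol for n in range(N))
```

The identity holds because w[n+N] = cos(π/(2N)·(n+0.5)) for the sine window, so the condition reduces to sin² + cos² = 1.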
Multiple-description lattice vector quantization
In this thesis we construct and analyze index-assignment based multiple-description coding schemes.
Ostergaard, Jan — Delft University of Technology
In a communication system, it is undoubtedly of great interest to compress the information generated by the data sources to its most elementary representation, so that the amount of power necessary for reliable communication can be reduced. It is often the case that the redundancy shown by a wide variety of information sources can be modelled by taking into account the probabilistic dependence among consecutive source symbols rather than the probability distribution of a single symbol. These sources are commonly referred to as single or multiterminal sources "with memory", where, in the latter case, the memory is the temporal correlation among the consecutive symbol vectors generated by the multiterminal source. It is well known that, when the source has memory, the average amount of information per source symbol is given by the entropy rate, which is lower than its entropy ...
Del Ser, Javier — University of Navarra (TECNUN)
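The gap between entropy and entropy rate described above is easy to see numerically for a two-state symmetric Markov source: the marginal distribution is uniform (entropy of 1 bit), but strong temporal correlation drives the entropy rate well below 1 bit per symbol. A small worked check with illustrative parameters:

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Symmetric binary Markov source: stay in the current state with
# probability 0.9, switch with probability 0.1 (illustrative values).
switch = 0.1
marginal_entropy = h2(0.5)   # stationary distribution is uniform: 1 bit/symbol
entropy_rate = h2(switch)    # H(X_n | X_{n-1}) for the symmetric chain
```

Here the memory (switching with probability 0.1 rather than 0.5) drops the information content from 1 bit to about 0.47 bits per symbol, which is precisely the entropy-rate saving the abstract refers to.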
WATERMARKING FOR 3D REPRESENTATIONS
In this thesis, a number of novel watermarking techniques for different 3D representations are presented. A novel watermarking method is proposed for the mono-view video, which might be interpreted as the basic implicit representation of 3D scenes. The proposed method solves the common flickering problem in the existing video watermarking schemes by means of adjusting the watermark strength with respect to temporal contrast thresholds of human visual system (HVS), which define the maximum invisible distortions in the temporal direction. The experimental results indicate that the proposed method gives better results in both objective and subjective measures, compared to some recognized methods in the literature. The watermarking techniques for the geometry and image based representations of 3D scenes, denoted as 3D watermarking, are examined and classified into three groups, as 3D-3D, 3D-2D and 2D-2D watermarking, in which the pair of symbols ...
Koz, Alper — Middle East Technical University, Department of Electrical and Electronics Engineering
Advanced equalization techniques for DMT-based systems
Digital subscriber line (DSL) technology is one of the fastest growing broadband internet access media. Whereas asymmetric DSL (ADSL) already offers data rates of a few megabits per second, next-generation ADSL2+ and VDSL promise even higher bit rates to support so-called triple play (high-quality video, voice and high-speed data). The use of a large bandwidth over the phone line (up to 12 MHz for VDSL) induces impairments, such as severe channel distortion, echo, narrow-band radiofrequency interference (RFI) and crosstalk from other DSL systems. DSL communication makes use of so-called discrete multitone (DMT) modulation, supplemented with advanced digital signal processing algorithms, to tackle these impairments and serve a maximum number of customers. In this thesis, we focus on channel equalization and RFI mitigation algorithms that outperform existing algorithms in terms of bit rate. DMT equalization is typically done by means of ...
Vanbleu, Koen — Katholieke Universiteit Leuven
On Ways to Improve Adaptive Filter Performance
Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters. Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm. We derive convergence and tracking properties of NLMS-OCF using a simple model ...
Sankaran, Sundar G. — Virginia Tech
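The single-vector NLMS baseline that NLMS-OCF generalises can be sketched as a short system-identification demo; the step size, filter length and test setup are illustrative, not from the dissertation:

```python
import random

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """Identify an FIR system with NLMS: one update per input vector."""
    w = [0.0] * n_taps
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]       # [x[i], x[i-1], ...]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[i] - y                            # a-priori error
        norm = sum(ui * ui for ui in u) + eps   # normalisation term
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w

# Noise-free identification of a short FIR channel.
rng = random.Random(0)
w_true = [0.5, -0.3, 0.1]
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(w_true[k] * x[i - k] for k in range(len(w_true))) if i >= 2 else 0.0
     for i in range(len(x))]
w_hat = nlms_identify(x, d, n_taps=3)
```

Where NLMS corrects the weights along the single current input vector, NLMS-OCF (and its special case APA) uses multiple recent input vectors per update, which is the source of its faster convergence reported in the dissertation.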