Content Scalability in Multiple Description Image and Video Coding

High compression ratio, scalability and reliability are the main issues for transmitting multimedia content over best-effort networks. Scalable image and video coding meets user requirements by truncating the scalable bitstream at different quality, resolution and frame-rate points. However, the performance of scalable coding deteriorates rapidly over packet networks if the base-layer packets are lost during transmission. Multiple description coding (MDC) has emerged as an effective source coding technique for robust image and video transmission over lossy networks. In this research, the problem of incorporating scalability into MDC for robust image and video transmission over best-effort networks is addressed. The first contribution of this thesis is a strategy for generating more than two descriptions using a multiple description scalar quantizer (MDSQ), with the objective of jointly decoding any number of descriptions in a balanced or unbalanced manner. The ...
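
The two-description starting point of an MDSQ can be sketched as follows: a central uniform scalar quantizer whose index is split over two descriptions via the classic two-diagonal index assignment. The step size and assignment here are illustrative only, not the thesis construction (which extends to more than two descriptions).

```python
# Toy two-description MDSQ: central index k is split into side indices
# (i, j) = ((k + 1) // 2, k // 2); joint decoding inverts the map,
# while a lone side index is resolved at the midpoint of its two cells.

STEP = 0.25  # central quantizer step size (illustrative choice)

def encode(x):
    """Quantize x and return the two side-description indices."""
    k = int(round(x / STEP))          # central index
    return (k + 1) // 2, k // 2       # two-diagonal index assignment

def decode(i=None, j=None):
    """Joint decoding if both indices arrive, side decoding otherwise."""
    if i is not None and j is not None:
        k = i + j                     # invert the two-diagonal assignment
        return k * STEP
    # A lone side index is ambiguous between two central cells;
    # reconstruct at their midpoint.
    if i is not None:
        cells = [2 * i - 1, 2 * i]
    else:
        cells = [2 * j, 2 * j + 1]
    return sum(cells) / len(cells) * STEP

i, j = encode(1.0)
central = decode(i, j)      # both descriptions: exact central cell
side = decode(i=i)          # one description: coarser reconstruction
```

Losing either description thus degrades the reconstruction gracefully instead of catastrophically, which is the core appeal of MDC over layered coding on lossy paths.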

Majid, Muhammad — University of Sheffield


Multiple Description Coding for Path Diversity Video Streaming

In current heterogeneous communication environments, the great variety of multimedia systems and applications, combined with the fast evolution of networking architectures and topologies, gives rise to new research problems related to the various elements of the communication chain. This includes the ever-present problem in video communications of coping with transmission errors and losses. In this context, video streaming with path diversity has appeared as a novel communication framework, involving different technological fields and posing several research challenges. The research work carried out in this thesis is a contribution to robust video coding and adaptation techniques in the field of Multiple Description Coding (MDC) for multipath video streaming. The thesis starts with a thorough study of MDC and its theoretical basis, followed by a description of the most important practical implementation aspects currently available in the literature. ...

Correia, Pedro Daniel Frazão — University of Coimbra


Advances in Perceptual Stereo Audio Coding Using Linear Prediction Techniques

A wide range of techniques for coding single-channel speech and audio signals has been developed over the last few decades. In addition to pure redundancy reduction, sophisticated source and receiver models have been considered for reducing the bit-rate. Traditionally, speech and audio coders are based on different principles, and thus each of them offers certain advantages. With the advent of high-capacity channels, networks, and storage systems, the bit-rate versus quality compromise will no longer be the major issue; instead, attributes like low delay, scalability, computational complexity, and error concealment in packet-oriented networks are expected to be the major selling factors. Typical audio coders such as MP3 and AAC are based on subband or transform coding techniques that are not easily reconcilable with a low-delay requirement. The reasons for their inherently longer delay are the relatively long band splitting filters ...

Biswas, Arijit — Technische Universiteit Eindhoven


Contributions to Improved Hard- and Soft-Decision Decoding in Speech and Audio Codecs

Source coding is an essential part of digital communications. In error-prone transmission conditions, even with the help of channel coding, which normally introduces delay, bit errors may still occur. Single bit errors can result in significant distortions. Therefore, a robust source decoder is desired for adverse transmission conditions. Compared to traditional hard-decision (HD) decoding and error concealment, soft-decision (SD) decoding offers higher robustness by exploiting the residual redundancy of the source and utilizing bit-wise channel reliability information. Moreover, the quantization codebook index can be mapped either to a fixed number of bits using fixed-length (FL) codes, or to a variable number of bits using variable-length (VL) codes. The codebook entry can be either fixed over time or time-variant. However, using a fixed scalar quantization codebook leads to the same performance for correlated and uncorrelated processes. This thesis aims to improve ...
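
The HD/SD contrast above can be sketched for a 2-bit scalar quantizer with fixed-length codes; the codebook values and bit reliabilities below are illustrative, not those of the thesis codecs. SD decoding forms an MMSE estimate by weighting every codebook entry with its posterior probability given the bit-wise reliabilities, instead of committing to one hard bit pattern.

```python
# Soft-decision (SD) vs hard-decision (HD) decoding of a 2-bit
# scalar quantizer index transmitted over a noisy channel.

CODEBOOK = {(0, 0): -1.5, (0, 1): -0.5, (1, 0): 0.5, (1, 1): 1.5}

def sd_decode(p_one):
    """p_one[k] = channel's probability that transmitted bit k was 1."""
    post = {}
    for bits, value in CODEBOOK.items():
        p = 1.0
        for b, p1 in zip(bits, p_one):
            p *= p1 if b == 1 else (1.0 - p1)
        post[bits] = p
    total = sum(post.values())
    # MMSE estimate: expectation of the codebook value under the posterior
    return sum(CODEBOOK[b] * p / total for b, p in post.items())

def hd_decode(p_one):
    """Hard-decision baseline: threshold each bit, then table lookup."""
    bits = tuple(1 if p > 0.5 else 0 for p in p_one)
    return CODEBOOK[bits]

# With reliable bits SD and HD nearly agree; an unreliable bit
# (p ~ 0.5) pulls the SD estimate toward the codebook mean instead
# of committing to a possibly wrong entry.
```

Source residual redundancy would enter the same formula as an a priori term multiplying each posterior, which is where the correlated-process gains discussed above come from.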

Han, Sai — Technische Universität Braunschweig


Error Resilience and Concealment Techniques for High Efficiency Video Coding

This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness reveals that HEVC offers weak protection against network losses, with a significant impact on video quality. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, which improves the decoded video quality without compromising the compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions in the case of video streams received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the ...

Carreira, João Filipe Monteiro — Loughborough University London


Iterative Joint Source-Channel Coding Techniques for Single and Multiterminal Sources in Communication Networks

In a communication system it is undoubtedly of great interest to compress the information generated by the data sources to its most elementary representation, so that the amount of power necessary for reliable communications can be reduced. The redundancy exhibited by a wide variety of information sources can often be modelled by taking into account the probabilistic dependence among consecutive source symbols rather than the probabilistic distribution of a single symbol. These sources are commonly referred to as single or multiterminal sources "with memory"; in the multiterminal case, the memory is the temporal correlation among the consecutive symbol vectors generated by the multiterminal source. It is well known that, when the source has memory, the average amount of information per source symbol is given by the entropy rate, which is lower than its entropy ...
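
The entropy-versus-entropy-rate gap can be checked numerically on a toy binary first-order Markov source; the transition probabilities below are hypothetical, chosen only to make the source strongly persistent.

```python
# Entropy H(X) vs entropy rate H(X2 | X1) for a binary Markov source:
# memory lowers the per-symbol information content below the marginal.
import math

def h_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

P1 = {0: 0.1, 1: 0.9}    # P(next = 1 | current): a persistent source
pi = {0: 0.5, 1: 0.5}    # stationary distribution (chain is symmetric)

marginal_entropy = h_bits(pi.values())                  # H(X) = 1 bit
entropy_rate = sum(pi[s] * h_bits([P1[s], 1 - P1[s]]) for s in P1)
# entropy_rate ~ 0.47 bits/symbol: roughly half the bits suffice
# once the symbol-to-symbol dependence is exploited.
```

This is exactly the redundancy that the joint source/channel coding schemes discussed in the thesis aim to exploit.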

Del Ser, Javier — University of Navarra (TECNUN)


Robust and Multiresolution Video Delivery: From H.26x to Matching Pursuit Based Technologies

With the joint development of networking and digital coding technologies, multimedia, and more particularly video services, are clearly becoming one of the major consumers of the new information networks. The rapid growth of the Internet and the computer industry, however, results in a very heterogeneous and commonly overloaded infrastructure. Video service providers nevertheless have to offer their clients the best possible quality according to their respective capabilities and communication channel status. The Quality of Service is influenced not only by compression artifacts, but also by unavoidable packet losses. Hence, the packet video stream clearly has to fulfil possibly contradictory requirements, namely coding efficiency and robustness to data loss. The first contribution of this thesis is the complete modeling of video Quality of Service (QoS) in standard, and more particularly MPEG-2, applications. The performance of Forward Error Control (FEC) ...

Frossard, Pascal — Swiss Federal Institute of Technology


Traditional and Scalable Coding Techniques for Video Compression

In recent years, the usage of digital video has steadily increased. Since the amount of data required for uncompressed digital video representation is very high, lossy source coding techniques are usually employed in digital video systems to compress that information and make it more suitable for storage and transmission. The source coding algorithms for video compression can be grouped into two broad classes: traditional and scalable techniques. The goal of traditional video coders is to maximize the compression efficiency for a given amount of compressed data. The goal of scalable video coding is instead to give a scalable representation of the source, such that subsets of it can optimally describe the same video source at reduced temporal, spatial and/or quality resolution. This thesis is focused on the ...

Cappellari, Lorenzo — University of Padova


Distributed Video Coding for Wireless Lightweight Multimedia Applications

In the modern wireless age, lightweight multimedia technology stimulates attractive commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video compression systems meeting such key requirements as very low encoding complexity, robustness to transmission errors and scalability is no straightforward task. The answer can be found in fundamental information-theoretic results, according to which efficient compression can be achieved by exploiting knowledge of the source statistics at the decoder only, giving rise to distributed, also known as Wyner-Ziv, video coding. This dissertation engineers efficient lightweight Wyner-Ziv video coding schemes, with emphasis on several design aspects and applications. The first contribution of this dissertation focuses on the design of effective side information generation techniques to boost the compression capabilities of Wyner-Ziv video coding systems. To this end, overlapped block motion estimation ...
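
The idea of decoder-side side information generation can be sketched on 1-D "frames": the decoder interpolates a missing Wyner-Ziv frame from its two decoded key frames, assuming a single global, constant-velocity motion. The signals and search range are illustrative; the thesis uses overlapped block motion estimation, which is considerably more elaborate.

```python
# Toy motion-compensated interpolation: find the shift d between the
# two key frames, then predict the middle frame halfway along it.

def side_information(prev, nxt, search=2):
    n = len(prev)
    clamp = lambda i: min(max(i, 0), n - 1)
    # Estimate the (even) displacement d with prev[k] ~ nxt[k + d]
    best, best_cost = 0, float("inf")
    for d in range(-search, search + 1, 2):   # even shifts halve cleanly
        cost = sum(abs(prev[k] - nxt[clamp(k + d)]) for k in range(n))
        if cost < best_cost:
            best, best_cost = d, cost
    h = best // 2
    # Middle frame assumed to lie halfway along the motion trajectory
    return [(prev[clamp(k - h)] + nxt[clamp(k + h)]) / 2 for k in range(n)]

# A feature moving from position 2 to position 4 is predicted at 3:
middle = side_information([0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0])
```

The better this decoder-side prediction, the fewer Wyner-Ziv bits are needed to correct it, which is why side information quality directly drives compression performance.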

Deligiannis, Nikos — Vrije Universiteit Brussel


Stereoscopic Depth Map Estimation and Coding Techniques for Multiview Video Systems

The dissertation deals with the problems of stereoscopic depth estimation and coding in multiview video systems, which are vital for the development of next-generation three-dimensional television. The depth estimation algorithms known from the literature are discussed along with their theoretical foundations. The problem of estimating depth maps of high quality, expressed in terms of accuracy, precision and temporal consistency, has been stated. Next, original solutions have been proposed. The author has proposed a novel, theoretically founded approach to depth estimation which employs the maximum a posteriori probability (MAP) rule for modelling the cost function used in optimization algorithms. The proposal has been presented along with a method for estimating the parameters of such a model. To attain that, an analysis of the noise present in multiview video and a study of the inter-view correlation of corresponding picture samples have been ...

Stankiewicz, Olgierd — Poznan University of Technology


MPEG-II Video Coding for Noisy Channels

This thesis considers the performance of MPEG-II compressed video when transmitted over noisy channels, a subject of relevance to digital terrestrial television, video communication and mobile digital video. Results of bit-sensitivity and resynchronisation-sensitivity measurements are presented, and techniques are proposed for substantially improving the resilience of MPEG-II to transmission errors without adding any extra redundancy to the bitstream. Errors in variable-length encoded data are found to cause the greatest artifacts, as such errors can cause loss of bitstream synchronisation. The concept of a ‘black box transcoder’ is developed, where MPEG-II is losslessly transcoded into a different structure for transmission. Bitstream resynchronisation is achieved using a technique known as error-resilient entropy coding (EREC). The error-resilience of differentially coded information is then improved by replacing the standard 1D-DPCM with a more resilient hierarchical ...
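
The slot-packing idea behind EREC can be sketched as follows: variable-length coded blocks are redistributed into equal-length slots so that every block starts at a known position, letting the decoder resynchronise after errors. The block lengths and the simplifying assumption that the total bit count divides evenly are illustrative; real EREC handles the general case with a pseudo-random offset sequence.

```python
# Simplified EREC packing: each block first fills its own slot; leftover
# bits spill into other slots searched at increasing offsets.

def erec_pack(block_lengths):
    n = len(block_lengths)
    total = sum(block_lengths)
    assert total % n == 0, "toy sketch assumes total bits divide evenly"
    slot = total // n
    fill = [0] * n                      # bits already placed in each slot
    left = list(block_lengths)          # bits of each block still unplaced
    placement = [[] for _ in range(n)]  # (slot, bits) records per block
    for offset in range(n):             # search slots at growing offset
        for b in range(n):
            s = (b + offset) % n
            space = slot - fill[s]
            if left[b] > 0 and space > 0:
                put = min(left[b], space)
                placement[b].append((s, put))
                fill[s] += put
                left[b] -= put
    assert all(v == 0 for v in left)    # everything fits by construction
    return placement

# Four blocks of very different lengths packed into 4 slots of 6 bits:
layout = erec_pack([2, 10, 4, 8])
```

Because each block begins in its own fixed-position slot, a bit error corrupting one variable-length block no longer desynchronises all the blocks that follow it.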

Swan, Robert — University of Cambridge


Optimization of Coding of AR Sources for Transmission Across Channels with Loss

Source coding concerns the representation of the information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion, or at the lowest possible distortion for a specified bit rate. Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel, given the properties of that channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can be modeled as auto-regressive processes. The coding of AR sources lends itself to linear predictive coding. We address the problem of joint source/channel coding in the setting of linear predictive coding of AR sources. We consider channels in which individual ...
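
Linear predictive coding of an AR source can be sketched for the AR(1) case; the coefficient, quantizer step and sample values below are illustrative, not the thesis setup. The encoder transmits only quantized prediction residuals, and both ends run the same predictor on the reconstructed signal so their states never drift apart.

```python
# Predictive coding of an AR(1) source: code the innovation, not the
# sample, and keep encoder and decoder predictor states in lockstep.

A = 0.9      # AR(1) / predictor coefficient, assumed known at both ends
STEP = 0.1   # uniform residual quantizer step size

def lpc_encode(samples):
    pred, indices = 0.0, []
    for x in samples:
        idx = round((x - A * pred) / STEP)   # quantize the innovation
        indices.append(idx)
        pred = A * pred + idx * STEP         # mirror the decoder's state
    return indices

def lpc_decode(indices):
    pred, out = 0.0, []
    for idx in indices:
        pred = A * pred + idx * STEP         # same recursion as encoder
        out.append(pred)
    return out

src = [1.0, 0.93, 0.81, 0.75]
rec = lpc_decode(lpc_encode(src))
# Per-sample reconstruction error is bounded by half a quantizer step.
```

The joint source/channel problem the thesis addresses arises precisely because this recursion is fragile: a lost or corrupted residual index desynchronises the decoder's predictor state from the encoder's.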

Arildsen, Thomas — Aalborg University


Robust Watermarking Techniques for Scalable Coded Image and Video

In scalable image/video coding, high-resolution content is encoded at the highest visual quality and the bit-streams are adapted to cater for various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content protection data such as watermarks, and are considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. Spread-spectrum, and particularly wavelet-based, image watermarking schemes often provide better robustness to compression attacks due to their multi-resolution decomposition, and were hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
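
The spread-spectrum principle mentioned above can be sketched in a generic transform-coefficient domain; the thesis embeds in wavelet subbands, whereas here the "coefficients" are just a flat list and the embedding strength is an illustrative choice. A keyed pseudo-noise pattern is added to the coefficients and later detected by correlation.

```python
# Bare-bones additive spread-spectrum watermarking with a keyed
# +/-1 pseudo-noise pattern and a normalized-correlation detector.
import random

ALPHA = 0.5                       # embedding strength (illustrative)

def pn_sequence(n, key):
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(coeffs, key):
    pn = pn_sequence(len(coeffs), key)
    return [c + ALPHA * p for c, p in zip(coeffs, pn)]

def detect(coeffs, key):
    """Normalized correlation with the keyed pattern."""
    pn = pn_sequence(len(coeffs), key)
    return sum(c * p for c, p in zip(coeffs, pn)) / len(coeffs)

coeffs = [0.0] * 1000             # stand-in for a wavelet subband
marked = embed(coeffs, key=42)
present = detect(marked, key=42)  # correlation ~ ALPHA: mark detected
absent = detect(marked, key=7)    # wrong key: correlation ~ 0
```

Quality and resolution scaling attenuate or discard coefficients, which lowers this correlation; the thesis's robustness analysis is essentially about keeping it detectable under such adaptations.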

Bhowmik, Deepayan — University of Sheffield


Limited Feedback Transceiver Design for Downlink MIMO OFDM Cellular Networks

Feedback in wireless communications has a long-standing and successful history, facilitating robust and spectrally efficient transmission over the uncertain wireless medium. Since the application of multiple antennas at both ends of the communication link, enabling multiple-input multiple-output (MIMO) transmission, the importance of feedback information for achieving the highest performance has become even more pronounced. Especially when multiple antennas are employed by the transmitter to handle the interference between multiple users, channel state information (CSI) is a fundamental prerequisite. The corresponding multi-user MIMO, interference alignment and coordination techniques are considered a central part of future cellular networks to cope with growing inter-cell interference, caused by the unavoidable densification of base stations to support the exponentially increasing demand on network capacity. However, this vision can only be implemented with efficient feedback algorithms that provide accurate CSI at the transmitter without ...
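
The limited-feedback mechanism can be sketched as codebook-based CSI quantization: the receiver compares its channel vector against a codebook shared with the transmitter and feeds back only the index of the best-aligned codeword. The random codebook, dimensions and channel vector below are purely illustrative, not a standardized codebook.

```python
# Limited feedback: quantize the channel direction to B bits by
# picking the best-aligned unit-norm codeword from a shared codebook.
import random

def make_codebook(size, dim, seed=0):
    rng = random.Random(seed)
    words = []
    for _ in range(size):
        w = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(dim)]
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        words.append([x / norm for x in w])   # unit-norm beamformers
    return words

def feedback_index(h, codebook):
    """Index of the codeword best aligned with channel vector h."""
    def align(w):
        return abs(sum(hi.conjugate() * wi for hi, wi in zip(h, w)))
    return max(range(len(codebook)), key=lambda i: align(codebook[i]))

B = 3                                   # feedback budget in bits
cb = make_codebook(2 ** B, dim=2)
h = [complex(1.0, 0.2), complex(-0.3, 0.8)]
idx = feedback_index(h, cb)             # only these B bits are fed back
precoder = cb[idx]                      # transmitter beamforms with it
```

The design tension the thesis studies follows directly: larger codebooks mean more accurate CSI but a heavier feedback load, and multi-user interference handling is far more sensitive to this quantization error than single-user transmission.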

Schwarz, Stefan — Vienna University of Technology


On-board Processing for an Infrared Observatory

During the past two decades, image compression has developed from a mostly academic rate-distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing and brings new aspects to both. In fact, image compression had its beginning in an American space program, for efficient data storage. The goal of this research work is to identify and develop new methods and software tools for space observatories, incorporating compression in space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, for technicians a new field of application and research is opened. For validation of the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...

Belbachir, Ahmed Nabil — Vienna University of Technology
