Nonlinear rate control techniques for constant bit rate MPEG video coders (1997)
MPEG-II Video Coding for Noisy Channels
This thesis considers the performance of MPEG-II compressed video when transmitted over noisy channels, a subject of relevance to digital terrestrial television, video communication and mobile digital video. Results of bit sensitivity and resynchronisation sensitivity measurements are presented, and techniques are proposed for substantially improving the resilience of MPEG-II to transmission errors without adding any extra redundancy to the bitstream. Errors in variable-length encoded data are found to cause the greatest artifacts, as they can cause loss of bitstream synchronisation. The concept of a ‘black box transcoder’ is developed, where MPEG-II is losslessly transcoded into a different structure for transmission. Bitstream resynchronisation is achieved using a technique known as error-resilient entropy coding (EREC). The error-resilience of differentially coded information is then improved by replacing the standard 1D-DPCM with a more resilient hierarchical ...
Swan, Robert — University of Cambridge
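As an illustration of the EREC idea mentioned in the abstract: variable-length coded blocks are packed into equal-capacity slots so that every block starts at a fixed, known position, letting a decoder resynchronise after bit errors. A minimal Python sketch, assuming blocks arrive as bit strings and using a simple linear offset search where the published algorithm specifies a pseudo-random offset sequence:

def erec_pack(blocks):
    """Pack variable-length bit strings into n equal-capacity slots (EREC)."""
    n = len(blocks)
    slot_len = -(-sum(len(b) for b in blocks) // n)   # ceil(total / n)
    slots = [""] * n
    remaining = list(blocks)

    # Stage 0: each block starts at the head of its own slot.
    for i in range(n):
        slots[i] = remaining[i][:slot_len]
        remaining[i] = remaining[i][slot_len:]

    # Later stages: leftover bits search other slots for spare space.
    # A linear offset is used here; EREC uses a pseudo-random sequence.
    for offset in range(1, n):
        for i in range(n):
            if not remaining[i]:
                continue
            j = (i + offset) % n
            space = slot_len - len(slots[j])
            if space > 0:
                slots[j] += remaining[i][:space]
                remaining[i] = remaining[i][space:]
    return slots, slot_len

slots, slot_len = erec_pack(["1010110", "01", "110010011", "0"])
print(slot_len, slots)   # 5 ['10101', '0110', '11001', '00011']

Nineteen input bits are packed into four slots of capacity five; the only overhead is rounding the total up to a multiple of the slot count, so no redundancy is added to the payload itself.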
Exploiting Correlation Noise Modeling in Wyner-Ziv Video Coding
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source correlation at the decoder rather than only at the encoder as in predictive video coding. This new coding paradigm may therefore provide a flexible allocation of complexity between the encoder and the decoder and in-built channel error robustness, features attractive for emerging applications such as low-power video surveillance and visual sensor networks, among others. Although some progress has been made in the last eight years, the rate-distortion performance of WZ video coding is still far from the maximum performance attained with predictive video coding. WZ video coding compression efficiency depends critically on the capability to model the correlation noise between the original information at the encoder and its estimation generated ...
Brites, Catarina — Instituto Superior Tecnico (IST)
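The correlation noise mentioned above is commonly modelled as Laplacian, f(r) = (alpha/2)·exp(-alpha·|r|). A minimal frame-level sketch of how a decoder might fit alpha, under the common assumption that the difference of the two motion-compensated references serves as a proxy for the unavailable original-minus-side-information residual (the names and the frame-level granularity are illustrative, not the thesis's method):

import numpy as np

def laplacian_alpha(residual):
    """Fit f(r) = (alpha/2) * exp(-alpha * |r|) to residual samples."""
    sigma2 = np.mean(np.asarray(residual, dtype=float) ** 2)
    return np.sqrt(2.0 / sigma2)

# The decoder never sees the original frame, so the difference of the two
# motion-compensated references is used here as a proxy for X - Y.
rng = np.random.default_rng(0)
ref_fwd = rng.integers(0, 256, (64, 64))
ref_bwd = rng.integers(0, 256, (64, 64))
proxy = (ref_fwd - ref_bwd) / 2.0
print("estimated alpha:", laplacian_alpha(proxy))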
Lossless and nearly lossless digital video coding
In lossless coding, compression and decompression of source data result in the exact recovery of the individual elements of the original source data. Lossless image/video coding is necessary in applications where no loss of pixel values is tolerable. Examples are medical imaging, remote sensing, image/video archives, and studio applications where tandem- and trans-coding are used in editing, which can lead to accumulating errors. Nearly-lossless coding is used in applications where a small error, defined as a maximum error or as a root mean square (rms) error, is tolerable. In lossless embedded coding, a losslessly coded bit stream can be decoded at any bit rate lower than the lossless bit rate. In this thesis, research on embedded lossless video coding based on a motion-compensated framework, similar to that of MPEG-2, is presented. Transforms that map integers into ...
Abhayaratne, Charith — University of Bath
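The truncated last sentence points at integer-to-integer transforms, the key to lossless coding in a transform framework. A hedged sketch of one well-known instance, the reversible 5/3 lifting wavelet used in lossless JPEG 2000 (1-D, even-length signals, simplified boundary handling); this illustrates the idea, not the thesis's particular transforms:

def lift_53_forward(x):
    s, d = x[0::2], x[1::2]                     # even / odd samples
    d = [d[i] - (s[i] + s[min(i + 1, len(s) - 1)]) // 2
         for i in range(len(d))]                # predict step
    s = [s[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(len(s))]                # update step
    return s, d

def lift_53_inverse(s, d):
    s = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(len(s))]
    d = [d[i] + (s[i] + s[min(i + 1, len(s) - 1)]) // 2 for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x

x = [10, 12, 9, 7, 7, 8, 15, 14]
s, d = lift_53_forward(x)
assert lift_53_inverse(s, d) == x               # perfectly reversible

Because both steps round with integer floor division and the inverse undoes them in reverse order with the same expressions, the round-trip is exact, which is what makes lossless transform coding possible.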
Novel Methods in H.264/AVC (Inter Prediction, Data Hiding, Bit Rate Transcoding)
H.264 Advanced Video Coding has become the dominant video coding standard in the market within a few years after the first version of the standard was completed by the ISO/IEC MPEG and the ITU-T VCEG groups in May 2003. That happened mainly due to the great coding efficiency of H.264. Compared to MPEG-2, the previous dominant standard, the H.264 compression ratio is about twice as high for the same video quality. That makes H.264 ideal for numerous applications, such as video broadcasting, video streaming and video conferencing. However, the H.264 efficiency is achieved at the expense of the codec's complexity. H.264 complexity is about four times that of MPEG-2. As a consequence, many video coding issues which have been addressed in previous standards need to be reconsidered. For example, the H.264 encoding of a video in real time ...
Kapotas, Spyridon — Hellenic Open University
Stereoscopic depth map estimation and coding techniques for multiview video systems
The dissertation deals with the problems of stereoscopic depth estimation and coding in multiview video systems, which are vital for the development of next-generation three-dimensional television. Depth estimation algorithms known from the literature are discussed, along with their theoretical foundations. The problem of estimating high-quality depth maps, expressed in terms of accuracy, precision and temporal consistency, is stated. Next, original solutions have been proposed. The author has proposed a novel, theoretically founded approach to depth estimation which employs the maximum a posteriori probability (MAP) rule to model the cost function used in optimization algorithms. The proposal has been presented along with a method for estimating the parameters of such a model. In order to attain that, an analysis of the noise existing in multiview video and a study of inter-view correlation of corresponding samples of pictures have been ...
Stankiewicz, Olgierd — Poznan University of Technology
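To make the MAP formulation concrete: maximising P(depth | images) is equivalent to minimising a cost that adds a data term (negative log-likelihood) to a smoothness prior (negative log-prior). A toy Python sketch with per-pixel absolute differences and a greedy scanline pass; the greedy optimiser and the synthetic shift are illustrative stand-ins for the thesis's model and its global optimisation:

import numpy as np

def map_scanline_disparity(left, right, d_max, lam=2.0):
    """Greedy per-pixel MAP estimate: data term + smoothness prior."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        prev_d = 0
        for x in range(w):
            costs = []
            for d in range(min(d_max, x) + 1):
                data = abs(int(left[y, x]) - int(right[y, x - d]))  # -log P(I|d)
                prior = lam * abs(d - prev_d)                       # -log P(d)
                costs.append(data + prior)
            prev_d = int(np.argmin(costs))
            disp[y, x] = prev_d
    return disp

rng = np.random.default_rng(1)
left = rng.integers(0, 256, (16, 32))
right = np.roll(left, -3, axis=1)     # synthetic scene shifted by 3 pixels
print(map_scanline_disparity(left, right, d_max=8)[4, 8:16])   # mostly 3s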
Hierarchical Lattice Vector Quantisation of Wavelet Transformed Images
The objectives of the research were to develop embedded and non-embedded lossy coding algorithms for images based on lattice vector quantisation and the discrete wavelet transform. We also wanted to develop context-based entropy coding methods (as opposed to simple first-order entropy coding). The main objectives can therefore be summarised as follows: (1) To develop algorithms for intra- and inter-band formed vectors (vectors with coefficients from the same sub-band or across different sub-bands) which compare favourably with current high-performance wavelet-based coders both in terms of rate/distortion performance of the decoded image and also subjective quality; (2) To develop new context-based coding methods (based on vector quantisation). The alternative algorithms we have developed fall into two categories: (a) Entropy-coded and binary uncoded successive approximation lattice vector quantisation (SA-LVQ-E and SA-LVQ-B) based on quantising vectors formed intra-band. This ...
Vij, Madhav — University of Cambridge, Department of Engineering, Signal Processing Group
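The core primitive of lattice vector quantisation is finding the nearest lattice point to an input vector. A minimal sketch for the D_n lattice (integer vectors with even coordinate sum) using the classic Conway-Sloane rounding rule; this shows the building block only, not the SA-LVQ coders developed in the thesis:

import numpy as np

def nearest_dn(x):
    """Closest point of the D_n lattice (even coordinate sum) to x."""
    x = np.asarray(x, dtype=float)
    f = np.floor(x + 0.5)              # round each coordinate (ties up)
    if int(f.sum()) % 2 != 0:
        # Parity violated: re-round the worst coordinate the other way.
        err = x - f
        k = int(np.argmax(np.abs(err)))
        f[k] += 1.0 if err[k] > 0 else -1.0
    return f

print(nearest_dn([0.9, 0.4, -1.2, 0.6]))   # [ 1.  1. -1.  1.]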
Efficient representation, generation and compression of digital holograms
Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display system becomes indistinguishable from that of a real one. High-quality holograms require large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic video, the data rates needed for transmitting and encoding raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
Blinder, David — Vrije Universiteit Brussel
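To give a feel for the data volumes involved, here is a hedged sketch of the most basic hologram generation method: summing spherical point-source wavefields over a sampled plane. All parameters (wavelength, pixel pitch, resolution, emitter positions) are illustrative; display-grade holograms push the same computation to billions of pixels:

import numpy as np

wavelength = 633e-9                 # HeNe red, metres
k = 2 * np.pi / wavelength
pitch = 8e-6                        # pixel pitch of the hologram plane
n = 512
coords = (np.arange(n) - n / 2) * pitch
xx, yy = np.meshgrid(coords, coords)

points = [(0.0, 0.0, 0.05), (1e-4, -2e-4, 0.06)]   # (x, y, z) emitters

field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((xx - px) ** 2 + (yy - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r          # spherical wave per point

hologram = np.angle(field)          # phase-only hologram in [-pi, pi]
print(hologram.shape)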
Advances in Perceptual Stereo Audio Coding Using Linear Prediction Techniques
A wide range of techniques for coding single-channel speech and audio signals has been developed over the last few decades. In addition to pure redundancy reduction, sophisticated source and receiver models have been considered for reducing the bit-rate. Traditionally, speech and audio coders are based on different principles and thus each of them offers certain advantages. With the advent of high-capacity channels, networks, and storage systems, the bit-rate versus quality compromise will no longer be the major issue; instead, attributes like low delay, scalability, computational complexity, and error concealment in packet-oriented networks are expected to be the major selling factors. Typical audio coders such as MP3 and AAC are based on subband or transform coding techniques that are not easily reconcilable with a low-delay requirement. The reasons for their inherently longer delay are the relatively long band-splitting filters ...
Biswas, Arijit — Technische Universiteit Eindhoven
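Linear prediction, the common thread of the thesis, boils down to fitting coefficients that predict each sample from its predecessors. A minimal sketch of the autocorrelation method solved by the Levinson-Durbin recursion (a textbook formulation, not the thesis's stereo extensions):

import numpy as np

def lpc(signal, order):
    """Levinson-Durbin solution of the autocorrelation normal equations."""
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err  # reflection coeff.
        a[1:m] += k * a[m - 1:0:-1]
        a[m] = k
        err *= 1.0 - k * k
    return a, err

t = np.arange(2048)
tone = np.sin(0.3 * t) + 0.01 * np.random.default_rng(2).standard_normal(2048)
a, err = lpc(tone, order=2)
print(a)   # close to [1, -2*cos(0.3), 1] for a near-pure sinusoid

For a near-sinusoid of frequency omega the optimal second-order predictor is x[n] = 2*cos(omega)*x[n-1] - x[n-2], which the printed coefficients make easy to check.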
Robust Watermarking Techniques for Scalable Coded Image and Video
In scalable image/video coding, high-resolution content is encoded to the highest visual quality and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content protection data such as watermarks, and are considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against various content adaptation attacks, such as JPEG 2000 for image and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. The spread-spectrum domain, particularly wavelet-based image watermarking schemes, often provides better robustness to compression attacks due to its multi-resolution decomposition, and was hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
Bhowmik, Deepayan — University of Sheffield
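As background for the spread-spectrum, wavelet-based schemes the thesis analyses, here is a hedged sketch of additive embedding of a pseudo-random sequence into one detail subband, with a linear-correlation detector. PyWavelets supplies the transform; the subband choice, strength alpha and threshold are illustrative assumptions:

import numpy as np
import pywt

def embed(image, key, alpha=5.0):
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), 'haar')
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=cH.shape)        # spread sequence
    return pywt.idwt2((cA, (cH + alpha * w, cV, cD)), 'haar'), w

def detect(image, w, alpha=5.0):
    _, (cH, _, _) = pywt.dwt2(np.asarray(image, dtype=float), 'haar')
    return float(np.mean(cH * w)) > alpha / 2         # linear correlator

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (128, 128))
marked, w = embed(img, key=42)
print(detect(marked, w), detect(img, w))              # typically: True False

Embedding in a mid-frequency subband trades visibility against robustness; the multi-resolution structure is what lets such schemes survive resolution scaling, the attack class studied in the thesis.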
Noise or interference is often assumed to be a random process. Conventional linear filtering, control or prediction techniques are used to cancel or reduce the noise. However, some noise processes have been shown to be nonlinear and deterministic. These nonlinear deterministic noise processes appear to be random when analysed with second order statistics. As nonlinear processes are widespread in nature it may be beneficial to exploit the coherence of the nonlinear deterministic noise with nonlinear filtering techniques. The nonlinear deterministic noise processes used in this thesis are generated from nonlinear difference or differential equations which are derived from real world scenarios. Analysis tools from the theory of nonlinear dynamics are used to determine an appropriate sampling rate of the nonlinear deterministic noise processes and their embedding dimensions. Nonlinear models, such as the Volterra series filter and the radial basis function ...
Strauch, Paul E. — University of Edinburgh
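A second-order Volterra filter, one of the nonlinear models named in the abstract, augments a linear predictor with all pairwise products of delayed inputs. A minimal least-squares sketch on a chaotic logistic-map signal (an illustrative deterministic "noise" process, not the thesis's data):

import numpy as np

def volterra_features(x, memory):
    """Constant + linear + all pairwise-product (quadratic) terms."""
    rows = []
    for n in range(memory, len(x)):
        past = x[n - memory:n][::-1]                   # x[n-1] ... x[n-m]
        quad = np.outer(past, past)[np.triu_indices(memory)]
        rows.append(np.concatenate(([1.0], past, quad)))
    return np.array(rows)

def fit_predict(x, memory=3):
    X = volterra_features(x, memory)
    y = x[memory:]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)          # Volterra kernels
    return X @ h, y

x = np.empty(500)
x[0] = 0.3
for n in range(1, 500):
    x[n] = 3.9 * x[n - 1] * (1.0 - x[n - 1])           # chaotic logistic map
pred, y = fit_predict(x)
print("prediction MSE:", np.mean((pred - y) ** 2))     # near zero

The logistic map is itself quadratic, so the fitted model predicts it almost exactly; broadband stochastic noise would not be predictable this way, which is precisely the distinction the thesis exploits.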
Robust and multiresolution video delivery: from H.26x to matching pursuit based technologies
With the joint development of networking and digital coding technologies, multimedia and more particularly video services are clearly becoming one of the major consumers of the new information networks. The rapid growth of the Internet and the computer industry, however, results in a very heterogeneous and commonly overloaded infrastructure. Video service providers nevertheless have to offer their clients the best possible quality according to their respective capabilities and communication channel status. The Quality of Service is not only influenced by compression artifacts, but also by unavoidable packet losses. Hence, the packet video stream clearly has to fulfil possibly contradictory requirements, namely coding efficiency and robustness to data loss. The first contribution of this thesis is the complete modeling of the video Quality of Service (QoS) in standard and more particularly MPEG-2 applications. The performance of Forward Error Control (FEC) ...
Frossard, Pascal — Swiss Federal Institute of Technology
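The packet-level FEC arithmetic underlying such a QoS model is short: with a systematic (n, k) erasure code, a block of k media packets survives whenever at most n − k of its n packets are lost. A sketch for a memoryless loss channel (the code parameters and loss rates are illustrative, not figures from the thesis):

from math import comb

def block_survival(n, k, p_loss):
    """P(at most n - k losses out of n independent packets)."""
    return sum(comb(n, i) * p_loss**i * (1 - p_loss)**(n - i)
               for i in range(n - k + 1))

for p in (0.01, 0.05, 0.10):
    print(f"p={p:.2f}  no FEC: {(1 - p)**10:.4f}  "
          f"RS(12,10): {block_survival(12, 10, p):.4f}")

The trade-off the thesis models is visible here: the two parity packets buy a large gain in survival probability at the cost of 20% extra rate, and the right balance depends on the channel state.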
Error Resilience and Concealment Techniques for High Efficiency Video Coding
This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness reveals that HEVC has weak protection against network losses, with a significant impact on video quality. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising the compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the ...
João Filipe Monteiro Carreira — Loughborough University London
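For context, decoder-side temporal concealment, the baseline such work builds on, can be sketched in a few lines: a lost block is replaced by the co-located block or by a candidate borrowed motion vector, selected by a boundary-matching cost against intact neighbouring pixels. Block size, candidate set and cost function are illustrative assumptions:

import numpy as np

def conceal_block(cur, ref, y, x, bs, cand_mvs):
    """Pick the reference patch whose top row best matches intact pixels."""
    best, best_cost = None, np.inf
    for dy, dx in [(0, 0)] + cand_mvs:                 # (0, 0) = frame copy
        ry, rx = y + dy, x + dx
        if not (0 <= ry <= ref.shape[0] - bs and 0 <= rx <= ref.shape[1] - bs):
            continue
        patch = ref[ry:ry + bs, rx:rx + bs]
        # Boundary matching against the intact row above the lost block.
        cost = np.abs(patch[0].astype(int) - cur[y - 1, x:x + bs]).sum()
        if cost < best_cost:
            best, best_cost = patch, cost
    return best

ref = np.add.outer(np.arange(64), np.arange(64))   # smooth synthetic frame
cur = ref.copy()                                   # static scene
cur[16:24, 16:24] = 0                              # the block is lost
cur[16:24, 16:24] = conceal_block(cur, ref, 16, 16, 8, [(-2, 0), (0, 3)])
print(np.array_equal(cur, ref))                    # True: frame copy wins here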
Digital Forensic Techniques for Splicing Detection in Multimedia Contents
Visual and audio contents have always played a key role in communications because of their immediacy and presumed objectivity. This has become even more true in the digital era, and today it is common to have multimedia contents stand as proof of events. Digital contents, however, are also very easy to manipulate, thus calling for analysis methods devoted to uncovering their processing history. Multimedia forensics is the science that tries to answer questions about the past of a given image, audio or video file, questions like “which was the recording device?” or “is the content authentic?”. In particular, authenticity assessment is a crucial task in many contexts, and it usually consists in determining whether the investigated object has been artificially created by splicing together different contents. In this thesis we address the problem of splicing detection in the three main media: image, ...
Fontani, Marco — Dept. of Information Engineering and Mathematics, University of Siena
Dynamic Scheme Selection in Image Coding
This thesis deals with the coding of images with multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication takes an ever bigger place in our lives, and the number of transmitted images is also increasing every day. Research on image compression therefore remains an active area. However, the current trend is to add several functionalities to the compression scheme, such as progressiveness for more comfortable browsing of web sites or databases. Classical image coding schemes have a rigid structure. They usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second-generation schemes use the concept of objects in an image, and introduce a model of the human visual system into the design of the coding scheme. Dynamic coding schemes, as their name tells us, make ...
Fleury, Pascal — Swiss Federal Institute of Technology
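The selection step in a dynamic coding scheme is typically a Lagrangian rate-distortion decision: encode each region with every candidate coder and keep the one minimising D + λR. A toy sketch with two stand-in "coders" (the candidates, their cost models and λ are illustrative, not the thesis's scheme set):

import numpy as np

def rd_cost(distortion, rate, lam):
    return distortion + lam * rate

def select_scheme(block, coders, lam=0.1):
    best = min(((name, *code(block)) for name, code in coders.items()),
               key=lambda t: rd_cost(t[1], t[2], lam))
    return best[0]

coders = {
    # Each toy coder returns (distortion, rate_in_bits) for a block.
    "dc_only": lambda b: (float(np.var(b)) * b.size, 8.0),
    "raw":     lambda b: (0.0, 8.0 * b.size),
}
flat = np.full((8, 8), 120)
busy = np.random.default_rng(5).integers(0, 256, (8, 8))
print(select_scheme(flat, coders), select_scheme(busy, coders))  # dc_only raw

The flat block is cheapest as a single DC value, while the busy block justifies spending more rate, which is exactly the content-adaptive behaviour a dynamic scheme aims for.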
Geometric Distortion in Image and Video Watermarking: Robustness and Perceptual Quality Impact
The main focus of this thesis is the problem of geometric distortion in image and video watermarking. We discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, the thesis also discusses the challenges of watermarking data compressed at low bit-rates. The main contributions of this thesis are: a watermarking algorithm suitable for low bit-rate video; two different approaches to deal with the watermark desynchronization problem; and a novel approach to quantify the perceptual quality impact of geometric distortion.
Setyawan, Iwan — Delft University of Technology