Watermark-based error concealment algorithms for low bit rate video communications

In this work, a novel set of robust watermark-based error concealment (WEC) algorithms is proposed. Watermarking is used to introduce redundancy into the transmitted data with little or no increase in its bit rate during transmission. The proposed algorithms involve generating a low-resolution version of a video frame and seamlessly embedding it as a watermark in the frame itself during encoding. At the receiver, the watermark is extracted from the reconstructed frame and the lost information is recovered using the extracted watermark signal, thus enhancing the frame's perceptual quality. Three DCT-based spread spectrum watermark embedding techniques are presented in this work. The first technique uses multiplicative Gaussian pseudo-noise with a pre-defined spreading gain and fixed chip rate. The second is its adaptively scaled version, and the third uses informed watermarking. Two versions of the low resolution reference, ...

Adsumilli, Chowdary — University of California, Santa Barbara
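
A minimal sketch of the kind of multiplicative spread-spectrum DCT embedding this abstract describes, assuming 8x8 blocks; the coefficient positions, strength `alpha` and seed are illustrative placeholders, not the thesis's actual configuration:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative mid-frequency positions in an 8x8 DCT block (an assumption,
# not the thesis's actual coefficient selection).
POSITIONS = [(1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (2, 3), (3, 2)]

def embed_block(block, chips, alpha=0.05):
    """Multiplicatively embed Gaussian PN chips into mid-frequency DCT coefficients."""
    C = dctn(block.astype(float), norm='ortho')
    for (u, v), w in zip(POSITIONS, chips):
        C[u, v] *= 1.0 + alpha * w          # multiplicative spread-spectrum rule
    return idctn(C, norm='ortho')

rng = np.random.default_rng(7)              # shared seed acts as the watermark key
chips = rng.standard_normal(len(POSITIONS)) # Gaussian pseudo-noise chips
marked = embed_block(rng.uniform(0, 255, (8, 8)), chips)
```

At the receiver, regenerating the same pseudo-noise from the shared key lets a correlation detector recover the embedded low-resolution reference for concealment.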


Geometric Distortion in Image and Video Watermarking: Robustness and Perceptual Quality Impact

The main focus of this thesis is the problem of geometric distortion in image and video watermarking. We discuss the two aspects of the geometric distortion problem, namely watermark desynchronization and perceptual quality assessment. Furthermore, this thesis also discusses the challenges of watermarking data compressed at low bit rates. The main contributions of this thesis are: a watermarking algorithm suitable for low bit-rate video has been proposed; two different approaches have been proposed to deal with the watermark desynchronization problem; and a novel approach has been proposed to quantify the perceptual quality impact of geometric distortion.

Setyawan, Iwan — Delft University of Technology


ROBUST WATERMARKING TECHNIQUES FOR SCALABLE CODED IMAGE AND VIDEO

In scalable image/video coding, high resolution content is encoded at the highest visual quality and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame rate scaling, may also affect content protection data such as watermarks, and are therefore considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and the improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. Spread spectrum, and particularly wavelet-based, image watermarking schemes often provide better robustness to compression attacks due to their multi-resolution decomposition, and were hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...

Bhowmik, Deepayan — University of Sheffield
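
A sketch of additive spread-spectrum embedding in a wavelet subband, in the spirit of the wavelet-based schemes surveyed above; the choice of subband, wavelet and strength are illustrative assumptions:

```python
import numpy as np
import pywt

def embed_wavelet(image, wm_seed=42, alpha=2.0, wavelet='haar'):
    """Embed a bipolar PN watermark in one detail subband of a single-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), wavelet)
    rng = np.random.default_rng(wm_seed)
    pn = rng.choice([-1.0, 1.0], size=HL.shape)     # bipolar PN watermark
    HL_marked = HL + alpha * pn                     # additive rule in a detail band
    return pywt.idwt2((LL, (LH, HL_marked, HH)), wavelet)
```

The multi-resolution decomposition is what aligns such schemes with scalable codecs: resolution or quality scaling discards whole subbands, so a watermark placed in the surviving bands can persist through adaptation.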


Vision models and quality metrics for image processing applications

Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...

Winkler, Stefan — Swiss Federal Institute of Technology


Active and Passive Approaches for Image Authentication

The generation and manipulation of digital images is made simple by widely available digital cameras and image processing software. As a consequence, we can no longer take the authenticity of a digital image for granted. This thesis investigates the problem of protecting the trustworthiness of digital images. Image authentication aims to verify the authenticity of a digital image; general solutions are based on digital signatures or watermarking. Many studies have been conducted on image authentication, but thus far no solution has been robust enough to the transmission errors that arise when images are sent over lossy channels. On the other hand, digital image forensics is an emerging topic for passively assessing image authenticity, which works in the absence of any digital watermark or signature. This thesis focuses on how to assess the authenticity of images when ...

Ye, Shuiming — National University of Singapore


Image Quality Statistics and their use in Steganalysis and Compression

We comprehensively categorize image quality measures, extend measures defined for gray scale images to the multispectral case, and propose novel image quality measures. The statistical behavior of the measures and their sensitivity to various kinds of distortions, data hiding and coding artifacts are investigated via analysis of variance techniques. Their similarities and differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to distortions and coding artifacts are pointed out. We present techniques for steganalysis of images that have potentially been subjected to watermarking or steganographic algorithms. Our hypothesis is that watermarking and steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regression analysis. The steganalyzer is built using multivariate regression on the selected quality metrics. In ...

Avcibas, Ismail — Bogazici University
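
A toy illustration of the regression-based steganalyzer idea: compute a few quality features between each image and a low-pass-filtered version of itself, then fit a linear regressor separating clean from marked images. The specific features and the blurred self-reference are simplifying assumptions, not the thesis's selected metrics:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quality_features(img):
    """Quality-style features of an image against a blurred self-reference."""
    ref = gaussian_filter(img.astype(float), sigma=1.0)
    err = img - ref
    mse = np.mean(err ** 2)
    mad = np.mean(np.abs(err))
    corr = np.corrcoef(img.ravel(), ref.ravel())[0, 1]
    return np.array([mse, mad, corr])

def fit_steganalyzer(images, labels):
    """labels: 0 = clean, 1 = watermarked/stego. Returns regression weights."""
    X = np.array([quality_features(im) for im in images])
    X = np.hstack([X, np.ones((len(X), 1))])        # bias term
    w, *_ = np.linalg.lstsq(X, np.array(labels, float), rcond=None)
    return w                                        # score above 0.5 -> flag as stego
```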


Modeling Perceived Quality for Imaging Applications

People of all generations are making more and more use of digital imaging systems in their daily lives. The image content rendered by these systems differs widely in perceived quality depending on the system and its application. To be able to optimize the viewing experience, understanding and modeling perceived image quality is essential. Research on modeling image quality in a full-reference framework --- where the original content can be used as a reference --- is well established in the literature. In many current applications, however, perceived image quality needs to be modeled in a no-reference framework in real time. As a consequence, the model needs to quantitatively predict the perceived quality of a degraded image without being able to compare it to its original version, and has to achieve this with limited computational complexity in order ...

Liu, Hantao — Delft University of Technology


WATERMARKING FOR 3D REPRESENTATIONS

In this thesis, a number of novel watermarking techniques for different 3D representations are presented. A novel watermarking method is proposed for mono-view video, which can be interpreted as the basic implicit representation of 3D scenes. The proposed method solves the common flickering problem of existing video watermarking schemes by adjusting the watermark strength with respect to the temporal contrast thresholds of the human visual system (HVS), which define the maximum invisible distortions in the temporal direction. The experimental results indicate that the proposed method gives better results in both objective and subjective measures compared to recognized methods in the literature. The watermarking techniques for the geometry and image based representations of 3D scenes, denoted as 3D watermarking, are examined and classified into three groups, namely 3D-3D, 3D-2D and 2D-2D watermarking, in which the pair of symbols ...

Koz, Alper — Middle East Technical University, Department of Electrical and Electronics Engineering
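
A sketch of the flicker-reduction idea described above: modulate the watermark strength by local temporal activity so the embedded distortion stays under a temporal visibility threshold. The linear threshold mapping and its constants are illustrative placeholders for the HVS temporal contrast model used in the thesis:

```python
import numpy as np

def temporal_strength(prev_frame, cur_frame, base_alpha=1.5, k=0.05):
    """Per-pixel embedding strength scaled by temporal change between frames."""
    motion = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    # More temporal change -> higher invisibility threshold -> stronger mark;
    # in static regions the strength stays near base_alpha, limiting flicker.
    return base_alpha * (1.0 + k * motion)

def embed(prev_frame, cur_frame, pn):
    """Additive mark with temporally adaptive, per-pixel strength."""
    alpha = temporal_strength(prev_frame, cur_frame)
    return cur_frame + alpha * pn
```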


Techniques for improving the performance of distributed video coding

Distributed Video Coding (DVC) is a recently proposed paradigm in video communication which fits emerging applications such as wireless video surveillance, multimedia sensor networks, wireless PC cameras, and mobile camera phones well. These applications require low-complexity encoding, while possibly affording high-complexity decoding. DVC presents several advantages: first, the complexity can be distributed between the encoder and the decoder; second, DVC is robust to errors, since it uses a channel code. In DVC, Side Information (SI) is estimated at the decoder using the available decoded frames, and used for the decoding and reconstruction of other frames. In this Ph.D. thesis, we propose new techniques to improve the quality of the SI. First, successive refinement of the SI is performed after each decoded DCT band, using a Partially Decoded WZF (PDWZF), along with the ...

Abou-Elailah, Abdalbassir — Telecom Paristech
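
A compact sketch of decoder-side SI generation by block-based motion-compensated interpolation between two decoded key frames, the usual starting point that such refinement techniques improve on; the block size, search range and linear-motion (half-vector) assumption are illustrative:

```python
import numpy as np

def mc_interpolate(f0, f2, block=8, search=4):
    """Estimate the middle frame between key frames f0 and f2 (2D uint8 arrays)."""
    h, w = f0.shape
    si = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = f2[y:y+block, x:x+block].astype(float)
            best, bv = None, (0, 0)
            for dy in range(-search, search + 1):       # full search in f0
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = f0[yy:yy+block, xx:xx+block].astype(float)
                        cost = np.abs(cand - ref).sum()
                        if best is None or cost < best:
                            best, bv = cost, (dy, dx)
            # Assume linear motion: split the vector at the temporal midpoint.
            dy, dx = bv
            y0 = min(max(y + dy // 2, 0), h - block)
            x0 = min(max(x + dx // 2, 0), w - block)
            y2 = min(max(y - dy // 2, 0), h - block)
            x2 = min(max(x - dx // 2, 0), w - block)
            si[y:y+block, x:x+block] = 0.5 * (
                f0[y0:y0+block, x0:x0+block].astype(float)
                + f2[y2:y2+block, x2:x2+block].astype(float))
    return si
```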


Video Quality Estimation for Mobile Video Streaming

For the provisioning of video streaming services it is essential to deliver a required level of customer satisfaction, given by the perceived video stream quality. It is therefore important to choose the compression parameters as well as the network settings so that they maximize the end-user quality. Thanks to the compression improvements of the newest video coding standard, H.264/AVC, video streaming at low bit and frame rates is possible while preserving perceptual quality. This is especially suitable for video applications in 3G wireless networks. Mobile video streaming is characterized by low resolutions and low bit rates. The commonly used resolutions are Quarter Common Intermediate Format (QCIF, 176x144 pixels) for cell phones, and Common Intermediate Format (CIF, 352x288 pixels) and Standard Interchange Format (SIF or QVGA, 320x240 pixels) for data cards and palmtops (PDAs). The mandatory codec for Universal Mobile Telecommunications System (UMTS) streaming ...

Ries, Michal — Vienna University of Technology


Exploiting Correlation Noise Modeling in Wyner-Ziv Video Coding

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems which exploits the source correlation mainly at the decoder, and not only at the encoder as in predictive video coding. Therefore, this new coding paradigm may provide a flexible allocation of complexity between the encoder and the decoder and in-built channel error robustness, interesting features for emerging applications such as low-power video surveillance and visual sensor networks, among others. Although some progress has been made in the last eight years, the rate-distortion performance of WZ video coding is still far from the maximum performance attained with predictive video coding. The WZ video coding compression efficiency depends critically on the capability to model the correlation noise between the original information at the encoder and its estimation generated ...

Brites, Catarina — Instituto Superior Tecnico (IST)
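
The correlation noise between a WZ frame and its side information is commonly modelled as Laplacian in the WZ literature; a minimal sketch of fitting the Laplacian scale parameter from the residual, at frame level and offline (the thesis studies finer granularities and online estimation):

```python
import numpy as np

def laplacian_alpha(wz_frame, side_info):
    """Fit alpha of f(x) = (alpha/2) exp(-alpha|x|) to the SI residual.

    For a zero-mean Laplacian, var = 2 / alpha**2, hence alpha = sqrt(2 / var).
    """
    residual = wz_frame.astype(float) - side_info.astype(float)
    variance = residual.var()
    return np.sqrt(2.0 / (variance + 1e-12))
```

A larger alpha means the side information is a better estimate, so fewer parity bits are requested from the encoder during decoding.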


Nonlinear rate control techniques for constant bit rate MPEG video coders

Digital visual communication has been increasingly adopted as an efficient new medium in a variety of fields: multimedia computers, digital television, telecommunications, etc. The exchange of visual information between remote sites requires that digital video is encoded by compressing the amount of data and transmitting it over specified network connections. The compression and transmission of digital video is an amalgamation of statistical data coding processes which aims at the efficient exchange of visual information without technical barriers due to differing standards, services, media, etc. It draws on a series of different disciplines of digital signal processing, each of which can be applied independently, and involves a few distinct technical principles: rate-distortion theory, prediction techniques and control theory. The MPEG (Moving Picture Experts Group) video compression standard is based on this paradigm and thus contains a variety of different coding ...

Saw, Yoo-Sok — University Of Edinburgh
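
A minimal sketch of the feedback principle behind constant-bit-rate control: track a virtual buffer that fills with coded bits and drains at the channel rate, and map its fullness to the quantization parameter. The linear mapping, gain and limits are illustrative (in the spirit of MPEG Test Model rate control); the thesis's nonlinear techniques replace this simple update:

```python
def update_qp(buffer_bits, buffer_size, qp_min=1, qp_max=31):
    """Map virtual-buffer fullness (0..1) to a quantizer scale."""
    fullness = buffer_bits / buffer_size
    qp = qp_min + fullness * (qp_max - qp_min)   # fuller buffer -> coarser quantization
    return int(round(min(max(qp, qp_min), qp_max)))

def simulate_cbr(frame_bit_counts, target_bits_per_frame, buffer_size=1_000_000):
    """Replay per-frame bit counts through the buffer model; return chosen QPs."""
    buffer_bits, qps = buffer_size // 2, []
    for bits in frame_bit_counts:
        qps.append(update_qp(buffer_bits, buffer_size))
        buffer_bits += bits - target_bits_per_frame  # filled by coder, drained by channel
        buffer_bits = min(max(buffer_bits, 0), buffer_size)
    return qps
```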


Characterization and Quality Evaluation of Geometric Distortions in Images with Application to Digital Watermarking

The work of this thesis can be seen as a first step towards the characterization and quality evaluation of the class of local geometric distortions. A first step in solving the problems posed by geometric attacks is the characterization of the class of perceptually admissible distortions, which requires models that treat the distortions from a mathematical point of view. In this context, the first part of the thesis focuses on modeling local geometric transformations mathematically. Watermarking is not the only field where an analysis of geometric distortion in images is useful: in every application dealing with geometric distortions, an objective quality metric capable of handling this kind of distortion would be of invaluable help. Thus, in the second part of the thesis, two objective quality metrics for ...

D'Angelo, Angela — University of Siena
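
A sketch of applying a smooth local geometric distortion as a per-pixel displacement field, the kind of transformation characterized above; the sinusoidal field is an illustrative example, not the thesis's model:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, amplitude=1.5, period=32.0):
    """Resample a grayscale image through a smooth sinusoidal displacement field."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy = amplitude * np.sin(2 * np.pi * xx / period)   # smooth local displacements
    dx = amplitude * np.sin(2 * np.pi * yy / period)
    return map_coordinates(image.astype(float), [yy + dy, xx + dx],
                           order=1, mode='reflect')
```

Small-amplitude, low-frequency fields like this can be perceptually almost invisible yet still desynchronize a watermark detector, which is why pixel-wise metrics such as PSNR misjudge them.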


Synthetic test patterns and compression artefact distortion metrics for image codecs

This thesis presents a test methodology framework to assess spatial domain compression artefacts produced by image and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of the artefact in the image communication model. The thesis presents objective evaluation, using synthetic test patterns, of the distortions known as artefacts that arise from image and intra-frame coded video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines blockiness, blur and ringing artefacts. These definitions have been augmented with quantitative definitions in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...

Punchihewa, Amal — Massey University, New Zealand
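
A toy blockiness measure in the spirit of the ANSI T1.801 definitions referenced above: compare luminance discontinuities across 8x8 block boundaries with those inside blocks. The grid alignment and normalization are illustrative assumptions:

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of gradient magnitude at block boundaries to gradient inside blocks."""
    img = img.astype(float)
    d = np.abs(np.diff(img, axis=1))              # horizontal neighbour differences
    at_boundary = d[:, block - 1::block].mean()   # jumps across columns 7|8, 15|16, ...
    mask = np.ones(d.shape[1], bool)
    mask[block - 1::block] = False
    inside = d[:, mask].mean()
    return at_boundary / (inside + 1e-6)          # near 1 for natural images, >1 if blocky
```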


Security Issues and Collusion Attacks in Video Watermarking

Ten years after its inception, digital watermarking is still considered a young technology. Despite the fact that it has been introduced for security-related applications such as copyright protection, almost no study has been conducted to assess the survival of embedded watermarks in a hostile environment. In this thesis, it will be shown that this lack of evaluation has led to critical security pitfalls against statistical analysis, also referred to as collusion attacks. Such attacks typically consider several watermarked documents and combine them to produce unwatermarked content. This threat is all the more relevant for digital video, since each individual video frame can be regarded as a single watermarked document by itself. Next, several countermeasures are introduced to combat the highlighted weaknesses. In particular, motion compensated watermarking and signal coherent watermarking will be investigated to produce watermarks which ...

Doërr, Gwenaël — Institut Eurécom
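
A minimal sketch of the temporal frame-averaging collusion discussed above: in a near-static scene, averaging several watermarked frames preserves the content while attenuating independent, zero-mean per-frame watermarks by roughly 1/sqrt(N) in amplitude:

```python
import numpy as np

def temporal_average_collusion(frames):
    """frames: iterable of equally sized arrays (watermarked frames of a static scene)."""
    stack = np.stack([f.astype(float) for f in frames])
    return stack.mean(axis=0)   # independent marks average toward zero; content survives
```

Countermeasures such as motion-compensated or signal-coherent watermarking aim to defeat exactly this: they make the mark move with the content, so averaging degrades the video as much as the watermark.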
