ROBUST WATERMARKING TECHNIQUES FOR SCALABLE CODED IMAGE AND VIDEO

In scalable image/video coding, high-resolution content is encoded at the highest visual quality and the bit-streams are adapted to cater for various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content protection data such as watermarks, and are therefore considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against various content adaptation attacks are reported, including JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video. Spread-spectrum schemes, particularly wavelet-based image watermarking schemes, often provide better robustness to compression attacks due to their multi-resolution decomposition, and were hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...

Bhowmik, Deepayan — University of Sheffield


Active and Passive Approaches for Image Authentication

The generation and manipulation of digital images is made simple by widely available digital cameras and image processing software. As a consequence, we can no longer take the authenticity of a digital image for granted. This thesis investigates the problem of protecting the trustworthiness of digital images. Image authentication aims to verify the authenticity of a digital image. General solutions for image authentication are based on digital signatures or watermarking. Many studies have been conducted on image authentication, but thus far no solution has been robust enough to the transmission errors that occur when images are transmitted over lossy channels. On the other hand, digital image forensics is an emerging topic for passively assessing image authenticity, which works in the absence of any digital watermark or signature. This thesis focuses on how to assess the authenticity of images when ...

Ye, Shuiming — National University of Singapore


Audio Watermarking, Steganalysis Using Audio Quality Metrics, and Robust Audio Hashing

We propose a technique for detecting the very presence of hidden messages in an audio object. The detector is based on the characteristics of the denoised residuals of the audio file. Our approach rests on the premise that a hidden message leaves statistical evidence in the cover object that can be detected with the use of audio distortion measures. The distortions caused by the hidden message are measured in terms of objective and perceptual quality metrics. The detector discriminates between cover and stego files using a selected subset of features and an SVM classifier. We have evaluated the detection performance of the proposed steganalysis technique against well-known watermarking and steganographic methods. We also present novel and robust audio fingerprinting techniques based on the summarization of the time-frequency spectral characteristics of an audio object. The perceptual hash ...
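The core idea, measuring distortion between an audio file and its denoised version, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's feature set: the moving-average denoiser, the synthetic sine-wave cover, the toy additive "embedding" and the two chosen features (residual SNR and residual spread) are all simplifications, and the SVM stage is omitted in favour of a raw feature comparison.

```python
import numpy as np

def denoise(x, k=5):
    # crude moving-average denoiser; a stand-in for wavelet denoising
    return np.convolve(x, np.ones(k) / k, mode="same")

def distortion_features(x):
    # hidden data behaves like additive noise, inflating the denoised residual
    ref = denoise(x)
    resid = x - ref
    snr = 10.0 * np.log10(np.sum(ref ** 2) / (np.sum(resid ** 2) + 1e-12))
    return np.array([snr, np.std(resid)])

rng = np.random.default_rng(0)
cover = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000.0)  # 1 s, 440 Hz tone
stego = cover + 0.05 * rng.standard_normal(cover.size)      # toy "embedding"

f_cover = distortion_features(cover)
f_stego = distortion_features(stego)  # lower SNR, larger residual spread
```

A real detector would compute many such quality metrics and feed them to a trained classifier; here the separation is already visible in the two features.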

Ozer, Hamza — Bogazici University


MPEG-II Video Coding for Noisy Channels

This thesis considers the performance of MPEG-II compressed video when transmitted over noisy channels, a subject of relevance to digital terrestrial television, video communication and mobile digital video. Results of bit-sensitivity and resynchronisation-sensitivity measurements are presented, and techniques are proposed for substantially improving the resilience of MPEG-II to transmission errors without adding any extra redundancy to the bitstream. Errors in variable-length encoded data are found to cause the greatest artifacts, as such errors can cause loss of bitstream synchronisation. The concept of a ‘black box transcoder’ is developed, in which MPEG-II is losslessly transcoded into a different structure for transmission. Bitstream resynchronisation is achieved using a technique known as error-resilient entropy coding (EREC). The error resilience of differentially coded information is then improved by replacing the standard 1D-DPCM with a more resilient hierarchical ...

Swan, Robert — University of Cambridge


Integration of human color vision models into high quality image compression

Strong academic and commercial interest in image compression has resulted in a number of sophisticated compression techniques. Some of these techniques have evolved into international standards such as JPEG. However, the widespread success of JPEG has slowed the rate of innovation in such standards. Even the most recent techniques, such as those proposed in the JPEG2000 standard, do not show significantly improved compression performance; rather, they increase the bitstream functionality. Nevertheless, the multitude of multimedia applications demands further improvements in compression quality. The problem of stagnating compression quality can be overcome by exploiting the limitations of the human visual system (HVS) for compression purposes. To do so, commonly used distortion metrics such as mean-square error (MSE) are replaced by an HVS-model-based quality metric. Thus, the "visual" quality is optimized. Due to the tremendous complexity of the physiological structures involved in ...
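The replacement of MSE by an HVS-model-based metric can be illustrated with a toy frequency-weighted distortion measure. The band-pass weighting below is a hypothetical stand-in for a calibrated contrast sensitivity function (CSF), not the model developed in the thesis; it merely shows how two distortions with identical MSE can receive very different "visual" scores.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def csf_weight(shape):
    # hypothetical contrast-sensitivity stand-in: band-pass in spatial
    # frequency, insensitive at DC and at very high frequencies
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fx, fy)
    return f * np.exp(-8.0 * f)

def hvs_mse(a, b):
    # weight spectral error energy by visual sensitivity before averaging
    err = np.fft.fft2(a - b)
    return float(np.mean(csf_weight(a.shape) * np.abs(err) ** 2) / a.size)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = img + 0.1 * np.sign(rng.standard_normal(img.shape))  # broadband error
offset = img + 0.1                                           # pure DC error

# identical MSE, very different perceptual weighting
m_noisy, m_offset = mse(img, noisy), mse(img, offset)
h_noisy, h_offset = hvs_mse(img, noisy), hvs_mse(img, offset)
```

The DC offset and the broadband noise both have an MSE of 0.01, but the weighted metric penalizes only the error energy in the frequency band where the stand-in CSF is sensitive.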

Nadenau, Marcus J. — Swiss Federal Institute of Technology


On-board Processing for an Infrared Observatory

During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing and brings new aspects to both. In fact, image compression had its beginning in an American space program for efficient data storage. The goal of this research is to identify and develop new methods for space observatories and software tools to incorporate compression in space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, for technicians a new field of application and research is opened. To validate the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...

Belbachir, Ahmed Nabil — Vienna University of Technology


WATERMARKING FOR 3D REPRESENTATIONS

In this thesis, a number of novel watermarking techniques for different 3D representations are presented. A novel watermarking method is proposed for mono-view video, which may be interpreted as the basic implicit representation of 3D scenes. The proposed method solves the common flickering problem in existing video watermarking schemes by adjusting the watermark strength with respect to the temporal contrast thresholds of the human visual system (HVS), which define the maximum invisible distortions in the temporal direction. The experimental results indicate that the proposed method gives better results in both objective and subjective measures compared to recognized methods in the literature. The watermarking techniques for the geometry and image based representations of 3D scenes, denoted as 3D watermarking, are examined and classified into three groups, as 3D-3D, 3D-2D and 2D-2D watermarking, in which the pair of symbols ...

Koz, Alper — Middle East Technical University, Department of Electrical and Electronics Engineering


A flexible scalable video coding framework with adaptive spatio-temporal decompositions

The work presented in this thesis covers topics that extend the scalability functionalities in video coding and improve compression performance. Two main novel approaches are presented, each targeting a different part of the scalable video coding (SVC) architecture: a motion adaptive wavelet transform based on the lifting implementation of the wavelet transform, and the design of a flexible framework for generalised spatio-temporal decomposition. The motion adaptive wavelet transform is based on the newly introduced concept of a connectivity-map. The connectivity-map describes the underlying irregular structure of regularly sampled data. To enable a scalable representation of the connectivity-map, the corresponding analysis and synthesis operations have been derived. These are then employed to define a joint wavelet connectivity-map decomposition that serves as an adaptive alternative to the conventional wavelet decomposition. To demonstrate its applicability, the presented decomposition scheme is used in the proposed SVC framework, ...
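The lifting implementation mentioned above can be sketched with its simplest instance, the Haar wavelet: split the signal into even and odd samples, predict the odd samples from the even ones (predict step), then correct the even samples with the resulting details (update step). This is a generic one-level sketch, not the thesis's connectivity-map-adaptive transform; in the adaptive scheme, prediction would additionally be suppressed across the irregular boundaries described by the connectivity-map.

```python
import numpy as np

def haar_forward(x):
    # split, predict, update: one lifting step of the Haar wavelet
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: detail coefficients
    s = even + d / 2.0      # update: approximation (pairwise averages)
    return s, d

def haar_inverse(s, d):
    # lifting steps are trivially invertible: undo them in reverse order
    even = s - d / 2.0
    odd = d + even
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(8.0)          # even-length 1-D signal for one decomposition level
s, d = haar_forward(x)      # s: pairwise means, d: pairwise differences
xr = haar_inverse(s, d)     # perfect reconstruction
```

Perfect reconstruction holds by construction, which is precisely why lifting is attractive for adaptive decompositions: any modification of the predict step remains invertible.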

Sprljan, Nikola — Queen Mary University of London


Vision models and quality metrics for image processing applications

Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...

Winkler, Stefan — Swiss Federal Institute of Technology


Dynamic Scheme Selection in Image Coding

This thesis deals with the coding of images with multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication is taking an ever bigger place in our lives, and the number of transmitted images is increasing every day. Therefore, research on image compression is still an active area. However, the current trend is to add several functionalities to the compression scheme, such as progressiveness for more comfortable browsing of web sites or databases. Classical image coding schemes have a rigid structure. They usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second-generation schemes use the concept of objects in an image, and introduce a model of the human visual system in the design of the coding scheme. Dynamic coding schemes, as their name tells us, make ...

Fleury, Pascal — Swiss Federal Institute of Technology


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Different applications focus on the part of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Watermark-based error concealment algorithms for low bit rate video communications

In this work, a novel set of robust watermark-based error concealment (WEC) algorithms is proposed. Watermarking is used to introduce redundancy into the transmitted data with little or no increase in its bit rate during transmission. The proposed algorithms involve generating a low-resolution version of a video frame and seamlessly embedding it as a watermark in the frame itself during encoding. At the receiver, the watermark is extracted from the reconstructed frame and the lost information is recovered using the extracted watermark signal, thus enhancing the frame's perceptual quality. Three DCT-based spread spectrum watermark embedding techniques are presented in this work. The first technique uses multiplicative Gaussian pseudo-noise with a pre-defined spreading gain and fixed chip rate. The second is its adaptively scaled version, and the third uses informed watermarking. Two versions of the low resolution reference, ...
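The first technique, multiplicative spread-spectrum modulation of DCT coefficients, can be sketched as follows. Everything concrete here is an assumption for illustration rather than the thesis's method: the 1-D host signal, the mid-band range `BAND`, the gain `alpha`, a binary chip sequence in place of Gaussian pseudo-noise, and a non-blind correlation detector.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

BAND = slice(8, 40)  # hypothetical mid-band coefficient range

def embed(x, w, alpha=0.1):
    # multiplicative spread-spectrum embedding in mid-band DCT coefficients
    C = dct_matrix(x.size)
    X = C @ x
    X[BAND] *= 1.0 + alpha * w
    return C.T @ X

def detect(y, x_orig, w, alpha=0.1):
    # non-blind detection: recover the chip estimates and correlate with w
    C = dct_matrix(y.size)
    est = (C @ y)[BAND] / (C @ x_orig)[BAND] - 1.0
    return float(est @ w) / (alpha * w.size)

rng = np.random.default_rng(2)
host = np.cumsum(rng.standard_normal(64))   # smooth-ish toy host signal
w = rng.choice([-1.0, 1.0], size=32)        # pseudo-noise chip sequence
marked = embed(host, w)
rho_marked = detect(marked, host, w)        # near 1 when watermark present
rho_clean = detect(host, host, w)           # near 0 when absent
```

In the WEC setting the embedded payload would be the low-resolution frame bits rather than a single detection statistic, but the modulation and correlation mechanics are the same.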

Adsumilli, Chowdary — University of California, Santa Barbara


Tradeoffs and limitations in statistically based image reconstruction problems

Advanced nuclear medical imaging systems collect multiple attributes of a large number of photon events, resulting in extremely large datasets which present challenges to image reconstruction and assessment. This dissertation addresses several of these challenges. The image formation process in nuclear medical imaging can be posed as a parametric estimation problem where the image pixels are the parameters of interest. Since nuclear medical imaging applications are often ill-posed inverse problems, unbiased estimators result in very noisy, high-variance images. Typically, smoothness constraints and a priori information are used to reduce variance in medical imaging applications at the cost of biasing the estimator. For such problems, there exists an inherent tradeoff between the recovered spatial resolution of an estimator, overall bias, and its statistical variance; lower variance can only be bought at the price of decreased spatial resolution and/or increased overall bias. ...
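The bias/variance tradeoff stated above can be demonstrated on a 1-D toy problem: a shrinkage estimator of a mean, which plays the role of a regularized (smoothness-constrained) reconstruction. The scene value, noise level and shrinkage factor below are arbitrary choices for illustration, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, n, trials = 5.0, 2.0, 10, 20000  # true value, noise, samples

def shrinkage_stats(lam):
    # theta_hat = lam * sample_mean: lam < 1 trades bias for variance,
    # mimicking the effect of a smoothness prior on a reconstruction
    data = theta + sigma * rng.standard_normal((trials, n))
    est = lam * data.mean(axis=1)
    return est.mean() - theta, est.var()  # (empirical bias, variance)

b_unbiased, v_unbiased = shrinkage_stats(1.0)  # bias near 0, var near sigma^2/n
b_shrunk, v_shrunk = shrinkage_stats(0.7)      # bias near -1.5, var reduced
```

Analytically the estimator has bias (lam - 1) * theta and variance lam^2 * sigma^2 / n, so the variance reduction is bought exactly at the price of bias, mirroring the resolution/bias/variance tradeoff described in the dissertation.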

Kragh, Tom — University of Michigan


Lossless compression of images with specific characteristics

The compression of some types of images is a challenge for standard compression techniques. This thesis investigates the lossless compression of images with specific characteristics, namely simple images, color-indexed images and microarray images. We are interested in the development of complete compression methods and in the study of preprocessing algorithms that can be used together with standard compression methods. Histogram sparseness, a property of simple images, is addressed in this thesis. We developed a preprocessing technique, denoted histogram packing, that exploits this property and can be used with standard compression methods to significantly improve their efficiency. Histogram packing and palette reordering algorithms can be used as a preprocessing step for improving the lossless compression of color-indexed images. This thesis presents several algorithms and a comprehensive study of the existing methods. Specific compression methods, such as binary tree ...
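Histogram packing itself is simple enough to sketch: when a "simple" image uses only a few of the possible intensity levels, the used levels are remapped onto consecutive integers before handing the image to a standard lossless coder, and the lookup table is kept so the mapping can be inverted. The toy four-level image below is an illustrative assumption.

```python
import numpy as np

def pack_histogram(img):
    # remap the sparse set of used intensities onto 0..K-1
    values, inverse = np.unique(img, return_inverse=True)
    return inverse.reshape(img.shape), values  # values = inverse lookup table

def unpack_histogram(packed, values):
    # restore the original intensity levels
    return values[packed]

rng = np.random.default_rng(3)
img = rng.choice([0, 17, 113, 255], size=(8, 8))  # only 4 scattered levels
packed, lut = pack_histogram(img)                 # alphabet packed to 0..3
restored = unpack_histogram(packed, lut)
```

The packed image has a dense, compact alphabet, which typically suits the context models of standard lossless coders far better than the original sparse one, while the small lookup table makes the transform losslessly invertible.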

Neves, António J. R. — University of Aveiro, Dep. of Electronics, Telecommunications and Informatics


Geometric Distortion in Image and Video Watermarking: Robustness and Perceptual Quality Impact

The main focus of this thesis is the problem of geometric distortion in image and video watermarking. In this thesis we discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, this thesis also discusses the challenges of watermarking data compressed at low bit rates. The main contributions of this thesis are as follows. A watermarking algorithm suitable for low bit-rate video has been proposed. Two different approaches have been proposed to deal with the watermark desynchronization problem. A novel approach has been proposed to quantify the perceptual quality impact of geometric distortion.

Setyawan, Iwan — Delft University of Technology
