Digital Forensic Techniques for Splicing Detection in Multimedia Contents

Visual and audio content has always played a key role in communication because of its immediacy and presumed objectivity. This has become even more true in the digital era, and today it is common for multimedia content to stand as proof of events. Digital content, however, is also very easy to manipulate, calling for analysis methods devoted to uncovering its processing history. Multimedia forensics is the science that tries to answer questions about the past of a given image, audio or video file, such as “which was the recording device?” or “is the content authentic?”. In particular, authenticity assessment is a crucial task in many contexts, and it usually consists of determining whether the investigated object has been artificially created by splicing together different contents. In this thesis we address the problem of splicing detection in the three main media: image, ...

Fontani, Marco — Dept. of Information Engineering and Mathematics, University of Siena


On-board Processing for an Infrared Observatory

During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing and brings new aspects into both fields. In fact, image compression had its beginnings in an American space program for efficient data storage. The goal of this research work is to recognize and develop new methods for space observatories and software tools to incorporate compression in space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, for technicians a new field of application and research is opened. For validation of the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...

Belbachir, Ahmed Nabil — Vienna University of Technology


Robust Watermarking Techniques for Scalable Coded Image and Video

In scalable image/video coding, high-resolution content is encoded at the highest visual quality, and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content-protection data such as watermarks, and are therefore considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. The spread-spectrum domain, particularly wavelet-based image watermarking schemes, often provides better robustness to compression attacks due to its multi-resolution decomposition, and was hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
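The wavelet-domain spread-spectrum idea can be sketched in a few lines. This is a minimal illustration, not the thesis's algorithm: it uses a one-level Haar transform on a 1-D signal, a key-seeded ±1 chip sequence added to the detail band, and correlation detection; the embedding strength `alpha` and the flat host are illustrative choices.

```python
import random

def haar_dwt(x):
    # one-level Haar transform: approximation (low-pass) and detail (high-pass)
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # exact inverse of haar_dwt
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def embed(x, key, alpha=0.5):
    # add a key-seeded pseudo-random +/-1 chip sequence to the detail band
    approx, detail = haar_dwt(x)
    rng = random.Random(key)
    detail = [d + alpha * rng.choice([-1.0, 1.0]) for d in detail]
    return haar_idwt(approx, detail)

def detect(x, key):
    # blind detection: correlate the detail band with the key's chip sequence
    _, detail = haar_dwt(x)
    rng = random.Random(key)
    return sum(d * rng.choice([-1.0, 1.0]) for d in detail) / len(detail)

host = [128.0] * 128            # a flat image row (illustrative host)
marked = embed(host, key=42)
print(detect(marked, 42))       # -> 0.5 (correct key: correlation = alpha)
print(detect(marked, 99))       # near zero for a wrong key
```

The multi-resolution decomposition matters here because scalable codecs discard detail subbands first; a real scheme would choose the embedding subband to survive the expected adaptation.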

Bhowmik, Deepayan — University of Sheffield


Vision models and quality metrics for image processing applications

Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...

Winkler, Stefan — Swiss Federal Institute of Technology


Image Quality Statistics and their use in Steganalysis and Compression

We comprehensively categorize image quality measures, extend measures defined for gray-scale images to the multispectral case, and propose novel image quality measures. The statistical behavior of the measures and their sensitivity to various kinds of distortions, data-hiding and coding artifacts are investigated via Analysis of Variance techniques. Their similarities and differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to distortions and coding artifacts are pointed out. We present techniques for the steganalysis of images that have potentially been subjected to watermarking or steganographic algorithms. Our hypothesis is that watermarking and steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regression analysis. The steganalyzer is built using multivariate regression on the selected quality metrics. In ...
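The regression-on-quality-features idea can be illustrated with a toy sketch. Everything below is hypothetical and far simpler than the thesis's multivariate setup: a single quality feature (MSE between a signal and a smoothed version of itself), synthetic 1-D "images", and a closed-form simple linear regression used as the detector.

```python
import random

def smooth(x):
    # 3-tap moving average used as a crude "denoised" reference
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3 for i in range(n)]

def quality_feature(x):
    # hypothetical quality feature: MSE between the signal and its smoothed
    # version; embedding noise pushes this distance up
    s = smooth(x)
    return sum((a - b) ** 2 for a, b in zip(x, s)) / len(x)

def fit_simple_regression(feats, labels):
    # closed-form least squares for y = a * f + b
    n = len(feats)
    mf, ml = sum(feats) / n, sum(labels) / n
    cov = sum((f - mf) * (l - ml) for f, l in zip(feats, labels))
    var = sum((f - mf) ** 2 for f in feats)
    a = cov / var
    return a, ml - a * mf

rng = random.Random(0)

def clean_signal():
    # smooth ramp, no hidden data
    start = rng.uniform(0.0, 10.0)
    return [start + 0.1 * i for i in range(64)]

def stego_signal():
    # clean signal plus a +/-0.8 pseudo-random embedding
    return [v + 0.8 * rng.choice([-1.0, 1.0]) for v in clean_signal()]

train = [(quality_feature(clean_signal()), 0.0) for _ in range(20)] + \
        [(quality_feature(stego_signal()), 1.0) for _ in range(20)]
a, b = fit_simple_regression([f for f, _ in train], [y for _, y in train])
is_stego = lambda x: a * quality_feature(x) + b >= 0.5
```

With the fitted line, unseen clean signals fall below the 0.5 threshold and embedded signals above it; the thesis's version regresses jointly on many quality metrics rather than one.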

Avcibas, Ismail — Bogazici University


Machine Learning Techniques for Image Forensics in Adversarial Setting

The use of machine learning for multimedia forensics is gaining more and more consensus, especially due to the possibilities offered by modern machine learning techniques. By exploiting deep learning tools, new approaches have been proposed whose performance remarkably exceeds that achieved by state-of-the-art methods based on standard machine learning and model-based techniques. However, the inherent vulnerability and fragility of machine learning architectures pose serious new security threats, hindering the use of these tools in security-oriented applications, among them multimedia forensics. The analysis of the security of machine-learning-based techniques in the presence of an adversary attempting to impede the forensic analysis, and the development of new solutions capable of improving the security of such techniques, is therefore of primary importance, and has recently marked the birth of a new discipline, named Adversarial Machine Learning. By focusing on Image Forensics and ...

Nowroozi, Ehsan — Dept. of Information Engineering and Mathematics, University of Siena


Integration of human color vision models into high quality image compression

Strong academic and commercial interest in image compression has resulted in a number of sophisticated compression techniques. Some of these techniques have evolved into international standards such as JPEG. However, the widespread success of JPEG has slowed the rate of innovation in such standards. Even the most recent techniques, such as those proposed in the JPEG2000 standard, do not show significantly improved compression performance; rather, they increase the bitstream functionality. Nevertheless, the manifold of multimedia applications demands further improvements in compression quality. The problem of stagnating compression quality can be overcome by exploiting the limitations of the human visual system (HVS) for compression purposes. To do so, commonly used distortion metrics such as mean-square error (MSE) are replaced by an HVS-model-based quality metric. Thus, the "visual" quality is optimized. Due to the tremendous complexity of the physiological structures involved in ...
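Why MSE is a poor proxy for visual quality can be shown with a toy contrast-masking model. The weighting below is a hypothetical sketch, not the metric developed in the thesis: identical noise yields identical MSE on a flat and on a textured signal, while the masked metric penalizes the clearly visible error in the flat region far more.

```python
def mse(ref, dist):
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

def activity(ref, i):
    # local signal activity: largest absolute difference to a neighbor
    left = abs(ref[i] - ref[i - 1]) if i > 0 else 0.0
    right = abs(ref[i] - ref[i + 1]) if i < len(ref) - 1 else 0.0
    return max(left, right)

def masked_error(ref, dist):
    # hypothetical HVS-inspired metric: squared error attenuated by local
    # activity, mimicking contrast masking (noise hides in texture)
    n = len(ref)
    return sum((ref[i] - dist[i]) ** 2 / (1.0 + activity(ref, i))
               for i in range(n)) / n

noise = [2.0 if i % 2 == 0 else -2.0 for i in range(32)]
flat = [100.0] * 32
textured = [100.0 + (8.0 if i % 2 == 0 else -8.0) for i in range(32)]
flat_d = [f + e for f, e in zip(flat, noise)]
text_d = [t + e for t, e in zip(textured, noise)]

print(mse(flat, flat_d), mse(textured, text_d))  # both 4.0: MSE cannot tell them apart
print(masked_error(flat, flat_d), masked_error(textured, text_d))
```

A compressor optimized for the masked metric would therefore spend its bit budget where errors are visible (flat areas) and tolerate larger errors in texture, which is the essence of replacing MSE with an HVS-based criterion.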

Nadenau, Marcus J. — Swiss Federal Institute of Technology


Geometric Distortion in Image and Video Watermarking. Robustness and Perceptual Quality Impact

The main focus of this thesis is the problem of geometric distortion in image and video watermarking. In this thesis we discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, this thesis also discusses the challenges of watermarking data compressed at low bit-rates. The main contributions of this thesis are: A watermarking algorithm suitable for low bit-rate video has been proposed. Two different approaches have been proposed to deal with the watermark desynchronization problem. A novel approach has been proposed to quantify the perceptual quality impact of geometric distortion.

Setyawan, Iwan — Delft University of Technology


Advanced Coding Technologies For Medical and Holographic Imaging: Algorithms, Implementations and Standardization

Medical and holographic imaging modalities produce large datasets that require efficient compression mechanisms for storage and transmission. This PhD dissertation proposes state-of-the-art technology extensions for JPEG coding standards to improve their performance in the aforementioned application domains. Modern hospitals rely heavily on volumetric images, such as those produced by CT and MRI scanners. In fact, the completely digitized medical workflow, improved imaging scanner technologies and the importance of volumetric image data sets have led to an exponentially increasing amount of data, raising the necessity for more efficient compression techniques with support for progressive quality and resolution scalability. For this type of imagery, a volumetric extension of the JPEG 2000 standard was created, called JP3D. In addition, improvements to JP3D, namely alternative wavelet filters, directional wavelets and an intra-band prediction mode, were proposed and their applicability was evaluated. Holographic imaging, ...

Bruylants, Tim — Vrije Universiteit Brussel


Efficient representation, generation and compression of digital holograms

Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display system becomes indistinguishable from that of a real object. High-quality holograms need large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic video, the data rates needed for transmitting and encoding the raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
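The scale of the problem is easy to make concrete with a back-of-the-envelope calculation; the frame size, bit depth and frame rate below are illustrative assumptions, not figures from the thesis.

```python
# assumed parameters for a raw hologram video stream (illustrative only)
pixels_per_frame = 100_000 * 100_000   # a 10-gigapixel hologram frame
bits_per_pixel = 8                     # amplitude-only, 8-bit quantization
frames_per_second = 30                 # conventional video rate

bytes_per_frame = pixels_per_frame * bits_per_pixel // 8
raw_rate_gb_per_s = bytes_per_frame * frames_per_second / 1e9
print(bytes_per_frame / 1e9, raw_rate_gb_per_s)  # 10 GB per frame, 300 GB/s raw
```

Even under these modest assumptions the uncompressed rate is hundreds of gigabytes per second, which is why efficient hologram coding is a prerequisite for holographic video rather than an optimization.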

Blinder, David — Vrije Universiteit Brussel


Automatic Handwritten Signature Verification - Which features should be looked at?

The increasing need for personal authentication in many daily applications has made biometrics a fundamental research area. In particular, handwritten signatures have long been considered one of the most valuable biometric traits. Signatures are among the most popular methods for identity verification all over the world, and people are familiar with their use in everyday life. In fact, signatures are widely used in several daily transactions, being recognized as a legal means of verifying an individual’s identity by financial and administrative institutions. In addition, signature verification has the advantage of being a non-invasive biometric technique. Two categories of signature verification systems can be distinguished according to the acquisition device, namely, offline systems, where only the static image of the signature is available, and online systems, where dynamic information acquired during the signing process, ...

Parodi, Marianela — Universidad Nacional de Rosario


Biometric Sample Quality and Its Application to Multimodal Authentication Systems

This Thesis is focused on the quality assessment of biometric signals and its application to multimodal biometric systems. Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms, and nowadays biometric recognition is a mature technology used in many applications. However, recent studies demonstrate how the performance of biometric systems is heavily affected by the quality of biometric signals. Quality measurement has emerged in the biometric community as an important concern after the poor performance observed in biometric systems on certain pathological samples. We first summarize the state of the art on the biometric quality problem. We present the factors influencing biometric quality, which mainly have to do with four issues: the individual itself, the sensor used in the acquisition, the ...

Alonso-Fernandez, Fernando — Universidad Politecnica de Madrid


Privacy Protecting Biometric Authentication Systems

As biometrics gains popularity and proliferates into daily life, there is increased concern over the loss of privacy and the potential misuse of biometric data held in central repositories. The major concerns are about i) the use of biometrics to track people, ii) the non-revocability of biometrics (e.g., if a fingerprint is compromised, it cannot be cancelled or reissued), and iii) the disclosure of sensitive information such as race, gender and health problems that may be revealed by biometric traits. The straightforward suggestion of keeping the biometric data in a user-owned token (e.g., smart cards) does not completely solve the problem, since malicious users can claim that their token is broken to avoid biometric verification altogether. Put together, these concerns have brought about the need for privacy-preserving biometric authentication methods in recent years. In this dissertation, we survey existing ...

Kholmatov, Alisher — Sabanci University


Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

The large number of practical applications involving digital images has motivated a significant interest towards restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging on a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Other applications focus on parts of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of ...

Maggioni, Matteo — Tampere University of Technology


Speech Watermarking and Air Traffic Control

Air traffic control (ATC) voice radio communication between aircraft pilots and controllers is subject to technical and functional constraints owing to the legacy radio system currently in use worldwide. This thesis investigates the embedding of digital side information, so-called watermarks, into speech signals. Applied to the ATC voice radio, a watermarking system could overcome existing limitations and ultimately increase safety, security and efficiency in ATC. In contrast to conventional watermarking methods, this field of application allows embedding of the data in perceptually irrelevant signal components. We show that the resulting theoretical watermark capacity far exceeds the capacity of conventional watermarking channels. Based on this finding, we present a general-purpose blind speech watermarking algorithm that embeds watermark data in the phase of non-voiced speech segments by replacing the excitation signal of an autoregressive signal representation. Our implementation embeds the ...
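The excitation-replacement principle can be sketched in a few lines. The sketch below is a simplification under stated assumptions: the AR coefficients are fixed and known to both sides (a real system derives them per frame from the speech itself), the frame is treated as unvoiced, and the watermark is a raw ±1 chip sequence rather than coded payload bits.

```python
import random

A = [0.5, -0.2]  # assumed AR(2) coefficients, standing in for per-frame LPC

def inverse_filter(x, a):
    # analysis: residual e[n] = x[n] - sum_k a[k] * x[n-k-1]
    e = []
    for n in range(len(x)):
        pred = sum(a[k] * x[n - k - 1] for k in range(len(a)) if n - k - 1 >= 0)
        e.append(x[n] - pred)
    return e

def synthesize(e, a):
    # synthesis: x[n] = e[n] + sum_k a[k] * x[n-k-1]
    x = []
    for n in range(len(e)):
        pred = sum(a[k] * x[n - k - 1] for k in range(len(a)) if n - k - 1 >= 0)
        x.append(e[n] + pred)
    return x

# embedding: replace the excitation of an unvoiced frame with +/-1 chips
rng = random.Random(7)
bits = [rng.choice([-1.0, 1.0]) for _ in range(32)]
watermarked = synthesize(bits, A)

# blind extraction: inverse-filter the received frame and read off the signs
recovered = [1.0 if v > 0 else -1.0 for v in inverse_filter(watermarked, A)]
```

Because inverse filtering exactly undoes the synthesis filter, the chip signs are recovered without error in this noiseless sketch; the perceptual trick is that in non-voiced speech the true excitation is already noise-like, so replacing it is barely audible.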

Hofbauer, Konrad — Graz University
