Integration of human color vision models into high quality image compression (2000)
Vision models and quality metrics for image processing applications
Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...
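For readers unfamiliar with how such metrics condense perceptual errors into a single score, the following minimal Python sketch illustrates contrast-weighted differencing followed by Minkowski summation, a pooling stage common to perceptual distortion metrics; the channel decomposition, weights, and exponent here are illustrative assumptions, not the actual PDM from the thesis.

```python
# Minimal sketch of the error-pooling stage of a perceptual distortion
# metric (illustrative assumptions; not Winkler's actual PDM).
import numpy as np

def minkowski_pooling(ref_channels, dist_channels, weights, beta=4.0):
    """Pool per-channel perceptual differences into one distortion score.

    ref_channels/dist_channels: lists of 2-D arrays, one per perceptual
    channel (e.g. outputs of a spatio-temporal filter bank).
    weights: assumed per-channel sensitivity weights (e.g. from a CSF).
    """
    total = 0.0
    for ref, dist, w in zip(ref_channels, dist_channels, weights):
        total += np.sum((w * np.abs(ref - dist)) ** beta)
    return total ** (1.0 / beta)

# Toy usage: two random "channels" of a reference and a distorted frame.
rng = np.random.default_rng(0)
ref = [rng.random((64, 64)) for _ in range(2)]
dist = [c + 0.01 * rng.standard_normal(c.shape) for c in ref]
print(minkowski_pooling(ref, dist, weights=[1.0, 0.5]))
```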
Winkler, Stefan — Swiss Federal Institute of Technology
On-board Processing for an Infrared Observatory
During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing, and brings new aspects to both. In fact, image compression had its beginnings in an American space program, for efficient data storage. The goal of this research is to identify and develop new methods and software tools for space observatories and to incorporate compression into space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, a new field of application and research opens up for technicians. For validation of the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...
Belbachir, Ahmed Nabil — Vienna University of Technology
An analysis of the ergonomic quality of the current standards for visual display quality leads to a number of recommendations for the development of new international standards: separation for different types of users, especially display designers, purchasers, and end users; independence of display technology to allow comparison; modular construction with several quality grades to allow benchmarking for different types of applications; and a test method for the end-user standard that can be performed at the place of work, to take into account the effects of wear and drift of components and to allow correcting suboptimal configurations. The separate parameters that influence the image quality of a broad category of images in the context of use, and their mutual coherence within the cycle of evaluation and adaptation of image quality, are presented in the "Image ...
Besuijen, Jacobus — Delft University of Technology
Medical and holographic imaging modalities produce large datasets that require efficient compression mechanisms for storage and transmission. This PhD dissertation proposes state-of-the-art technology extensions for the JPEG coding standards to improve their performance in the aforementioned application domains. Modern hospitals rely heavily on volumetric images, such as those produced by CT and MRI scanners. In fact, the completely digitized medical workflow, improved imaging scanner technologies, and the importance of volumetric image datasets have led to an exponentially increasing amount of data, raising the need for more efficient compression techniques with support for progressive quality and resolution scalability. For this type of imagery, a volumetric extension of the JPEG 2000 standard was created, called JP3D. In addition, improvements to JP3D, namely alternative wavelet filters, directional wavelets, and an intra-band prediction mode, were proposed and their applicability was evaluated. Holographic imaging, ...
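As a rough illustration of the volumetric wavelet transform on which JP3D-style coding builds, here is a minimal Python sketch using PyWavelets; the filter choice and decomposition depth are assumptions for illustration, not the configurations evaluated in the thesis.

```python
# Sketch of a 3-D (volumetric) wavelet decomposition, the transform
# underlying JP3D-style coding (filter and depth are assumed here).
import numpy as np
import pywt

volume = np.random.rand(64, 64, 64)      # stand-in for a CT/MRI volume
coeffs = pywt.wavedecn(volume, wavelet='bior4.4', level=3)
# coeffs[0] is the coarsest approximation; coeffs[1:] hold the detail
# subbands that an encoder would quantise and entropy-code.
recon = pywt.waverecn(coeffs, wavelet='bior4.4')
print(np.allclose(volume, recon))        # perfect reconstruction (float)
```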
Bruylants, Tim — Vrije Universiteit Brussel
Synthetic test patterns and compression artefact distortion metrics for image codecs
This thesis presents a test-methodology framework to assess the spatial-domain compression artefacts produced by image and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of the artefact in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that are introduced by image and intra-frame coded video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines blockiness, blur and ringing artefacts. These definitions have been augmented with quantitative definitions in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...
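To make the idea of a quantitative artefact definition concrete, here is a minimal blockiness estimator in the spirit of such metrics: it compares gradient energy on the 8x8 block grid with gradient energy elsewhere. The block size, normalisation, and interpretation are assumptions, not the measures defined in the thesis.

```python
# Minimal blockiness indicator: ratio of gradient energy across the 8x8
# block grid to gradient energy elsewhere (assumed normalisation).
import numpy as np

def blockiness(img, block=8):
    img = img.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1))          # horizontal gradients
    on_grid = dh[:, block - 1::block].mean()   # differences across block edges
    off_grid = np.delete(dh, np.s_[block - 1::block], axis=1).mean()
    return on_grid / (off_grid + 1e-12)        # values >> 1 suggest blockiness
```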
Punchihewa, Amal — Massey University, New Zealand
Active and Passive Approaches for Image Authentication
The generation and manipulation of digital images is made simple by widely available digital cameras and image processing software. As a consequence, we can no longer take the authenticity of a digital image for granted. This thesis investigates the problem of protecting the trustworthiness of digital images. Image authentication aims to verify the authenticity of a digital image, and general solutions are based on digital signatures or watermarking. Many studies have been conducted on image authentication, but thus far no solution has been robust enough to the transmission errors that occur when images are transmitted over lossy channels. On the other hand, digital image forensics is an emerging topic for passively assessing image authenticity, working in the absence of any digital watermark or signature. This thesis focuses on how to assess the authenticity of images when ...
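As a minimal sketch of the active (signature-based) approach, the snippet below signs raw pixel bytes with an HMAC and verifies them later; key management is out of scope and the scheme is purely illustrative. Note that a single flipped bit breaks verification, which is precisely the lack of robustness to lossy-channel errors that the thesis addresses.

```python
# Illustrative signature-based (active) image authentication via HMAC.
import hashlib, hmac

def sign_image(pixel_bytes: bytes, key: bytes) -> bytes:
    return hmac.new(key, pixel_bytes, hashlib.sha256).digest()

def verify_image(pixel_bytes: bytes, key: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_image(pixel_bytes, key), tag)

key = b"shared-secret-key"                        # hypothetical key
original = bytes(range(256))                      # stand-in for pixel data
tag = sign_image(original, key)
print(verify_image(original, key, tag))           # True: authentic
print(verify_image(original[:-1] + b"\x00", key, tag))  # False: one byte changed
```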
Ye, Shuiming — National University of Singapore
The subject of this thesis is the emergence and analysis of visual texture microstructure for efficient modeling, descriptive feature extraction, and image representation. Its main objectives are the problems of image texture modeling and analysis in computer vision systems, with emphasis on the subproblems of texture detection, segmentation and separation in images. Advanced modeling and analysis methods are developed along parallel directions: (a) multiband models of narrowband components and spatial modulations, (b) energy methods for texture feature extraction, and (c) variational techniques for image decomposition and texture separation. The proposed methods are applied to a database of digitized soil-section images, to quantify and evaluate the biological quality of soils, and to different types and collections of natural images. The developed model is the common ground for approaching texture in its different forms and applications. In total, a complete system for texture processing ...
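As a small example of the energy methods mentioned above, here is the discrete 2-D Teager-Kaiser energy operator, a standard tool for extracting local texture energy; boundary handling and any preceding filter bank are omitted, and this is a generic sketch rather than the exact pipeline of the thesis.

```python
# Discrete 2-D Teager-Kaiser energy operator (generic sketch).
import numpy as np

def teager_energy_2d(img):
    img = img.astype(np.float64)
    e = 2.0 * img[1:-1, 1:-1] ** 2
    e -= img[1:-1, :-2] * img[1:-1, 2:]    # horizontal neighbour product
    e -= img[:-2, 1:-1] * img[2:, 1:-1]    # vertical neighbour product
    return e                               # high values mark textured regions
```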
Evangelopoulos, Georgios — National Technical University of Athens
Machine Learning Techniques for Image Forensics in Adversarial Setting
The use of machine learning for multimedia forensics is gaining more and more acceptance, especially due to the possibilities offered by modern machine learning techniques. By exploiting deep learning tools, new approaches have been proposed whose performance remarkably exceeds that of state-of-the-art methods based on standard machine learning and model-based techniques. However, the inherent vulnerability and fragility of machine learning architectures pose serious new security threats, hindering the use of these tools in security-oriented applications, among them multimedia forensics. The analysis of the security of machine-learning-based techniques in the presence of an adversary attempting to impede the forensic analysis, and the development of new solutions capable of improving the security of such techniques, is therefore of primary importance and has recently marked the birth of a new discipline, named Adversarial Machine Learning. By focusing on Image Forensics and ...
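A canonical example of the attacks studied in adversarial machine learning is the Fast Gradient Sign Method (FGSM). The sketch below applies it to a toy linear "detector" standing in for a CNN-based forensic classifier; the model, loss, and epsilon are assumptions chosen so the gradient is analytic.

```python
# FGSM on a toy linear "forensic detector" (illustrative stand-in).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w, b = rng.standard_normal(16), 0.0       # toy detector parameters
x, y = rng.standard_normal(16), 1.0       # sample labelled "manipulated"

# Gradient of the logistic loss w.r.t. the input (analytic for this toy).
grad_x = (sigmoid(w @ x + b) - y) * w
x_adv = x + 0.1 * np.sign(grad_x)         # epsilon = 0.1 perturbation

print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # detector score drops
```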
Nowroozi, Ehsan — Dept. of Information Engineering and Mathematics, University of Siena
Planar 3D Scene Representations for Depth Compression
The recent proliferation of stereoscopic 3D television technologies is expected to be followed by autostereoscopic and holographic technologies. The ability of these technologies to display multiple stereoscopic pairs without glasses will advance the 3D experience. The prospective 3D format for creating the multiple views for such displays is the Multiview Video plus Depth (MVD) format, based on Depth Image Based Rendering (DIBR) techniques. The depth modality of the MVD format is an active research area whose main objective is to develop DIBR-friendly, efficient compression methods. As part of this research, the thesis proposes novel planar 3D depth representations. The planar approximation of stereo depth images is formulated as an energy-based co-segmentation problem via a Markov Random Field model. The energy terms of this problem are designed to mimic the rate-distortion tradeoff of a depth compression application. A heuristic algorithm is developed ...
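To make the formulation tangible, the sketch below evaluates the kind of MRF energy such a co-segmentation minimises: a data term measuring plane-fit error plus a Potts smoothness term penalising label changes between neighbours. The exact terms and weights are assumptions mimicking, not reproducing, the rate-distortion-motivated energy of the thesis.

```python
# Energy of a planar-depth labelling under an assumed MRF model.
import numpy as np

def mrf_energy(depth, labels, planes, lam=1.0):
    """depth: HxW map; labels: HxW plane indices; planes: list of
    (a, b, c) tuples with depth model a*x + b*y + c."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    params = np.asarray(planes)[labels]        # per-pixel plane parameters
    fitted = params[..., 0] * xs + params[..., 1] * ys + params[..., 2]
    data = np.sum((depth - fitted) ** 2)       # distortion-like data term
    smooth = np.sum(labels[:, 1:] != labels[:, :-1]) \
           + np.sum(labels[1:, :] != labels[:-1, :])  # Potts prior (rate-like)
    return data + lam * smooth
```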
Özkalaycı, Burak Oğuz — Middle East Technical University
Robust Watermarking Techniques for Scalable Coded Image and Video
In scalable image/video coding, high-resolution content is encoded at the highest visual quality, and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content protection data such as watermarks, and are therefore considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against the content adaptations of various scalable coders, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. Spread-spectrum, and particularly wavelet-based, image watermarking schemes often provide better robustness to compression attacks due to their multi-resolution decomposition, and were hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
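The snippet below sketches the general family in question: additive spread-spectrum embedding in one wavelet subband with a correlation detector. The wavelet, subband, and strength alpha are illustrative assumptions rather than the schemes analysed in the thesis.

```python
# Additive spread-spectrum watermarking in the wavelet domain (sketch).
import numpy as np
import pywt

def embed(img, key=42, alpha=2.0):
    rng = np.random.default_rng(key)
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), 'haar')
    wm = rng.choice([-1.0, 1.0], size=cH.shape)     # pseudo-random sequence
    return pywt.idwt2((cA, (cH + alpha * wm, cV, cD)), 'haar'), wm

def detect(img, wm, alpha=2.0):
    _, (cH, _, _) = pywt.dwt2(img.astype(np.float64), 'haar')
    return float(np.mean(cH * wm)) / alpha          # ~1 if marked, ~0 if not

marked, wm = embed(np.random.default_rng(0).random((64, 64)))
print(detect(marked, wm))
```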
Bhowmik, Deepayan — University of Sheffield
Modeling Perceived Quality for Imaging Applications
People of all generations are making more and more use of digital imaging systems in their daily lives. The image content rendered by these digital imaging systems differs greatly in perceived quality, depending on the system and its applications. To optimize the experience of viewers of this content, understanding and modeling perceived image quality is essential. Research on modeling image quality in a full-reference framework --- where the original content can be used as a reference --- is well established in the literature. In many current applications, however, perceived image quality needs to be modeled in a no-reference framework in real time. As a consequence, the model needs to quantitatively predict the perceived quality of a degraded image without being able to compare it to its original version, and has to achieve this with limited computational complexity in order ...
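As a baseline illustration of no-reference assessment, the snippet below scores sharpness as the variance of the Laplacian response, needing no original for comparison; this common baseline is an assumption for illustration, not the model developed in the thesis.

```python
# Baseline no-reference sharpness indicator: variance of the Laplacian.
import numpy as np
from scipy.ndimage import laplace

def sharpness_score(img):
    # Blur suppresses high frequencies, so blurrier images score lower.
    return float(laplace(img.astype(np.float64)).var())
```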
Liu, Hantao — Delft University of Technology
Dynamic Scheme Selection in Image Coding
This thesis deals with the coding of images with multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication is taking an ever bigger place in our lives, and the number of transmitted images increases every day. Research on image compression therefore remains an active area. The current trend, however, is to add functionality to the compression scheme, such as progressiveness for more comfortable browsing of websites or databases. Classical image coding schemes have a rigid structure: they usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second-generation schemes use the concept of objects in an image, and introduce a model of the human visual system into the design of the coding scheme. Dynamic coding schemes, as their name suggests, make ...
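The core selection loop can be sketched as follows: each candidate coder is applied per block, and the one minimising a Lagrangian rate-distortion cost J = D + lambda*R is kept. The two toy "coders" and lambda below are placeholders, not the schemes from the thesis.

```python
# Per-block dynamic scheme selection by rate-distortion cost (sketch).
import numpy as np

def coder_coarse(block):       # placeholder scheme: heavy quantisation
    rec = np.round(block / 16.0) * 16.0
    return rec, 0.25 * block.size            # (reconstruction, rate in bits)

def coder_flat(block):         # placeholder scheme: mean-only block
    return np.full_like(block, block.mean()), 8.0

def select_scheme(block, coders, lam=0.1):
    block = np.asarray(block, dtype=np.float64)
    best = None
    for coder in coders:
        rec, rate = coder(block)
        cost = np.sum((block - rec) ** 2) + lam * rate  # J = D + lambda*R
        if best is None or cost < best[0]:
            best = (cost, coder, rec)
    return best
```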
Fleury, Pascal — Swiss Federal Institute of Technology
Lossless compression of images with specific characteristics
The compression of some types of images is a challenge for standard compression techniques. This thesis investigates the lossless compression of images with specific characteristics, namely simple images, color-indexed images and microarray images. We are interested in the development of complete compression methods and in the study of preprocessing algorithms that can be used together with standard compression methods. Histogram sparseness, a property of simple images, is addressed in this thesis. We developed a preprocessing technique, denoted histogram packing, that exploits this property and can be used with standard compression methods to significantly improve their efficiency. Histogram packing and palette reordering algorithms can be used as a preprocessing step for improving the lossless compression of color-indexed images. This thesis presents several algorithms and a comprehensive study of the existing methods. Specific compression methods, such as binary tree ...
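Histogram packing itself is simple enough to show directly: the sparse set of intensities actually present is remapped onto consecutive integers before a standard lossless coder is applied, and a lookup table inverts the mapping exactly. The snippet below is a minimal sketch of this idea, not the implementation from the thesis.

```python
# Minimal histogram packing: remap occurring intensities to 0..K-1.
import numpy as np

def pack_histogram(img):
    values, packed = np.unique(img, return_inverse=True)
    return packed.reshape(img.shape), values   # 'values' is the lookup table

def unpack_histogram(packed, values):
    return values[packed]                      # exact inverse: lossless

img = np.array([[0, 7, 7], [250, 0, 250]])
packed, lut = pack_histogram(img)              # packed uses only {0, 1, 2}
assert np.array_equal(unpack_histogram(packed, lut), img)
```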
Neves, António J. R. — University of Aveiro, Dept. of Electronics, Telecommunications and Informatics
In this doctoral thesis, several scale-free texture segmentation procedures are proposed, based on two fractal attributes: the Hölder exponent, which measures the local regularity of a texture, and the local variance. A piecewise-homogeneous fractal texture model is built, along with a synthesis procedure providing images composed of aggregated fractal texture patches with known attributes and segmentation. This synthesis procedure is used to evaluate the performance of the proposed methods. A first method, based on Total Variation regularization of a noisy estimate of the local regularity, is illustrated and refined with a post-processing step consisting of an iterative thresholding that yields a segmentation. After evidencing the limitations of this first approach, two segmentation methods, with either "free" or "co-located" contours, are built, jointly taking into account the local regularity and the local variance. These two procedures are formulated as convex nonsmooth functional minimization problems. We ...
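The first approach can be sketched in a few lines: smooth a noisy local-regularity estimate with Total Variation regularization, then threshold into regions. The synthetic two-texture map, the TV weight, and the use of scikit-image are assumptions for illustration.

```python
# TV-regularised segmentation of a noisy local-regularity map (sketch).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(2)
truth = np.where(np.arange(128) < 64, 0.3, 0.7)     # two-texture regularity map
noisy = np.tile(truth, (128, 1)) + 0.1 * rng.standard_normal((128, 128))

smoothed = denoise_tv_chambolle(noisy, weight=0.5)  # piecewise-constant prior
segmentation = smoothed > smoothed.mean()           # simple thresholding step
```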
Pascal, Barbara — École Normale Supérieure de Lyon
Efficient representation, generation and compression of digital holograms
Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display becomes indistinguishable from that of a real one. High-quality holograms require large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic video, the data rates needed for transmitting and encoding raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
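A back-of-the-envelope calculation shows why raw holographic video overwhelms current hardware; the resolution, bit depth, and frame rate below are assumed round numbers, not measured figures from the thesis.

```python
# Assumed figures: a gigapixel hologram at 8 bits/pixel and 30 frames/s.
gigapixels = 1e9
bits_per_pixel = 8
fps = 30
raw_rate_gb_s = gigapixels * bits_per_pixel * fps / 8 / 1e9
print(f"{raw_rate_gb_s:.0f} GB/s of raw data")   # ~30 GB/s per stream
```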
Blinder, David — Vrije Universiteit Brussel