Complexity related aspects of image compression (2001)
On-board Processing for an Infrared Observatory
During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing and brings new aspects into both. In fact, image compression had its beginning in an American space program for efficient data storage. The goal of this research work is to recognize and develop new methods for space observatories and software tools to incorporate compression in space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, technicians gain a new field of application and research. For validation of the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...
Belbachir, Ahmed Nabil — Vienna University of Technology
Toward sparse and geometry adapted video approximations
Video signals are sequences of natural images, where images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on the rate-distortion performance of wavelet and oracle based coding schemes, one can better analyze the coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding purposes require the use of adaptive signal decompositions able to appropriately capture the structure and redundancy appearing in video signals. Adaptivity must allow for proper modeling of signals so that they can be represented at the lowest possible coding cost. Video is a very structured signal with high geometric content. This includes temporal geometry (normally represented by motion ...
Divorra Escoda, Oscar — EPFL / Signal Processing Institute
A flexible scalable video coding framework with adaptive spatio-temporal decompositions
The work presented in this thesis covers topics that extend the scalability functionalities in video coding and improve the compression performance. Two main novel approaches are presented, each targeting a different part of the scalable video coding (SVC) architecture: a motion adaptive wavelet transform based on the lifting implementation of the wavelet transform, and the design of a flexible framework for generalised spatio-temporal decomposition. The motion adaptive wavelet transform is based on the newly introduced concept of a connectivity-map. The connectivity-map describes the underlying irregular structure of regularly sampled data. To enable a scalable representation of the connectivity-map, the corresponding analysis and synthesis operations have been derived. These are then employed to define a joint wavelet connectivity-map decomposition that serves as an adaptive alternative to the conventional wavelet decomposition. To demonstrate its applicability, the presented decomposition scheme is used in the proposed SVC framework, ...
Sprljan, Nikola — Queen Mary University of London
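As an illustration of the lifting implementation of the wavelet transform on which the above framework builds, the sketch below computes one level of the LeGall 5/3 wavelet with two lifting steps (without the integer rounding used in JPEG 2000). The function names and the simple boundary extension are illustrative assumptions; the connectivity-map adaptation described in the abstract is not shown.

    import numpy as np

    def lifting_53_forward(x):
        # One decomposition level via two lifting steps: predict, then update.
        x = np.asarray(x, dtype=float)
        s, d = x[0::2].copy(), x[1::2].copy()     # split into even / odd samples
        s_next = np.append(s[1:], s[-1])          # simple extension on the right
        d -= 0.5 * (s + s_next)                   # predict: detail = odd - mean of evens
        d_prev = np.insert(d[:-1], 0, d[0])       # simple extension on the left
        s += 0.25 * (d_prev + d)                  # update: preserve the running average
        return s, d

    def lifting_53_inverse(s, d):
        # Lifting steps invert exactly by running them in reverse order with
        # opposite signs, whatever predict/update operators are used.
        d_prev = np.insert(d[:-1], 0, d[0])
        s = s - 0.25 * (d_prev + d)
        s_next = np.append(s[1:], s[-1])
        d = d + 0.5 * (s + s_next)
        x = np.empty(s.size + d.size)
        x[0::2], x[1::2] = s, d
        return x

This guaranteed invertibility, independent of the chosen predict and update operators, is what makes lifting attractive for the adaptive, connectivity-map-driven decompositions discussed above.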
Image Quality Statistics and their use in Steganalysis and Compression
We comprehensively categorize image quality measures, extend measures defined for grayscale images to their multispectral case, and propose novel image quality measures. The statistical behavior of the measures and their sensitivity to various kinds of distortions, data hiding and coding artifacts are investigated via Analysis of Variance techniques. Their similarities and differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to distortions and coding artifacts are pointed out. We present techniques for steganalysis of images that have potentially been subjected to watermarking or steganographic algorithms. Our hypothesis is that watermarking and steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regression analysis. The steganalyzer is built using multivariate regression on the selected quality metrics. In ...
Avcibas, Ismail — Bogazici University
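A minimal sketch of the general idea described above, regression on image quality features. The particular features, the Gaussian-filtered reference image, and the function names are illustrative assumptions, not the measures selected in the thesis.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def quality_features(img):
        # Illustrative feature vector: a few simple quality measures computed
        # between an image and a low-pass filtered version of itself.
        img = np.asarray(img, dtype=float)
        ref = gaussian_filter(img, sigma=1.0)
        err = img - ref
        mse = np.mean(err ** 2)                      # mean squared error
        mad = np.mean(np.abs(err))                   # mean absolute difference
        ncc = np.sum(img * ref) / (np.sqrt(np.sum(img ** 2) * np.sum(ref ** 2)) + 1e-12)
        return np.array([mse, mad, ncc])             # normalised cross-correlation last

    def train_steganalyzer(features, labels):
        # Multivariate linear regression: fit weights so that the feature vector
        # predicts the label (0 = clean, 1 = watermarked/stego); threshold at 0.5.
        X = np.column_stack([np.ones(len(features)), features])
        beta, *_ = np.linalg.lstsq(X, labels, rcond=None)
        return beta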
Synthetic test patterns and compression artefact distortion metrics for image codecs
This thesis presents a test methodology framework to assess spatial domain compression artefacts produced by image and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of the artefact in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that arise from image and intra-frame coded video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines blockiness, blur and ringing artefacts. These definitions have been augmented with quantitative definitions formulated in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...
Punchihewa, Amal — Massey University, New Zealand
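For concreteness, one generic way to quantify the blockiness artefact mentioned above is to compare luminance jumps across 8x8 block boundaries with jumps inside blocks. This simple ratio is only an illustrative stand-in; it is not one of the metrics or test patterns developed in the thesis.

    import numpy as np

    def blockiness(img, block=8):
        # Ratio of the mean absolute luminance difference across block borders
        # to the mean difference inside blocks; values well above 1 suggest
        # visible blocking artefacts.
        img = np.asarray(img, dtype=float)
        dh = np.abs(np.diff(img, axis=1))            # horizontal neighbour differences
        dv = np.abs(np.diff(img, axis=0))            # vertical neighbour differences
        h_edge = dh[:, block - 1::block].mean()      # columns straddling block borders
        v_edge = dv[block - 1::block, :].mean()      # rows straddling block borders
        h_mask = np.ones(dh.shape[1], dtype=bool); h_mask[block - 1::block] = False
        v_mask = np.ones(dv.shape[0], dtype=bool); v_mask[block - 1::block] = False
        return (h_edge + v_edge) / (dh[:, h_mask].mean() + dv[v_mask, :].mean() + 1e-12)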
Contributions to Analysis and DSP-based Mitigation of Nonlinear Distortion in Radio Transceivers
This thesis focuses on different nonlinear distortion aspects in radio transmitters and receivers. Such nonlinear distortion aspects are generally becoming more and more important as the communication waveforms themselves get more complex and thus more sensitive to any distortion. Also, balancing between implementation cost, size, power consumption and radio performance, especially in multiradio devices, creates a tendency towards using lower cost, and thus lower quality, radio electronics. Furthermore, increasing requirements on radio flexibility, especially on the receiver side, reduce receiver radio frequency (RF) selectivity and thus increase the dynamic range and linearity requirements. Overall, a proper understanding of nonlinear distortion in radio devices is therefore essential, and also opens the door for clever use of digital signal processing (DSP) in mitigating and suppressing such distortion effects. On the receiver side, the emphasis in this thesis is mainly on the analysis and DSP ...
Shahed hagh ghadam, Ali — Tampere University of Technology
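To make the idea of DSP-based mitigation concrete, the sketch below uses a memoryless third-order polynomial as a toy transmitter nonlinearity and a first-order digital predistorter that approximately cancels it. The model, coefficients and function names are assumptions for illustration only and do not reproduce the analyses in the thesis.

    import numpy as np

    def amplifier(x, a1=1.0, a3=-0.05):
        # Memoryless third-order model: the cubic term generates the in-band
        # intermodulation distortion that degrades complex waveforms.
        return a1 * x + a3 * x * np.abs(x) ** 2

    def predistort(x, a1=1.0, a3=-0.05):
        # First-order inverse: pre-apply the opposite cubic term so that the
        # cascade predistorter -> amplifier is nearly linear at moderate levels.
        return x - (a3 / a1) * x * np.abs(x) ** 2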
Traditional and Scalable Coding Techniques for Video Compression
In recent years, the usage of digital video has steadily been increasing. Since the amount of data needed for uncompressed digital video representation is very high, lossy source coding techniques are usually employed in digital video systems to compress that information and make it more suitable for storage and transmission. The source coding algorithms for video compression can be grouped into two big classes: traditional and scalable techniques. The goal of traditional video coders is to maximize the compression efficiency for a given amount of compressed data. The goal of scalable video coding is instead to give a scalable representation of the source, such that subsets of it can describe, in an optimal way, the same video source at reduced temporal, spatial and/or quality resolution. This thesis is focused on the ...
Cappellari, Lorenzo — University of Padova
In a communication system it is undoubtedly of great interest to compress the information generated by the data sources to its most elementary representation, so that the amount of power necessary for reliable communications can be reduced. It is often the case that the redundancy shown by a wide variety of information sources can be modelled by taking into account the probabilistic dependence among consecutive source symbols rather than the probabilistic distribution of a single symbol. These sources are commonly referred to as single or multiterminal sources "with memory", the memory being, in the latter case, the temporal correlation among the consecutive symbol vectors generated by the multiterminal source. It is well known that, when the source has memory, the average amount of information per source symbol is given by the entropy rate, which is lower than its entropy ...
Del Ser, Javier — University of Navarra (TECNUN)
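The entropy rate referred to above is the standard information-theoretic quantity; for a stationary source it can be written as

    H(\mathcal{X}) \;=\; \lim_{n \to \infty} \frac{1}{n}\, H(X_1, \ldots, X_n)
                   \;=\; \lim_{n \to \infty} H(X_n \mid X_{n-1}, \ldots, X_1),

and for a stationary first-order Markov source with stationary distribution \mu and transition probabilities P_{ij} it reduces to

    H(\mathcal{X}) \;=\; -\sum_{i,j} \mu_i P_{ij} \log P_{ij} \;\le\; H(X_1),

so exploiting the memory indeed allows compression below the single-symbol entropy.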
Techniques for improving the performance of distributed video coding
Distributed Video Coding (DVC) is a recently proposed paradigm in video communication, which fits well emerging applications such as wireless video surveillance, multimedia sensor networks, wireless PC cameras, and mobile camera phones. These applications require low-complexity encoding, while possibly affording high-complexity decoding. DVC presents several advantages: first, the complexity can be distributed between the encoder and the decoder; second, DVC is robust to errors, since it uses a channel code. In DVC, Side Information (SI) is estimated at the decoder, using the available decoded frames, and used for the decoding and reconstruction of other frames. In this Ph.D. thesis, we propose new techniques in order to improve the quality of the SI. First, successive refinement of the SI is performed after each decoded DCT band, using a Partially Decoded WZF (PDWZF), along with the ...
Abou-Elailah, Abdalbassir — Telecom Paristech
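As background for the side-information generation discussed above, the sketch below implements the simplest common baseline: motion-compensated temporal interpolation between two decoded key frames with full-search block matching. The block size, search radius and function name are illustrative choices; the successive-refinement scheme proposed in the thesis is not reproduced here.

    import numpy as np

    def interpolate_si(prev, nxt, block=8, radius=4):
        # For each block of the frame to estimate, search for the displacement that
        # best aligns the previous and next key frames along a linear motion
        # trajectory, then average the two motion-compensated blocks.
        h, w = prev.shape
        si = np.zeros((h, w))
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                best, best_cost = (0, 0), np.inf
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yp, xp, yn, xn = y + dy, x + dx, y - dy, x - dx
                        if min(yp, xp, yn, xn) < 0 or yp + block > h or xp + block > w \
                           or yn + block > h or xn + block > w:
                            continue
                        a = prev[yp:yp + block, xp:xp + block].astype(float)
                        b = nxt[yn:yn + block, xn:xn + block].astype(float)
                        cost = np.abs(a - b).sum()        # SAD along the trajectory
                        if cost < best_cost:
                            best, best_cost = (dy, dx), cost
                dy, dx = best
                a = prev[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
                b = nxt[y - dy:y - dy + block, x - dx:x - dx + block].astype(float)
                si[y:y + block, x:x + block] = 0.5 * (a + b)
        return si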
Analysis of electrophysiological measurements during stress monitoring
Work-related musculoskeletal disorders are a growing problem in today's society. These musculoskeletal disorders are caused by, amongst others, repetitive movements and mental stress. Stress is defined as the mismatch between a perceived demand and the perceived capacities to meet this demand. Although stress has a subjective origin, several physiological manifestations (e.g. cardiovascular and muscular) occur during periods of perceived stress. New insights and algorithms to extract stress-related information are therefore beneficial. To this end, two series of stress experiments were executed in a laboratory environment, where subjects underwent different tasks inducing physical strain, mental stress and a combination of both. In this manuscript, new and modified algorithms for electromyography signals are presented that improve their individual analysis. A first algorithm removes the interference of the electrical activity of the heart on single-channel electromyography measurements. This interference signal is ...
Taelman, Joachim — KU Leuven
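For context, a common baseline for removing the cardiac interference from a single-channel electromyography recording is QRS-triggered template subtraction, sketched below. The peak-detection thresholds, window length and function name are illustrative assumptions; this is not the algorithm developed in the manuscript.

    import numpy as np
    from scipy.signal import find_peaks

    def remove_ecg_template(emg, fs, half_win=0.06):
        # Detect QRS peaks, average the segments around them into an ECG template,
        # and subtract that template at every detected peak.
        emg = np.asarray(emg, dtype=float)
        w = int(half_win * fs)                                    # samples per half window
        peaks, _ = find_peaks(np.abs(emg), height=4 * np.std(emg),
                              distance=int(0.4 * fs))             # crude QRS detector
        segs = [emg[p - w:p + w] for p in peaks if p - w >= 0 and p + w <= emg.size]
        if not segs:
            return emg.copy()                                     # nothing detected
        template = np.mean(segs, axis=0)                          # average QRS complex
        clean = emg.copy()
        for p in peaks:
            if p - w >= 0 and p + w <= emg.size:
                clean[p - w:p + w] -= template
        return clean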
Robust Estimation and Model Order Selection for Signal Processing
In this thesis, advanced robust estimation methodologies for signal processing are developed and analyzed. The developed methodologies solve problems concerning multi-sensor data, robust model selection, as well as robustness for dependent data. The work has been applied to solve practical signal processing problems in different areas of biomedical and array signal processing. In particular, for univariate independent data, a robust criterion is presented to select the model order, with an application to corneal-height data modeling. The proposed criterion overcomes some limitations of existing robust criteria. For real-world data, it selects the radial model order of the Zernike polynomial of the corneal topography map in accordance with clinical expectations, even if the measurement conditions for videokeratoscopy, which is the state-of-the-art method to collect corneal-height data, are poor. For multi-sensor data, robust model order selection criteria are proposed and applied ...
Muma, Michael — Technische Universität Darmstadt
Dynamic Scheme Selection in Image Coding
This thesis deals with the coding of images with multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication takes an ever bigger place in our lives, and the number of transmitted images is also increasing every day. Therefore, research on image compression is still an active area. However, the current trend is to add several functionalities to the compression scheme, such as progressiveness for more comfortable browsing of websites or databases. Classical image coding schemes have a rigid structure. They usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second generation schemes use the concept of objects in an image, and introduce a model of the human visual system in the design of the coding scheme. Dynamic coding schemes, as their name tells us, make ...
Fleury, Pascal — Swiss Federal Institute of Technology
Contributions to signal analysis and processing using compressed sensing techniques
Chapter 2 contains a short introduction to the fundamentals of compressed sensing theory, which is the larger context of this thesis. We start with introducing the key concepts of sparsity and sparse representations of signals. We discuss the central problem of compressed sensing, i.e. how to adequately recover sparse signals from a small number of measurements, as well as the multiple formulations of the reconstruction problem. A large part of the chapter is devoted to some of the most important conditions necessary and/or sufficient to guarantee accurate recovery. The aim is to introduce the reader to the basic results, without the burden of detailed proofs. In addition, we also present a few of the popular reconstruction and optimization algorithms that we use throughout the thesis. Chapter 3 presents an alternative sparsity model known as analysis sparsity, that offers similar recovery ...
Cleju, Nicolae — "Gheorghe Asachi" Technical University of Iasi
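The abstract above mentions popular algorithms for recovering sparse signals from a small number of measurements. The sketch below shows one standard greedy option, Orthogonal Matching Pursuit, on synthetic data; the matrix sizes and sparsity level are arbitrary illustration choices, and this is not claimed to be the specific algorithm used in the thesis.

    import numpy as np

    def omp(A, y, k):
        # Orthogonal Matching Pursuit: greedily build the support of a k-sparse x
        # such that y ~ A x, re-fitting the coefficients by least squares each step.
        residual, support = y.astype(float).copy(), []
        for _ in range(k):
            corr = np.abs(A.T @ residual)                 # match residual against atoms
            corr[support] = 0                             # never pick an atom twice
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # Small synthetic demonstration: exact recovery is expected here.
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)          # Gaussian measurement matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    print(np.linalg.norm(omp(A, A @ x_true, k) - x_true))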
Robust and multiresolution video delivery: From H.26x to Matching pursuit based technologies
With the joint development of networking and digital coding technologies, multimedia, and more particularly video services, are clearly becoming one of the major consumers of the new information networks. The rapid growth of the Internet and the computer industry, however, results in a very heterogeneous and commonly overloaded infrastructure. Video service providers nevertheless have to offer their clients the best possible quality according to their respective capabilities and communication channel status. The Quality of Service is not only influenced by compression artifacts, but also by unavoidable packet losses. Hence, the packet video stream clearly has to fulfil possibly contradictory requirements, namely coding efficiency and robustness to data loss. The first contribution of this thesis is the complete modeling of the video Quality of Service (QoS) in standard, and more particularly MPEG-2, applications. The performance of Forward Error Control (FEC) ...
Frossard, Pascal — Swiss Federal Institute of Technology
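As a point of reference for the FEC performance analysis mentioned above, with independent packet losses of probability p and an ideal (MDS) (n, k) erasure code, a given packet remains unrecoverable after decoding only when more than n-k of the n packets of its block are lost, so the residual packet loss rate is

    P_{\mathrm{res}} \;=\; \sum_{i = n-k+1}^{n} \frac{i}{n} \binom{n}{i}\, p^{i} (1-p)^{n-i}.

This is a textbook first-order expression, not the QoS model developed in the thesis.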
Distributed Video Coding for Wireless Lightweight Multimedia Applications
In the modern wireless age, lightweight multimedia technology stimulates attractive commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video compression systems meeting such key requirements as very low encoding complexity, transmission error robustness and scalability is no straightforward task. The answer can be found in fundamental information theoretic results, according to which efficient compression can be achieved by leveraging knowledge of the source statistics at the decoder only, giving rise to distributed, or Wyner-Ziv, video coding. This dissertation engineers efficient lightweight Wyner-Ziv video coding schemes, emphasizing several design aspects and applications. The first contribution of this dissertation focuses on the design of effective side information generation techniques that boost the compression capabilities of Wyner-Ziv video coding systems. To this end, overlapped block motion estimation ...
Deligiannis, Nikos — Vrije Universiteit Brussel