On-board Processing for an Infrared Observatory

During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business, and a variety of lossless and lossy image coding techniques have been developed. This thesis is an interdisciplinary work between astronomy and digital image processing and brings new aspects to both fields. In fact, image compression had its beginnings in an American space program, where it served efficient data storage. The goal of this research work is to identify and develop new methods for space observatories, together with software tools that incorporate compression into space astronomy standards. Astronomers benefit from new objective processing and analysis methods and from improved efficiency and quality, while for technicians a new field of application and research is opened. To validate the processing results, the case of InfraRed (IR) astronomy has been analyzed in detail. ...

Belbachir, Ahmed Nabil — Vienna University of Technology


Toward sparse and geometry adapted video approximations

Video signals are sequences of natural images, where images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on the rate-distortion performance of wavelet and oracle-based coding schemes, one can better analyze the coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding purposes require adaptive signal decompositions that appropriately capture the structure and redundancy present in video signals. Adaptivity must allow for proper modeling of signals so that they can be represented at the lowest possible coding cost. Video is a very structured signal with high geometric content. This includes temporal geometry (normally represented by motion ...
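
As a toy illustration only (not taken from the thesis), the sketch below constructs a 1D piecewise-smooth signal of the kind this model describes; all parameter values are arbitrary.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 512)
signal = np.where(t < 0.4,
                  0.5 + t**2,           # smooth polynomial piece
                  -0.3 + np.sin(4*t))   # smooth piece; jump at t = 0.4
# In 2D the jumps become edges/contours; in video ("2D + t") those contours
# move over time, which is the geometric structure adaptive codecs exploit.
```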

Divorra Escoda, Oscar — EPFL / Signal Processing Institute


A flexible scalable video coding framework with adaptive spatio-temporal decompositions

The work presented in this thesis covers topics that extend the scalability functionalities in video coding and improve compression performance. Two main novel approaches are presented, each targeting a different part of the scalable video coding (SVC) architecture: a motion-adaptive wavelet transform based on the lifting implementation of the wavelet transform, and the design of a flexible framework for generalised spatio-temporal decomposition. The motion-adaptive wavelet transform is based on the newly introduced concept of a connectivity-map, which describes the underlying irregular structure of regularly sampled data. To enable a scalable representation of the connectivity-map, the corresponding analysis and synthesis operations have been derived. These are then employed to define a joint wavelet connectivity-map decomposition that serves as an adaptive alternative to the conventional wavelet decomposition. To demonstrate its applicability, the presented decomposition scheme is used in the proposed SVC framework, ...
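
For context, here is a minimal sketch of the lifting implementation on which such motion-adaptive transforms are built, using the classic LeGall 5/3 wavelet with periodic boundary handling; the connectivity-map adaptation itself, the thesis's contribution, is not reproduced here.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the 5/3 wavelet via lifting on a 1D signal (even length)."""
    x = x.astype(float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus the average of its even neighbours.
    odd -= 0.5 * (even + np.roll(even, -1))
    # Update step: approximation = even sample plus a quarter of the neighbouring details.
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd  # low-pass (approximation), high-pass (detail)

def lifting_53_inverse(even, odd):
    # Undo the lifting steps in reverse order.
    even = even - 0.25 * (odd + np.roll(odd, 1))
    odd = odd + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.arange(16.0)
lo, hi = lifting_53_forward(signal)
assert np.allclose(lifting_53_inverse(lo, hi), signal)  # perfect reconstruction
```

A key property of lifting, and one reason it lends itself to motion-adaptive extensions, is that perfect reconstruction holds by construction, whatever the predict and update operators are.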

Sprljan, Nikola — Queen Mary University of London


Image Quality Statistics and their use in Steganalysis and Compression

We comprehensively categorize image quality measures, extend measures defined for grayscale images to the multispectral case, and propose novel image quality measures. The statistical behavior of the measures and their sensitivity to various kinds of distortions, data-hiding and coding artifacts are investigated via Analysis of Variance techniques. Their similarities and differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to distortions and coding artifacts are pointed out. We present techniques for the steganalysis of images that have potentially been subjected to watermarking or steganographic algorithms. Our hypothesis is that watermarking and steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regression analysis. The steganalyzer is built using multivariate regression on the selected quality metrics. In ...
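
For orientation, the sketch below computes two of the simplest full-reference quality measures (MSE and PSNR) of the kind being categorized; the thesis covers a far broader set and analyzes them with ANOVA and Kohonen maps. The image data here is synthetic and purely illustrative.

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference image and a distorted one."""
    ref, dist = ref.astype(float), dist.astype(float)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(ref, dist)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))                      # synthetic image
dist = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)  # add noise
print(f"MSE = {mse(ref, dist):.2f}, PSNR = {psnr(ref, dist):.2f} dB")
```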

Avcibas, Ismail — Bogazici University


Contributions to Analysis and DSP-based Mitigation of Nonlinear Distortion in Radio Transceivers

This thesis focuses on different nonlinear distortion aspects in radio transmitters and receivers. Such nonlinear distortion is generally becoming more and more important as communication waveforms themselves become more complex and thus more sensitive to any distortion. Balancing implementation cost, size, power consumption and radio performance, especially in multiradio devices, also creates a tendency towards using lower-cost, and thus lower-quality, radio electronics. Furthermore, increasing requirements on radio flexibility, especially on the receiver side, reduce receiver radio frequency (RF) selectivity and thus increase the dynamic range and linearity requirements. Overall, a proper understanding of nonlinear distortion in radio devices is therefore essential, and it also opens the door to the clever use of digital signal processing (DSP) for mitigating and suppressing such distortion effects. On the receiver side, the emphasis in this thesis is mainly on the analysis and DSP ...
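
As a hedged illustration of the underlying phenomenon, the snippet below applies a memoryless third-order polynomial nonlinearity, a textbook model rather than the thesis's own, to a two-tone test signal and shows where the intermodulation products land.

```python
import numpy as np

fs = 1e6                        # sample rate in Hz (illustrative)
t = np.arange(4096) / fs
x = np.cos(2*np.pi*100e3*t) + np.cos(2*np.pi*110e3*t)   # two-tone test signal

a1, a3 = 1.0, -0.1              # assumed linear gain and third-order coefficient
y = a1*x + a3*x**3              # memoryless third-order nonlinearity

spectrum = np.abs(np.fft.rfft(y * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1/fs)
# The third-order intermodulation products at 2*f1 - f2 = 90 kHz and
# 2*f2 - f1 = 120 kHz fall right next to the wanted tones, which is why
# they are so hard to remove by filtering and invite DSP-based mitigation.
```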

Shahed hagh ghadam, Ali — Tampere University of Technology


Synthetic test patterns and compression artefact distortion metrics for image codecs

This thesis presents a test methodology framework for assessing the spatial-domain compression artefacts produced by image and intra-frame coded video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of the artefact in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that arise from image and intra-frame coded video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines the blockiness, blur and ringing artefacts. These definitions have been augmented with quantitative definitions in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...
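
A minimal sketch of one such artefact measurement, assuming 8x8 coding blocks; this is a generic blockiness estimate for illustration, not the metric proposed in the thesis.

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of mean horizontal pixel differences across assumed block
    boundaries to those inside blocks; values well above 1 suggest
    visible blocking artefacts."""
    img = img.astype(float)
    d = np.abs(np.diff(img, axis=1))            # horizontal differences
    cols = np.arange(d.shape[1])
    on_boundary = (cols % block) == (block - 1)  # differences spanning a boundary
    return d[:, on_boundary].mean() / (d[:, ~on_boundary].mean() + 1e-12)
```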

Punchihewa, Amal — Massey University, New Zealand


Traditional and Scalable Coding Techniques for Video Compression

In recent years, the use of digital video has steadily increased. Since the amount of data needed for uncompressed digital video representation is very high, lossy source coding techniques are usually employed in digital video systems to compress that information and make it more suitable for storage and transmission. Source coding algorithms for video compression can be grouped into two broad classes: traditional and scalable techniques. The goal of traditional video coders is to maximize the compression efficiency achieved for a given amount of compressed data. The goal of scalable video coding is instead to give a scalable representation of the source, such that subsets of the representation optimally describe the same video source at reduced temporal, spatial and/or quality resolution. This thesis is focused on the ...

Cappellari, Lorenzo — University of Padova


Iterative Joint Source-Channel Coding Techniques for Single and Multiterminal Sources in Communication Networks

In a communication system it is undoubtedly of great interest to compress the information generated by the data sources to its most elementary representation, so that the amount of power necessary for reliable communications can be reduced. It is often the case that the redundancy exhibited by a wide variety of information sources can be modelled by taking into account the probabilistic dependence among consecutive source symbols rather than the probability distribution of a single symbol. Such sources are commonly referred to as single or multiterminal sources "with memory", where the memory, in the multiterminal case, is the temporal correlation among the consecutive symbol vectors generated by the multiterminal source. It is well known that, when the source has memory, the average amount of information per source symbol is given by the entropy rate, which is lower than its entropy ...
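
A short worked example of this entropy vs. entropy-rate gap, for a symmetric binary Markov source (an illustrative choice, not a model from the thesis):

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    return -p*np.log2(p) - (1-p)*np.log2(1-p)

p = 0.1   # probability that the next symbol differs from the current one
# The stationary distribution is uniform, so the marginal (per-symbol)
# entropy is H(X) = 1 bit, while the entropy rate is H(X_n | X_{n-1}) = h2(p).
print("marginal entropy : 1.000 bits")
print(f"entropy rate     : {h2(p):.3f} bits")   # 0.469 bits < 1 bit
```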

Del Ser, Javier — University of Navarra (TECNUN)


Analysis of electrophysiological measurements during stress monitoring

Work-related musculoskeletal disorders are a growing problem in today's society. These musculoskeletal disorders are caused by, amongst other factors, repetitive movements and mental stress. Stress is defined as the mismatch between a perceived demand and the perceived capacity to meet this demand. Although stress has a subjective origin, several physiological manifestations (e.g. cardiovascular and muscular) occur during periods of perceived stress. New insights and algorithms for extracting stress-related information are therefore beneficial. To this end, two series of stress experiments were executed in a laboratory environment, where subjects underwent different tasks inducing physical strain, mental stress and a combination of both. In this manuscript, new and modified algorithms are presented that improve the individual analysis of electromyography signals. A first algorithm removes the interference of the electrical activity of the heart on single-channel electromyography measurements. This interference signal is ...
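
For context, a common baseline for this task, not the thesis's algorithm, is a zero-phase high-pass filter, since most ECG energy lies below roughly 30 Hz while surface EMG extends well above it:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_ecg_highpass(emg, fs, cutoff_hz=30.0, order=4):
    """Suppress ECG interference in a 1D EMG signal sampled at fs Hz
    with a zero-phase Butterworth high-pass filter (baseline method)."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype='highpass')
    return filtfilt(b, a, emg)
```

The drawback of this baseline, and a motivation for more refined algorithms, is that it also discards the low-frequency EMG content below the cutoff.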

Taelman, Joachim — KU Leuven


Dynamic Scheme Selection in Image Coding

This thesis deals with the coding of images using multiple coding schemes and their dynamic selection. In our society of information highways, electronic communication takes a bigger place in our lives every day, and the number of transmitted images is also increasing. Research on image compression therefore remains an active area. The current trend, however, is to add functionalities to the compression scheme, such as progressiveness for more comfortable browsing of websites or databases. Classical image coding schemes have a rigid structure: they usually process an image as a whole and treat the pixels as a simple signal with no particular characteristics. Second-generation schemes use the concept of objects in an image and introduce a model of the human visual system into the design of the coding scheme. Dynamic coding schemes, as their name tells us, make ...
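
The core idea of dynamic selection can be sketched as follows; the interface below is hypothetical, not the thesis's design. Each region is encoded with every available scheme and the one with the lowest Lagrangian rate-distortion cost is kept.

```python
def select_scheme(block, coders, lam):
    """Pick the best coder for one image region.

    coders: list of objects exposing encode(block) -> (bits, distortion)
            (a placeholder interface assumed for this sketch).
    lam:    Lagrange multiplier trading rate against distortion.
    """
    best_cost, best = float('inf'), None
    for coder in coders:
        bits, distortion = coder.encode(block)
        cost = distortion + lam * bits   # Lagrangian rate-distortion cost
        if cost < best_cost:
            best_cost, best = cost, coder
    return best
```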

Fleury, Pascal — Swiss Federal Institute of Technology


Efficient representation, generation and compression of digital holograms

Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display becomes indistinguishable from that of a real one. High-quality holograms need large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic video, the data rates needed for transmitting and encoding raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
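
Back-of-the-envelope arithmetic makes these data volumes concrete; the display parameters below are assumptions for illustration, not figures from the thesis.

```python
pixels  = 32_768 * 32_768   # ≈ 1.07 gigapixels per frame (assumed)
bits_px = 8                 # bit depth per pixel (assumed)
fps     = 30                # video frame rate (assumed)

rate_gbit_s = pixels * bits_px * fps / 1e9
print(f"raw data rate ≈ {rate_gbit_s:,.0f} Gbit/s")   # ≈ 258 Gbit/s, uncompressed
```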

Blinder, David — Vrije Universiteit Brussel


Distributed Video Coding for Wireless Lightweight Multimedia Applications

In the modern wireless age, lightweight multimedia technology stimulates attractive commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video compression systems meeting such key requirements as very low encoding complexity, transmission error robustness and scalability is no straightforward task. The answer can be found in fundamental information-theoretic results, according to which efficient compression can be achieved by leveraging knowledge of the source statistics at the decoder only, giving rise to distributed, also known as Wyner-Ziv, video coding. This dissertation engineers efficient lightweight Wyner-Ziv video coding schemes, with emphasis on several design aspects and applications. The first contribution of this dissertation focuses on the design of effective side information generation techniques that boost the compression capabilities of Wyner-Ziv video coding systems. To this end, overlapped block motion estimation ...
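
A simplified sketch of decoder-side side information generation, assuming the frame to predict lies between two decoded key frames; the thesis's overlapped block motion estimation is considerably more elaborate than this plain block-matching average, and for brevity the sketch places each interpolated block at its position in the later key frame rather than halfway along the motion trajectory.

```python
import numpy as np

def side_information(prev_key, next_key, block=8, search=4):
    """Estimate the in-between frame: for each block of next_key, find the
    best SAD match in prev_key, then average along the matched pair."""
    h, w = next_key.shape
    si = np.zeros((h, w))
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = next_key[by:by+block, bx:bx+block].astype(float)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):      # full search window
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        ref = prev_key[y:y+block, x:x+block].astype(float)
                        sad = np.abs(cur - ref).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            dy, dx = best
            ref = prev_key[by+dy:by+dy+block, bx+dx:bx+dx+block].astype(float)
            si[by:by+block, bx:bx+block] = (cur + ref) / 2
    return si
```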

Deligiannis, Nikos — Vrije Universiteit Brussel


Low Complexity Image Recognition Algorithms for Handheld Devices

Content-Based Image Retrieval (CBIR) has gained a lot of interest over the last two decades. The need to search and retrieve images from databases, based on information ("features") extracted from the image itself, is becoming increasingly important. CBIR can be useful for handheld image recognition devices, in which the image to be recognized is acquired with a camera and thus has no additional metadata associated with it. However, most CBIR systems require heavy computation, preventing their use in handheld devices. In this PhD work, we have developed low-complexity algorithms for content-based image retrieval of camera-acquired images on handheld devices. Two novel algorithms, 'Color Density Circular Crop' (CDCC) and 'DCT-Phase Match' (DCTPM), perform image retrieval, along with a two-stage image retrieval algorithm that combines CDCC and DCTPM to achieve the low complexity required in handheld devices ...
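
The two-stage cascade idea can be sketched generically as follows; cheap_score and precise_score are placeholder callables for this sketch, not implementations of CDCC or DCTPM.

```python
def two_stage_retrieval(query, database, cheap_score, precise_score, keep=50):
    """Generic two-stage retrieval cascade (lower scores = better matches).

    Stage 1 ranks the whole database with a low-cost score and keeps a
    shortlist; stage 2 re-ranks only the shortlist with the accurate,
    costlier score. This is how a cheap filter and a precise matcher can
    be combined to keep overall complexity low."""
    shortlist = sorted(database, key=lambda img: cheap_score(query, img))[:keep]
    return min(shortlist, key=lambda img: precise_score(query, img))
```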

Ayyalasomayajula, Pradyumna — EPFL


Robust Estimation and Model Order Selection for Signal Processing

In this thesis, advanced robust estimation methodologies for signal processing are developed and analyzed. The developed methodologies solve problems concerning multi-sensor data, robust model selection, and robustness for dependent data. The work has been applied to solve practical signal processing problems in different areas of biomedical and array signal processing. In particular, for univariate independent data, a robust criterion is presented to select the model order, with an application to corneal-height data modeling. The proposed criterion overcomes some limitations of existing robust criteria. For real-world data, it selects the radial model order of the Zernike polynomial of the corneal topography map in accordance with clinical expectations, even if the measurement conditions for videokeratoscopy, the state-of-the-art method for collecting corneal-height data, are poor. For multi-sensor data, robust model order selection criteria are proposed and applied ...
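
A heavily hedged sketch of the flavour of such a criterion: a BIC-like cost in which the residual scale is estimated robustly via the median absolute deviation instead of the outlier-sensitive sample variance. This is illustrative only, not the criterion proposed in the thesis, which also uses robust fits rather than the plain least squares shown here.

```python
import numpy as np

def robust_order_selection(y, design_matrices):
    """Pick a model order via a BIC-like cost with a robust scale estimate.

    design_matrices[k]: regression matrix for candidate order k."""
    n = y.size
    costs = []
    for X in design_matrices:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # plain LS fit (sketch only)
        resid = y - X @ beta
        # MAD-based scale estimate, consistent for Gaussian noise:
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        costs.append(n * np.log(scale**2 + 1e-12) + X.shape[1] * np.log(n))
    return int(np.argmin(costs))
```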

Muma, Michael — Technische Universität Darmstadt


Steganoflage: A New Image Steganography Algorithm

Steganography is the science of communicating secret data in an appropriate multimedia carrier, e.g., image, audio and video files. It operates under the assumption that if the feature is visible, the point of attack is evident, so the goal is always to conceal the very existence of the embedded data. It does not replace cryptography but rather boosts security through obscurity. Steganography has various useful applications; however, like any other science, it can be used for ill intent. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, by the increase in security awareness (e.g., among individuals, groups, agencies and governments) and by intellectual pursuit. Steganography's ultimate objectives, which are undetectability, robustness, resistance to various image processing methods and compression, and capacity of the hidden data, are the main factors ...
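
For context, the textbook least-significant-bit baseline is sketched below; Steganoflage itself is a different and more robust algorithm, and this snippet is not part of it.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit sequence in the least significant bits of a uint8 image."""
    flat = cover.flatten().copy()
    assert len(bits) <= flat.size, "payload exceeds cover capacity"
    # Clear each carrier pixel's LSB, then OR in one payload bit.
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden by embed_lsb."""
    return stego.flatten()[:n_bits] & 1
```

Plain LSB embedding is fragile, as any recompression destroys the payload, which is exactly the kind of weakness the robustness objective above targets.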

Cheddad, Abbas — University of Ulster
