On-board Processing for an Infrared Observatory (2005)
Synthetic test patterns and compression artefact distortion metrics for image codecs
This thesis presents a test methodology framework for assessing spatial-domain compression artefacts produced by image codecs and intra-frame video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the artefact's point of origin in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that arise from image and intra-frame video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines the blockiness, blur and ringing artefacts; these definitions have been augmented with quantitative definitions in conjunction with the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised using a portfolio of test patterns. The test patterns are designed to highlight the artefact ...
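The idea of pairing a synthetic test pattern with a quantitative artefact definition can be sketched as follows. This is an illustrative blockiness metric (gradient energy at 8-pixel block boundaries relative to elsewhere) applied to an assumed ramp pattern, not the thesis's exact definitions:

```python
import numpy as np

def blockiness(img, block=8):
    # Ratio of mean horizontal gradient at block boundaries to the mean
    # gradient elsewhere; an assumed metric for illustration only.
    d = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.arange(d.shape[1])
    boundary = (cols % block) == block - 1
    return d[:, boundary].mean() / (d[:, ~boundary].mean() + 1e-9)

# Synthetic test pattern: a smooth horizontal ramp, and the same ramp with
# every 8x8 block replaced by its mean, mimicking severe block-DCT coding.
ramp = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
m = ramp.reshape(8, 8, 8, 8).mean(axis=(1, 3), keepdims=True)
blocky = np.broadcast_to(m, (8, 8, 8, 8)).reshape(64, 64)

print(blockiness(ramp))    # ~1: no preferential energy at block edges
print(blockiness(blocky))  # >> 1: strong blocking structure
```

A codec under test would sit between the pattern generator and the metric, with the score tracked as a function of bit rate.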
Punchihewa, Amal — Massey University, New Zealand
Medical and holographic imaging modalities produce large datasets that require efficient compression mechanisms for storage and transmission. This PhD dissertation proposes state-of-the-art technology extensions for JPEG coding standards to improve their performance in the aforementioned application domains. Modern hospitals rely heavily on volumetric images, such as those produced by CT and MRI scanners. In fact, the completely digitized medical workflow, improved imaging scanner technologies and the importance of volumetric image data sets have led to an exponentially increasing amount of data, raising the need for more efficient compression techniques with support for progressive quality and resolution scalability. For this type of imagery, a volumetric extension of the JPEG 2000 standard was created, called JP3D. In addition, improvements to JP3D, namely alternative wavelet filters, directional wavelets and an intra-band prediction mode, were proposed and their applicability was evaluated. Holographic imaging, ...
Bruylants, Tim — Vrije Universiteit Brussel
Integration of human color vision models into high quality image compression
Strong academic and commercial interest in image compression has resulted in a number of sophisticated compression techniques. Some of these techniques have evolved into international standards such as JPEG. However, the widespread success of JPEG has slowed the rate of innovation in such standards. Even the most recent techniques, such as those proposed in the JPEG2000 standard, do not show significantly improved compression performance; rather, they increase the bitstream functionality. Nevertheless, the manifold of multimedia applications demands further improvements in compression quality. The problem of stagnating compression quality can be overcome by exploiting the limitations of the human visual system (HVS) for compression purposes. To do so, commonly used distortion metrics such as mean-square error (MSE) are replaced by an HVS-model-based quality metric. Thus, the "visual" quality is optimized. Due to the tremendous complexity of the physiological structures involved in ...
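The replacement of MSE by an HVS-based metric can be illustrated with a toy example: two distortions with identical MSE receive different scores once the error spectrum is weighted by a band-pass contrast sensitivity curve. The curve shape and scaling below are assumptions for illustration, not the calibrated HVS model of the thesis:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def csf_weighted_error(a, b, peak=4.0):
    # Weight the spectrum of the error image by an assumed band-pass
    # contrast sensitivity curve (low weight at DC and very high freq.).
    err = np.fft.fft2(a - b)
    fy = np.fft.fftfreq(a.shape[0])[:, None]
    fx = np.fft.fftfreq(a.shape[1])[None, :]
    f = np.hypot(fx, fy) * a.shape[1]      # radial frequency, cycles/image
    csf = f * np.exp(-f / peak)
    return float(np.mean(np.abs(err * csf) ** 2))

x = np.arange(64)
img = np.random.default_rng(0).random((64, 64))
low = img + 0.1 * np.sin(2 * np.pi * 1 * x / 64)     # low-frequency distortion
high = img + 0.1 * np.sin(2 * np.pi * 30 * x / 64)   # high-frequency distortion

print(mse(img, low), mse(img, high))       # identical MSE
print(csf_weighted_error(img, low) > csf_weighted_error(img, high))
```

Under this assumed curve the low-frequency error is judged more visible, even though MSE cannot distinguish the two cases.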
Nadenau, Marcus J. — Swiss Federal Institute of Technology
Communications for CubeSat Networks and Fractionalised Spacecraft
The use of low-cost CubeSats in the context of satellite formation flying appears favourable due to their small size, relatively low launch cost, short development cycle and utilisation of commercial off-the-shelf components. However, the task of managing complex formations using a large number of satellites in Earth orbit is not a trivial one, and is further exacerbated by the low-power and processing constraints of CubeSats. With this in mind, a Field Programmable Gate Array (FPGA) based system has been developed to provide next-generation on-board computing capability. The features and functionality provided by this on-board computer, as well as the steps taken to ensure reliability, including design processes and mitigation techniques, are presented in this work and compared to state-of-the-art technology. Coupling reliable formation flying capabilities with the possibility of producing complex patterns using spacecraft will ...
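One mitigation technique commonly used for FPGA reliability in orbit is triple modular redundancy (TMR), where three copies of a module run in parallel and a voter masks single-event upsets. The abstract does not name the specific techniques used, so the software model below is an assumed, illustrative example:

```python
from collections import Counter

def tmr_vote(a, b, c):
    # Majority vote over three redundant module outputs; a single
    # corrupted output is masked, triple disagreement is flagged.
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise ValueError("no majority: triple disagreement")
    return value

print(tmr_vote(7, 7, 7))  # 7: all modules agree
print(tmr_vote(7, 9, 7))  # 7: an upset in one module is outvoted
```

In hardware the same vote is implemented combinationally per output bit; the Python model only shows the masking logic.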
Karagiannakis, Philippos — University of Strathclyde
Complexity related aspects of image compression
Digital signal processing (DSP), and, in particular, image processing, has been studied for many years. However, only recent advances in computing technology have made it possible to use DSP in day-to-day applications. Images are now commonly used in many applications. The increasingly ubiquitous use of images raises new challenges. Users expect images to be transmitted in a minimum of time and to take up as little storage space as possible. These requirements call for efficient image compression algorithms. Users want the compression and decompression process to be very fast, so as not to have to wait for an image to become usable. Therefore, the complexities of compression algorithms need to be studied. In this thesis the term complexity is linked to the execution time of an algorithm. That is, the lower the complexity of an algorithm, ...
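The notion of complexity as execution time can be made concrete by timing two algorithms that solve the same task. The example below (membership lookup in sorted data, O(n) scan vs O(log n) binary search) is a generic illustration, not taken from the thesis:

```python
import bisect
import timeit

n = 100_000
data = list(range(n))   # sorted data
target = n - 1          # worst case for a linear scan

linear = timeit.timeit(lambda: target in data, number=100)
binary = timeit.timeit(lambda: data[bisect.bisect_left(data, target)] == target,
                       number=100)
print(f"linear scan: {linear:.4f}s  binary search: {binary:.4f}s")
```

The asymptotically cheaper algorithm finishes orders of magnitude faster, which is exactly the sense in which lower complexity benefits the user.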
Reichel, Julien — Swiss Federal Institute of Technology
The recent announcement by the LIGO and Virgo Collaborations of the direct detection of gravitational waves started the era of gravitational wave astrophysics. Up to now there have been five confirmed detections (GW150914, GW151226, GW170104, GW170814 and GW170817). Each of the GW events detected so far has shed light on multiple aspects of gravity. The first four events were due to the coalescence of a binary black hole system. August 17th, 2017 marked the beginning of so-called multi-messenger astronomy: the binary neutron star merger GW170817 was observed almost simultaneously by the LIGO and Virgo interferometers and by several telescopes in space and on Earth, which detected the electromagnetic counterpart of this event (first as a short gamma-ray burst, GRB 170817A, and then in the visible, infra-red and X-ray bands). These last two years of great scientific discoveries would not have been ...
Piccinni, Ornella Juliana — Sapienza University, INFN Roma1
Efficient representation, generation and compression of digital holograms
Digital holography is a discipline of science that measures or reconstructs the wavefield of light by means of interference. The wavefield encodes three-dimensional information, which has many applications, such as interferometry, microscopy, non-destructive testing and data storage. Moreover, digital holography is emerging as a display technology. Holograms can recreate the wavefield of a 3D object, thereby reproducing all depth cues for all viewpoints, unlike current stereoscopic 3D displays. At high quality, the appearance of an object on a holographic display system becomes indistinguishable from that of a real one. High-quality holograms need large volumes of data to be represented, approaching resolutions of billions of pixels. For holographic videos, the data rates needed for transmitting and encoding the raw holograms quickly become unfeasible with currently available hardware. Efficient generation and coding of holograms will be of utmost importance for future holographic displays. ...
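A back-of-the-envelope calculation shows why raw holographic video overwhelms current hardware. The resolution, bit depth and frame rate below are illustrative assumptions consistent with the "billions of pixels" figure in the abstract, not numbers from the thesis:

```python
# Raw data rate of uncompressed holographic video (assumed parameters).
pixels = 4e9          # a 4-gigapixel hologram
bits_per_pixel = 8    # 8-bit amplitude-only representation
fps = 30              # video frame rate

raw_bps = pixels * bits_per_pixel * fps
print(f"{raw_bps / 1e12:.2f} Tbit/s raw")  # 0.96 Tbit/s
```

Even under these modest assumptions the raw rate approaches a terabit per second, far beyond any consumer link, hence the need for efficient hologram coding.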
Blinder, David — Vrije Universiteit Brussel
Robust Watermarking Techniques for Scalable Coded Image and Video
In scalable image/video coding, high-resolution content is encoded to the highest visual quality, and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content-protection data such as watermarks, and are considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and the improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. The spread-spectrum domain, particularly wavelet-based image watermarking schemes, often provides better robustness to compression attacks due to its multi-resolution decomposition, and was hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
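The spread-spectrum principle behind such schemes can be sketched in a few lines: a key-seeded pseudo-random ±1 sequence is added to the host, and a correlation detector responds only to the embedding key. For simplicity the sketch below embeds in the pixel domain with assumed strength and threshold values; the thesis works in the wavelet domain:

```python
import numpy as np

def pn_sequence(key, shape):
    # Key-seeded pseudo-random +/-1 spreading sequence.
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(img, key, strength=10.0):
    # Additive spread-spectrum embedding (pixel-domain simplification).
    return img + strength * pn_sequence(key, img.shape)

def detect(img, key):
    # Correlation detector: response ~= strength only for the right key.
    w = pn_sequence(key, img.shape)
    return float(np.mean((img - img.mean()) * w))

img = np.random.default_rng(0).random((64, 64)) * 255
marked = embed(img, key=1234)
print(detect(marked, key=1234))  # large: watermark present
print(detect(marked, key=9999))  # near zero: wrong key
```

Robustness to compression comes from spreading the mark over many coefficients, so that quantising any subset only mildly attenuates the correlation peak.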
Bhowmik, Deepayan — University of Sheffield
Fire Detection Algorithms Using Multimodal Signal and Image Analysis
Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, volatile organic compound (VOC) plumes in infra-red (IR) videos, trees in the wind, sea and ocean waves, etc. Researchers have extensively studied 2-D textures and related problems in the fields of image processing and computer vision. On the other hand, there is very little research on dynamic texture detection in video. In this dissertation, signal and image processing methods developed for the detection of a specific set of dynamic textures are presented. Signal and image processing methods are developed for the detection of flames and smoke in open and large spaces, with a range of up to 30 m to the camera, in visible-range and in infra-red (IR) video. Smoke is semi-transparent at the early stages of fire. Edges present in image frames with smoke start losing their sharpness ...
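The loss of edge sharpness behind semi-transparent smoke can be quantified with a simple gradient-based measure. Modelling smoke as local averaging is an assumption made here for illustration; the dissertation's actual features are not reproduced:

```python
import numpy as np

def box_blur(img, k=5):
    # Local averaging as a crude stand-in for the smoothing effect
    # of semi-transparent smoke (a modelling assumption).
    h = k // 2
    pad = np.pad(img, h, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k ** 2

def edge_sharpness(img):
    # Peak gradient magnitude: high for crisp edges, low for soft ones.
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).max())

frame = np.zeros((64, 64)); frame[:, 32:] = 255.0   # sharp vertical edge
smoky = box_blur(frame)                              # same scene "behind smoke"
print(edge_sharpness(frame), edge_sharpness(smoky))  # sharpness drops
```

A detector can monitor such sharpness statistics over time: a sustained drop along existing edges is one cue that smoke has entered the scene.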
Toreyin, Behcet Ugur — Bilkent University
Fish-Eye Observing with Phased Array Radio Telescopes
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view that may, in principle, span a full hemisphere. This makes calibration and imaging very challenging tasks, due to complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
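The core of such model-based calibration is least squares fitting of a parametric data model. The toy linearised model below (a generic `y = A θ + noise` fit) only illustrates the estimation principle; the thesis's actual instrument and source models are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linearised model: data = A @ theta + noise, where theta stacks
# instrument and source parameters and A is the known model matrix.
A = rng.normal(size=(100, 5))
theta_true = np.array([1.0, 0.5, -0.3, 2.0, 0.1])
y = A @ theta_true + 0.01 * rng.normal(size=100)

# Least squares estimate of the parameters from the noisy data.
theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(theta_hat)  # close to theta_true
```

With many more measurements than parameters, the estimate is close to the truth, and the residual covariance feeds directly into the kind of error analysis the thesis performs.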
Wijnholds, Stefan J. — Delft University of Technology
Active and Passive Approaches for Image Authentication
The generation and manipulation of digital images is made simple by widely available digital cameras and image processing software. As a consequence, we can no longer take the authenticity of a digital image for granted. This thesis investigates the problem of protecting the trustworthiness of digital images. Image authentication aims to verify the authenticity of a digital image. General solutions for image authentication are based on digital signatures or watermarking. Many studies on image authentication have been conducted, but thus far no solution has been robust enough to the transmission errors that occur when images are sent over lossy channels. On the other hand, digital image forensics is an emerging topic for passively assessing image authenticity, which works in the absence of any digital watermark or signature. This thesis focuses on how to assess the authenticity of images when ...
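A minimal sketch of the signature-based (active) approach, using an HMAC over the image bytes as a simplified stand-in for a public-key digital signature (key name and payload are illustrative):

```python
import hashlib
import hmac

def sign_image(image_bytes, key):
    # Authentication tag over the exact image bytes.
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, key, tag):
    return hmac.compare_digest(sign_image(image_bytes, key), tag)

key = b"shared-secret"
img = bytes(range(256)) * 16            # toy "image" payload
tag = sign_image(img, key)
print(verify_image(img, key, tag))                  # True: authentic
print(verify_image(img[:-1] + b"\x00", key, tag))   # False: tampering detected
```

Note that flipping a single byte breaks verification, which is precisely the fragility the abstract points to: a bit error introduced by a lossy channel is indistinguishable from tampering under such exact-hash schemes.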
Ye, Shuiming — National University of Singapore
Radio Frequency Interference spatial processing for modern radio telescopes
Radio astronomy studies cosmic sources through their radio emissions. As passive users, astronomers have to deal with an increasingly corrupted radio spectrum. The research presented here focuses on man-made Radio Frequency Interference (RFI) and on how astronomical observations can be performed in non-protected frequency bands. Traditional approaches consist of monitoring radio telescope output data through statistical parameters; once detected, corrupted data are removed before further processing. Besides other technical advantages over single-dish radio telescopes, antenna arrays provide spatial information about astronomical observations. The spatial diversity between cosmic sources of interest (CSOI) and RFI can be exploited to develop spatial RFI processing. After formulating a multidimensional radio astronomical data model, an interference subspace subtraction technique is introduced. This approach consists of subtracting RFI contributions from antenna-array radio telescope data. Orthogonal projection applied to astronomical observation vector spaces has already been ...
Hellbourg, Gregory — CNRS, ASTRON, Laboratoire PRISME
A statistical approach to motion estimation
Digital video technology has been characterized by steady growth over the last decade. New applications such as video e-mail, third-generation mobile phone video communications, videoconferencing and video streaming on the web continuously push for further evolution of research in digital video coding. To be sent over the internet or wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple form of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy is what completely changes the scenario between the compression of still pictures and that of image sequences. It also represents the key to very high performance in image sequence coding when compared ...
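The standard way to exploit temporal redundancy is motion estimation: for each block of the current frame, find the best-matching block in the reference frame and transmit only the displacement plus a small residual. A minimal exhaustive block-matching sketch (sum of absolute differences, assumed search range of ±4 pixels):

```python
import numpy as np

def best_match(ref, block, top, left, search=4):
    # Exhaustive block matching: displacement minimising the SAD.
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, (2, 3), axis=(0, 1))     # scene shifts by (2, 3)
mv = best_match(frame1, frame2[16:24, 16:24], 16, 16)
print(mv)  # (-2, -3): the block came from 2 rows up, 3 columns left
```

Encoding the motion vector and residual instead of the raw block is what gives inter-frame coding its large advantage over still-picture compression.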
Moschetti, Fulvio — Swiss Federal Institute of Technology
Distributed Adaptive Spatial Filtering in Resource-constrained Sensor Networks
Wireless sensor networks consist of a collection of battery-powered sensors able to gather, process and send data. They are typically used to monitor various phenomena in a plethora of fields, from environmental studies to smart logistics. Their wireless connectivity and relatively small size allow them to be deployed practically anywhere, even underwater or embedded in everyday clothing, and to capture data over a large area for extended periods of time. Their usefulness is therefore tied to their ability to work autonomously, with as little human intervention as possible. This functional requirement directly translates into two design constraints: (i) bandwidth and on-board compute must be used sparingly, in order to extend battery life as much as possible, and (ii) the system must be resilient to node failures and a changing environment. Due to their limited computing capabilities, data processing is usually performed by ...
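One classic way to reconcile these constraints is fully distributed in-network processing, e.g. randomized gossip averaging: each step, one pair of nodes exchanges a single scalar and averages, yet every node converges to the global mean with no fusion centre. This is a generic illustration of the distributed-processing principle, not the thesis's specific spatial filtering algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten nodes, each holding one local sensor reading (e.g. a temperature).
readings = rng.normal(loc=20.0, scale=2.0, size=10)
x = readings.copy()

# Randomized pairwise gossip: each iteration costs one scalar exchange
# between two nodes, so bandwidth per node per step is minimal.
for _ in range(2000):
    i, j = rng.choice(10, size=2, replace=False)
    x[i] = x[j] = (x[i] + x[j]) / 2

print(x.round(3))        # every node now holds (almost) the same value
print(readings.mean())   # ... namely the global mean
```

Because no node is special, the scheme also degrades gracefully when a node fails: the survivors simply keep gossiping among themselves.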
Hovine, Charles — KU Leuven
Low Complexity Image Recognition Algorithms for Handheld Devices
Content Based Image Retrieval (CBIR) has gained a lot of interest over the last two decades. The need to search and retrieve images from databases, based on information ("features") extracted from the image itself, is becoming increasingly important. CBIR can be useful for handheld image recognition devices in which the image to be recognized is acquired with a camera, so that no additional metadata is associated with it. However, most CBIR systems are computationally demanding, preventing their use in handheld devices. In this PhD work, we have developed low-complexity algorithms for content based image retrieval of camera-acquired images on handheld devices. Two novel algorithms, 'Color Density Circular Crop' (CDCC) and 'DCT-Phase Match' (DCTPM), are presented to perform image retrieval, along with a two-stage image retrieval algorithm that combines CDCC and DCTPM to achieve the low complexity required in handheld devices ...
Ayyalasomayajula, Pradyumna — EPFL