Dynamic Scheme Selection in Image Coding (1999)
On Bayesian Methods for Black-Box Optimization: Efficiency, Adaptation and Reliability
Recent advances in many fields, ranging from engineering to the natural sciences, require increasingly complicated optimization tasks in experiment design, for which the target objectives are generally black-box functions that are expensive to evaluate. In a common formulation of this problem, a designer is expected to solve the black-box optimization task by sequentially attempting candidate solutions and receiving feedback from the system. This thesis considers Bayesian optimization (BO) as the black-box optimization framework, and investigates enhancements to BO in terms of efficiency, adaptation and reliability. Generally, BO consists of a surrogate model that provides probabilistic inference and an acquisition function that leverages this inference to select the next candidate solution. The Gaussian process (GP) is a prominent non-parametric surrogate model, and the quality of its inference is a critical factor in the optimality performance ...
Zhang, Yunchuan — King's College London
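The surrogate-plus-acquisition loop described above can be sketched in a few lines of numpy; the RBF kernel, unit prior variance, expected-improvement acquisition and the toy objective below are illustrative assumptions, not the thesis's actual models.

```python
import math
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean and std at the query points (zero prior mean,
    unit prior variance -- simplifying assumptions for the sketch)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mean, std, best):
    """EI acquisition for minimization: E[max(best - f, 0)]."""
    z = (best - mean) / std
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mean) * cdf + std * pdf

# One BO iteration on a toy 1-D objective (minimization).
f = lambda x: np.sin(3.0 * x) + x ** 2
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = f(x_obs)
grid = np.linspace(0.0, 1.0, 200)
mu, sd = gp_posterior(x_obs, y_obs, grid)
x_next = grid[np.argmax(expected_improvement(mu, sd, y_obs.min()))]
```

The next evaluation point `x_next` trades off low posterior mean (exploitation) against high posterior uncertainty (exploration), which is the core of the acquisition-function view of BO.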
Modeling Perceived Quality for Imaging Applications
People of all generations are making more and more use of digital imaging systems in their daily lives. The image content rendered by these digital imaging systems differs largely in perceived quality depending on the system and its applications. To optimize the experience of viewers of this content, understanding and modeling perceived image quality is essential. Research on modeling image quality in a full-reference framework --- where the original content can be used as a reference --- is well established in the literature. In many current applications, however, the perceived image quality needs to be modeled in a no-reference framework in real time. As a consequence, the model needs to quantitatively predict the perceived quality of a degraded image without being able to compare it to its original version, and has to achieve this with limited computational complexity in order ...
Liu, Hantao — Delft University of Technology
Vision models and quality metrics for image processing applications
Optimizing the performance of digital imaging systems with respect to the capture, display, storage and transmission of visual information represents one of the biggest challenges in the field of image and video processing. Taking into account the way humans perceive visual information can be greatly beneficial for this task. To achieve this, it is necessary to understand and model the human visual system, which is also the principal goal of this thesis. Computational models for different aspects of the visual system are developed, which can be used in a wide variety of image and video processing applications. The proposed models and metrics are shown to be consistent with human perception. The focus of this work is visual quality assessment. A perceptual distortion metric (PDM) for the evaluation of video quality is presented. It is based on a model of the ...
Winkler, Stefan — Swiss Federal Institute of Technology
A statistical approach to motion estimation
Digital video technology has been characterized by steady growth over the last decade. New applications like video e-mail, third-generation mobile phone video communications, videoconferencing and video streaming on the web continuously push for further evolution of research in digital video coding. In order to be sent over the Internet or wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple form of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy completely changes the scenario between the compression of still pictures and that of image sequences. It also represents the key to very high performance in image sequence coding when compared ...
Moschetti, Fulvio — Swiss Federal Institute of Technology
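Exploiting temporal redundancy in practice usually means motion estimation. A minimal full-search block-matching sketch (SAD criterion; the block size, search range and shifted test frame are illustrative, not the statistical estimator developed in the thesis):

```python
import numpy as np

def motion_vector(ref, cur, by, bx, bsize=8, search=4):
    """Full search: find the displacement (dy, dx) such that the block of
    `cur` at (by, bx) best matches `ref`, by sum of absolute differences."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Content shifted by (1, 2) pixels between frames is recovered exactly.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32)).astype(float)
cur = np.roll(np.roll(ref, -1, axis=0), -2, axis=1)
mv, sad = motion_vector(ref, cur, 8, 8)
```

The encoder then transmits the motion vector plus the (here zero) prediction residual instead of the raw block, which is where the compression gain over still-picture coding comes from.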
Traditional and Scalable Coding Techniques for Video Compression
In recent years, the usage of digital video has steadily been increasing. Since the amount of data for uncompressed digital video representation is very high, lossy source coding techniques are usually employed in digital video systems to compress that information and make it more suitable for storage and transmission. The source coding algorithms for video compression can be grouped into two broad classes: traditional and scalable techniques. The goal of traditional video coders is to maximize the compression efficiency for a given amount of compressed data. The goal of scalable video coding is instead to give a scalable representation of the source, such that subsets of it can describe, in an optimal way, the same video source at reduced temporal, spatial and/or quality resolution. This thesis is focused on the ...
Cappellari, Lorenzo — University of Padova
PRIORITIZED 3D SCENE RECONSTRUCTION AND RATE-DISTORTION
In this dissertation, a novel scheme performing 3D reconstruction of a scene from a 2D video sequence is presented. To this aim, first, the trajectories of the salient features in the scene are determined as a sequence of displacements via the Kanade-Lucas-Tomasi tracker and a Kalman filter. Then, a tentative camera trajectory with respect to a metric reference reconstruction is estimated. All frame pairs are ordered with respect to their amenability to 3D reconstruction by a metric that utilizes the baseline distances and the number of tracked correspondences between the frames. The ordered frame pairs are processed via a sequential structure-from-motion algorithm to estimate the sparse structure and camera matrices. The metric and the associated reconstruction algorithm are shown via experiments to outperform their counterparts in the literature. Finally, a mesh-based, rate-distortion efficient representation is constructed through a novel procedure ...
Imre, Evren — Middle East Technical University, Department of Electrical and Electronics Engineering
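The idea of ranking frame pairs by baseline and correspondence count can be illustrated with a toy priority score; the product form, the weight and the numbers below are assumptions for illustration, not the thesis's metric.

```python
def pair_score(baseline, n_corr, baseline_weight=1.0):
    """Illustrative priority: prefer wide baselines (better depth accuracy)
    with many tracked correspondences (better-conditioned estimation).
    The product form is an assumption, not the thesis's actual metric."""
    return baseline_weight * baseline * n_corr

# Hypothetical frame pairs: (baseline length, number of correspondences).
pairs = {(0, 5): (0.8, 120), (0, 1): (0.1, 400), (2, 9): (1.5, 60)}

# Process the most reconstruction-friendly pairs first.
order = sorted(pairs, key=lambda p: pair_score(*pairs[p]), reverse=True)
```

A sequential structure-from-motion pipeline would then initialize from the top-ranked pair and incrementally register the rest.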
Integration of human color vision models into high quality image compression
Strong academic and commercial interest in image compression has resulted in a number of sophisticated compression techniques. Some of these techniques have evolved into international standards such as JPEG. However, the widespread success of JPEG has slowed the rate of innovation in such standards. Even the most recent techniques, such as those proposed in the JPEG2000 standard, do not show significantly improved compression performance; rather, they increase the bitstream functionality. Nevertheless, the manifold of multimedia applications demands further improvements in compression quality. The problem of stagnating compression quality can be overcome by exploiting the limitations of the human visual system (HVS) for compression purposes. To do so, commonly used distortion metrics such as mean-square error (MSE) are replaced by an HVS-model-based quality metric. Thus, the "visual" quality is optimized. Due to the tremendous complexity of the physiological structures involved in ...
Nadenau, Marcus J. — Swiss Federal Institute of Technology
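Replacing MSE with an HVS-based measure can be sketched by weighting the error spectrum with a contrast-sensitivity-like curve; the curve shape, the pixel-to-frequency mapping and the normalization below are toy assumptions, not the metric developed in the thesis.

```python
import numpy as np

def csf_weight(f, peak=4.0):
    """Toy contrast-sensitivity curve: band-pass in spatial frequency,
    peaking near `peak` cycles/degree and vanishing at DC. The exact
    shape is an assumption for illustration."""
    return (f / peak) * np.exp(1.0 - f / peak)

def weighted_mse(orig, degr, cycles_per_degree=32.0):
    """MSE with each frequency bin of the error weighted by the CSF, so
    errors the eye is insensitive to count less than plain MSE says."""
    err = np.fft.fft2(degr - orig)
    fy = np.fft.fftfreq(orig.shape[0])[:, None]
    fx = np.fft.fftfreq(orig.shape[1])[None, :]
    f = np.hypot(fy, fx) * cycles_per_degree  # assumed viewing geometry
    w = csf_weight(f)
    # Parseval-style normalization keeps the scale comparable to MSE.
    return np.sum(w * np.abs(err) ** 2) / err.size ** 2
```

With this weighting a uniform brightness offset (a pure DC error) contributes nothing, while structured mid-frequency errors dominate, which is qualitatively what an HVS-based metric should do.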
Error Resilience and Concealment Techniques for High Efficiency Video Coding
This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC offers weak protection against network losses, with a significant impact on video quality. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions in the case of video streams received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the ...
João Filipe Monteiro Carreira — Loughborough University London
Advances in Perceptual Stereo Audio Coding Using Linear Prediction Techniques
A wide range of techniques for coding single-channel speech and audio signals has been developed over the last few decades. In addition to pure redundancy reduction, sophisticated source and receiver models have been considered for reducing the bit-rate. Traditionally, speech and audio coders are based on different principles and thus each of them offers certain advantages. With the advent of high-capacity channels, networks, and storage systems, the bit-rate versus quality compromise will no longer be the major issue; instead, attributes like low delay, scalability, computational complexity, and error concealment in packet-oriented networks are expected to be the major selling factors. Typical audio coders such as MP3 and AAC are based on subband or transform coding techniques that are not easily reconcilable with a low-delay requirement. The reasons for their inherently longer delay are the relatively long band-splitting filters ...
Biswas, Arijit — Technische Universiteit Eindhoven
Radial Basis Function Network Robust Learning Algorithms in Computer Vision Applications
This thesis introduces new learning algorithms for Radial Basis Function (RBF) networks. An RBF network is a feed-forward two-layer neural network used for function approximation or pattern classification. The proposed training algorithms are based on robust statistics. Their theoretical performance has been assessed and compared with that of classical algorithms for training RBF networks. The applications of RBF networks described in this thesis consist of simultaneous moving-object segmentation and optical flow estimation in image sequences, and of 3-D image modeling and segmentation. A Bayesian classifier model is used for the representation of the image sequences and 3-D images. This employs an energy-based description of the probability functions involved. The energy functions are represented by RBF networks whose inputs are various features drawn from the images and whose outputs are objects. The hidden units embed kernel functions. Each kernel ...
Bors, Adrian G. — Aristotle University of Thessaloniki
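For fixed centers, the second-layer weights of an RBF network follow from a linear least-squares solve; this is the classical (non-robust) baseline that robust training algorithms modify, e.g. by reweighting outlier samples. The centers, width and ridge term below are illustrative choices.

```python
import numpy as np

def rbf_design(X, centers, width=1.0):
    """Hidden-layer activations: a Gaussian kernel around each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width=1.0, ridge=1e-8):
    """Output weights by ridge-regularized least squares (normal equations).
    Robust variants would downweight samples with large residuals here."""
    H = rbf_design(X, centers, width)
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)

def predict(X, centers, w, width=1.0):
    return rbf_design(X, centers, width) @ w

# Approximate sin(x) on [0, pi] with 10 fixed, evenly spaced centers.
X = np.linspace(0.0, np.pi, 50)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(0.0, np.pi, 10)[:, None]
w = fit_rbf(X, y, centers, width=0.4)
err = np.max(np.abs(predict(X, centers, w, 0.4) - y))
```

The same two-layer structure, with image features as inputs, is what carries the energy functions described in the abstract.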
Robust and multiresolution video delivery: From H.26x to matching pursuit based technologies
With the joint development of networking and digital coding technologies, multimedia, and more particularly video services, are clearly becoming one of the major consumers of the new information networks. The rapid growth of the Internet and computer industry, however, results in a very heterogeneous and commonly overloaded infrastructure. Video service providers nevertheless have to offer their clients the best possible quality according to their respective capabilities and communication channel status. The Quality of Service is influenced not only by compression artifacts, but also by unavoidable packet losses. Hence, the packet video stream clearly has to fulfill possibly contradictory requirements, namely coding efficiency and robustness to data loss. The first contribution of this thesis is the complete modeling of the video Quality of Service (QoS) in standard, and more particularly MPEG-2, applications. The performance of Forward Error Control (FEC) ...
Frossard, Pascal — Swiss Federal Institute of Technology
ROBUST WATERMARKING TECHNIQUES FOR SCALABLE CODED IMAGE AND VIDEO
In scalable image/video coding, high-resolution content is encoded to the highest visual quality and the bit-streams are adapted to cater to various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame rate scaling, may also affect content protection data, such as watermarks, and are considered a potential watermark attack. In this thesis, robust watermarking techniques for scalable coded image and video are proposed, and improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video, are reported. The spread spectrum domain, particularly wavelet-based image watermarking, often provides better robustness to compression attacks due to its multi-resolution decomposition, and was hence chosen for this work. A comprehensive and comparative analysis of the available wavelet-based watermarking schemes is performed ...
Bhowmik, Deepayan — University of Sheffield
A Multimodal Approach to Audiovisual Text-to-Speech Synthesis
Speech, consisting of an auditory and a visual signal, has always been the most important means of communication between humans. It is well known that an optimal conveyance of the message requires that both the auditory and the visual speech signal can be perceived by the receiver. Nowadays people interact countless times with computer systems in everyday situations. Since the ultimate goal is to make this interaction feel completely natural and familiar, the optimal way to interact with a computer system is by means of speech. Similar to the speech communication between humans, the most appropriate human-machine interaction consists of audiovisual speech signals. In order to allow the computer system to transfer a spoken message towards its users, an audiovisual speech synthesizer is needed to generate novel audiovisual speech signals based on a given text. This dissertation focuses on ...
Mattheyses, Wesley — Vrije Universiteit Brussel
Advanced time-domain methods for nuclear magnetic resonance spectroscopy data analysis
Over the past years magnetic resonance spectroscopy (MRS) has been of significant importance both as a fundamental research technique in different fields, as well as a diagnostic tool in medical environments. With MRS, for example, spectroscopic information, such as the concentrations of chemical substances, can be determined non-invasively. To that end, the signals are first modeled by an appropriate model function and mathematical techniques are subsequently applied to determine the model parameters. In this thesis, signal processing algorithms are developed to quantify in-vivo and ex-vivo MRS signals. These are usually characterized by a poor signal-to-noise ratio, overlapping peaks, deviations from the model function and in some cases the presence of disturbing components (e.g. the residual water in proton spectra). The work presented in this thesis addresses a part of the total effort to provide accurate, efficient and automatic data analysis ...
Vanhamme, Leentje — Katholieke Universiteit Leuven
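A common time-domain model for MRS signals is a sum of exponentially damped complex sinusoids, one per resonance. With the nonlinear parameters (frequencies, dampings) fixed, the amplitudes follow from a single linear least-squares solve, sketched below with illustrative values; this is a generic textbook formulation, not the thesis's algorithms.

```python
import numpy as np

def damped_exponentials(t, freqs, dampings, amps, phases):
    """Time-domain MRS model: sum of exponentially damped complex
    sinusoids, s(t) = sum_k a_k e^{i phi_k} e^{(-d_k + 2 pi i f_k) t}."""
    s = np.zeros_like(t, dtype=complex)
    for f, d, a, p in zip(freqs, dampings, amps, phases):
        s += a * np.exp(1j * p) * np.exp((-d + 2j * np.pi * f) * t)
    return s

# Simulate a noisy two-component signal (illustrative parameters).
t = np.arange(256) * 1e-3
freqs, damps = [40.0, 90.0], [8.0, 15.0]
true = damped_exponentials(t, freqs, damps, [1.0, 0.5], [0.0, 0.0])
rng = np.random.default_rng(1)
noisy = true + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

# Amplitudes (and phases, as complex numbers) are linear in the model,
# so one least-squares solve recovers them despite the noise.
basis = np.stack([np.exp((-d + 2j * np.pi * f) * t)
                  for f, d in zip(freqs, damps)], axis=1)
amps, *_ = np.linalg.lstsq(basis, noisy, rcond=None)
```

The estimated complex amplitudes encode concentration (magnitude) and phase per resonance; the hard part the thesis addresses, estimating the nonlinear parameters under poor SNR, overlap and model deviations, sits on top of this linear step.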
MPEG-II Video Coding for Noisy Channels
This thesis considers the performance of MPEG-II compressed video when transmitted over noisy channels, a subject of relevance to digital terrestrial television, video communication and mobile digital video. Results of bit sensitivity and resynchronisation sensitivity measurements are presented, and techniques are proposed for substantially improving the resilience of MPEG-II to transmission errors without the addition of any extra redundancy into the bitstream. It is errors in variable-length encoded data which are found to cause the greatest artifacts, as errors in these data can cause loss of bitstream synchronisation. The concept of a ‘black box transcoder’ is developed where MPEG-II is losslessly transcoded into a different structure for transmission. Bitstream resynchronisation is achieved using a technique known as error-resilient entropy coding (EREC). The error-resilience of differentially coded information is then improved by replacing the standard 1D-DPCM with a more resilient hierarchical ...
Swan, Robert — University of Cambridge
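The fragility of 1D-DPCM that motivates the hierarchical replacement is easy to demonstrate: one corrupted difference shifts every later reconstructed value. A minimal sketch (illustrative values, not the thesis's coder):

```python
def dpcm_encode(values):
    """1-D DPCM: each value is sent as a difference from its predecessor."""
    prev, out = 0, []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def dpcm_decode(diffs):
    """Reconstruct by accumulating differences from zero."""
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

vals = [10, 12, 11, 15, 14]
diffs = dpcm_encode(vals)

# A single corrupted difference offsets every later value: this is the
# error propagation that a hierarchical prediction structure limits.
corrupted = diffs[:]
corrupted[1] += 3
decoded = dpcm_decode(corrupted)
```

Here `decoded` differs from `vals` at every position from the corrupted one onward, while a hierarchical prediction tree would confine the damage to one subtree.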