Power/Energy Estimation and Optimization for Software-Oriented Embedded Systems

The importance of power reduction in embedded systems has continuously increased in recent years. Recently, reducing the power dissipation and energy consumption of a program has become an optimization goal in its own right, no longer considered a side-effect of traditional performance optimizations, which mainly target program execution time and/or program size. Nowadays, there is an increasing demand for developing power-optimizing compilers for embedded systems. This thesis is a step towards this important goal. In this thesis, we develop functional-level power models and investigate several software optimization techniques for embedded-processor systems. As a specific example, we consider the powerful Texas Instruments C6416T DSP processor. We analyze the power consumption contributions of the different functional units of this DSP. We assess the effect of compiler performance optimizations on energy and power consumption. Moreover, we explore the impact of two special ...
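As a hedged illustration of what a functional-level power model can look like, the sketch below estimates average power as a base cost plus activity-weighted contributions of the functional units. This is a generic example, not the model developed in the thesis; the unit names and coefficients are invented placeholders, not measured C6416T values.

# Generic functional-level power model sketch (illustrative only; the unit
# names and coefficients are hypothetical, not measured C6416T values).

# Per-unit power coefficients in mW per unit of activity (hypothetical).
UNIT_POWER_MW = {
    "fetch": 12.0,
    "multiplier": 18.0,
    "alu": 9.0,
    "load_store": 15.0,
    "l1_cache": 20.0,
}

BASE_POWER_MW = 110.0  # static/idle power (hypothetical)

def estimate_power_mw(activity):
    """Estimate average power from per-unit activity rates in [0, 1]."""
    dynamic = sum(UNIT_POWER_MW[u] * activity.get(u, 0.0) for u in UNIT_POWER_MW)
    return BASE_POWER_MW + dynamic

def estimate_energy_mj(activity, exec_time_s):
    """Energy in millijoules: average power (mW) times execution time (s)."""
    return estimate_power_mw(activity) * exec_time_s

# Example: a loop kernel that keeps the multipliers and load/store unit busy.
profile = {"fetch": 0.9, "multiplier": 0.7, "alu": 0.4, "load_store": 0.6, "l1_cache": 0.5}
print(estimate_power_mw(profile), "mW")
print(estimate_energy_mj(profile, 0.02), "mJ for a 20 ms run")

In a real flow, the coefficients would be obtained by measurement (running micro-benchmarks that exercise one unit at a time), and the activity rates by profiling the compiled program.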

Ibrahim, Mostafa — Benha High Institute of Technology, Egypt


A statistical approach to motion estimation

Digital video technology has been characterized by steady growth over the last decade. New applications like video e-mail, third-generation mobile phone video communications, videoconferencing, and video streaming on the web continuously push for further evolution of research in digital video coding. In order to be sent over the internet or even wireless networks, video information clearly needs compression to meet bandwidth requirements. Compression is mainly realized by exploiting the redundancy present in the data. A sequence of images contains an intrinsic, intuitive and simple form of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy completely changes the scenario between the compression of still pictures and that of image sequences. It also represents the key to very high performance in image sequence coding when compared ...
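A minimal sketch of how temporal redundancy is typically exploited in video coders is full-search block matching with a sum-of-absolute-differences (SAD) criterion. This is a generic textbook illustration, not the statistical approach developed in the thesis; the block and search sizes are arbitrary choices.

import numpy as np

def full_search_block_matching(ref, cur, block=16, search=8):
    """Return one motion vector (dy, dx) per block of the current frame.

    ref, cur: 2-D numpy arrays (grayscale frames of identical size).
    block:    block size in pixels.
    search:   search range in pixels around the co-located block.
    """
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cur_blk - cand).sum()  # matching criterion
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# Toy example: the "current" frame is the reference shifted down/right by (2, 3).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search_block_matching(ref, cur)[(16, 16)])  # -> (-2, -3)

The encoder then transmits the motion vectors plus the (small) prediction residual instead of the raw frame, which is where the compression gain comes from.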

Moschetti, Fulvio — Swiss Federal Institute of Technology


Digital design and experimental validation of high-performance real-time OFDM systems

The goal of this Ph.D. dissertation is to address a number of challenges encountered in the digital baseband design of modern and future wireless communication systems. The fast and continuous evolution of wireless communications has been driven by the ambitious goal of providing ubiquitous services that can guarantee high throughput and reliability of the communication link and satisfy the increasing demand for efficient re-utilization of the heavily populated wireless spectrum. To cope with these ever-growing performance requirements, researchers around the world have introduced sophisticated broadband physical (PHY)-layer communication schemes able to accommodate higher bandwidth, which typically include multiple antennas at the transmitter and receiver and are capable of delivering improved spectral efficiency by applying interference management policies. Merging Multiple Input Multiple Output (MIMO) schemes with Orthogonal Frequency Division Multiplexing (OFDM) offers a flexible signal processing substrate to implement ...

Font-Bach, Oriol — Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)


Joint Downlink Beamforming and Discrete Resource Allocation Using Mixed-Integer Programming

Multi-antenna processing is widely adopted as one of the key enabling technologies for current and future cellular networks. Particularly, multiuser downlink beamforming (also known as space-division multiple access), in which multiple users are simultaneously served with spatial transmit beams in the same time and frequency resource, achieves high spectral efficiency with reduced energy consumption. To harvest the potential of multiuser downlink beamforming in practical systems, optimal beamformer design must be carried out jointly with network resource allocation. Due to the specifications of cellular standards and/or implementation constraints, resource allocation in practice naturally necessitates discrete decisions, e.g., base station (BS) association, user scheduling and admission control, adaptive modulation and coding, and codebook-based beamforming (precoding). This dissertation focuses on the joint optimization of multiuser downlink beamforming and discrete resource allocation in modern cellular networks. The problems studied in this thesis involve ...
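As a hedged illustration of the kind of mixed-integer program that arises, consider a generic joint admission-control and beamforming formulation (a textbook-style sketch, not necessarily the exact models studied in the thesis), with binary admission variables $a_k$ and beamformers $\mathbf{w}_k$ for $K$ users:

\begin{align}
\min_{\{\mathbf{w}_k\},\,\{a_k\}}\;& \sum_{k=1}^{K} \|\mathbf{w}_k\|_2^2 \;-\; \lambda \sum_{k=1}^{K} a_k \\
\text{s.t.}\;& \frac{|\mathbf{h}_k^{H}\mathbf{w}_k|^2}{\sum_{j\neq k}|\mathbf{h}_k^{H}\mathbf{w}_j|^2 + \sigma_k^2} \;\ge\; \gamma_k\, a_k, \qquad k = 1,\dots,K, \\
& a_k \in \{0,1\}, \qquad k = 1,\dots,K,
\end{align}

where $\mathbf{h}_k$ is the channel vector of user $k$, $\gamma_k$ its SINR target, $\sigma_k^2$ the noise power, and $\lambda>0$ trades transmit power against the number of admitted users. Setting $a_k=0$ deactivates user $k$'s SINR constraint, which is how the discrete admission decision couples with the continuous beamformer design.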

Cheng, Yong — Technische Universität Darmstadt


Sparsity-Aware Wireless Networks: Localization and Sensor Selection

Wireless networks have revolutionized today's world by providing real-time, cost-efficient service and connectivity. Yet even such an unprecedented level of service could not fulfill the insatiable desire of the modern world for more advanced technologies. As a result, a great deal of attention has been directed towards (mobile) wireless sensor networks (WSNs), which are composed of inexpensive nodes that can cooperate to perform complex tasks in a distributed fashion in extremely harsh environments. Unique features of wireless environments, added complexity owing to mobility, the distributed nature of the network setup, and tight performance and energy constraints pose a challenge for researchers to devise systems which strike a proper balance between performance and resource utilization. We study some of the fundamental challenges of wireless (sensor) networks associated with resource efficiency, scalability, and location-awareness. The pivotal point which distinguishes our studies from ...

Jamali-Rad, Hadi — TU Delft


Energy-Efficient Distributed Multicast Beamforming Using Iterative Second-Order Cone Programming

In multi-user (MU) downlink beamforming, a high spectral efficiency along with a low transmit power is achieved by separating multiple users in space rather than in time or frequency using spatially selective transmit beams. For streaming media applications, multi-group multicast (MGM) downlink beamforming is a promising approach to exploit the broadcasting property of the wireless medium to transmit the same information to a group of users. To limit inter-group interference, the individual streams intended for different multicast groups are spatially separated using MGM downlink beamforming. Spatially selective downlink beamforming requires the employment of an array of multiple antennas at the base station (BS). The hardware costs associated with the use of multiple antennas may be prohibitive in practice. A way to avoid the expensive employment of multiple antennas at the BS is to exploit user cooperation in wireless networks where ...
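A generic quality-of-service formulation of multi-group multicast (MGM) beamforming, given here as a hedged sketch (standard textbook notation, not necessarily the exact single-antenna cooperative problems treated in the thesis), minimizes total transmit power subject to per-user SINR targets, where every user in group $g$ receives the same beam $\mathbf{w}_g$:

\begin{align}
\min_{\{\mathbf{w}_g\}}\;& \sum_{g=1}^{G} \|\mathbf{w}_g\|_2^2 \\
\text{s.t.}\;& \frac{|\mathbf{h}_k^{H}\mathbf{w}_g|^2}{\sum_{g'\neq g}|\mathbf{h}_k^{H}\mathbf{w}_{g'}|^2 + \sigma_k^2} \;\ge\; \gamma_k, \qquad \forall k \in \mathcal{G}_g,\; g = 1,\dots,G,
\end{align}

where $\mathcal{G}_g$ is the set of users in multicast group $g$, $\mathbf{h}_k$ the channel of user $k$, $\gamma_k$ its SINR target and $\sigma_k^2$ the noise power. The problem is non-convex; iterative schemes of the kind named in the title approximate it by a sequence of convex (second-order cone) programs.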

Bornhorst, Nils — Technische Universität Darmstadt


OFDM Air-Interface Design for Multimedia Communications

The aim of this dissertation is the investigation of the key issues encountered in the development of wideband radio air-interfaces. Orthogonal frequency-division multiplexing (OFDM) is considered as the enabling technology for transmitting data at extremely high rates over time-dispersive radio channels. OFDM is a transmission scheme that splits up the data stream, sending the data symbols simultaneously at a drastically reduced symbol rate over a set of parallel sub-carriers. The first part of this thesis deals with the modeling of the time-dispersive and frequency-selective radio channel, utilizing second-order Gaussian stochastic processes. A novel channel measurement technique is developed, in which the RMS delay spread of the channel is estimated from the level-crossing rate of the frequency-selective channel transfer function. This method enables empirical channel characterization utilizing simplified non-coherent measurements of the received power versus frequency. Air-interface and multiple ...
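A minimal sketch of the OFDM principle described above, mapping data symbols onto parallel sub-carriers with an inverse FFT and prepending a cyclic prefix, is shown below. The parameter values are illustrative choices, not those used in the thesis.

import numpy as np

N_SC = 64        # number of sub-carriers (illustrative)
CP_LEN = 16      # cyclic-prefix length in samples (illustrative)

def ofdm_modulate(symbols):
    """Map N_SC data symbols onto parallel sub-carriers and add a cyclic prefix."""
    assert len(symbols) == N_SC
    time_signal = np.fft.ifft(symbols) * np.sqrt(N_SC)       # parallel sub-carriers
    return np.concatenate([time_signal[-CP_LEN:], time_signal])  # cyclic prefix

def ofdm_demodulate(rx_samples):
    """Strip the cyclic prefix and recover the per-sub-carrier symbols."""
    return np.fft.fft(rx_samples[CP_LEN:]) / np.sqrt(N_SC)

# Round trip over an ideal channel.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=2 * N_SC)
symbols = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK mapping
rx = ofdm_demodulate(ofdm_modulate(symbols))
print(np.allclose(rx, symbols))  # True

Because each sub-carrier carries symbols at 1/N_SC of the original rate, the symbol duration grows accordingly, which is what makes OFDM robust to time dispersion.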

Witrisal, Klaus — Delft University of Technology


The TM3270 Media-processor

In this thesis, we present the TM3270 VLIW media-processor, the latest of the TriMedia processors, and describe the innovations with respect to its predecessor: the TM3260. We describe enhancements to the load/store unit design, such as a new data prefetching technique, and architectural enhancements, such as additions to the TriMedia Instruction Set Architecture (ISA). Examples of ISA enhancements include collapsed load operations, two-slot operations and H.264-specific CABAC decoding operations. All of the TM3270 innovations contribute to a common goal: a balanced processor design in terms of silicon area and power consumption, which enables audio and standard-resolution video processing for both the connected and portable markets. To measure the speedup of the individual innovations of the TM3270 design, we evaluate processor performance on a set of complete video applications: motion estimation, MPEG2 encoding and temporal upconversion. Each of ...

van de Waerdt, Jan-Willem — Delft University of Technology


Combined Word-Length Allocation and High-Level Synthesis of Digital Signal Processing Circuits

This work is focused on the synthesis of Digital Signal Processing (DSP) circuits using specific hardware architectures. Due to its complexity, the design process has been subdivided into separate tasks, thus hindering the global optimization of the resulting systems. The author proposes the study of the combination of two major design tasks, Word-Length Allocation (WLA) and High-Level Synthesis (HLS), aiming at the optimization of DSP implementations using modern Field Programmable Gate Array devices (FPGAs). A multiple word-length approach (MWL) is adopted since it leads to highly optimized implementations. MWL implies the customization of the word-lengths of the signals of an algorithm. This complicates the design, since the number of possible assignments between algorithm operations and hardware resources becomes very high. Moreover, this work also considers the use of heterogeneous FPGAs, where there are several types of resources: configurable logic-based blocks (LUT-based) ...
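To make the word-length customization concrete, here is a small hedged sketch of generic fixed-point quantization and SQNR measurement (an illustration of the effect a word-length choice has on signal quality, not the WLA/HLS algorithms proposed in the thesis):

import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(x / step) * step

def sqnr_db(x, xq):
    """Signal-to-quantization-noise ratio in dB."""
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

# In an MWL design, each signal of the algorithm may get its own word-length.
t = np.linspace(0, 1, 1000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)
for bits in (6, 8, 12):
    print(bits, "fractional bits ->", round(sqnr_db(signal, quantize(signal, bits)), 1), "dB")

The roughly 6 dB of SQNR gained per extra bit is what the word-length allocation trades against the area and delay of the hardware resources each signal is mapped to.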

Caffarena, Gabriel — Universidad Politecnica de Madrid


Signal Quantization and Approximation Algorithms for Federated Learning

Distributed signal or information processing using the Internet of Things (IoT) facilitates real-time monitoring of signals, for example, environmental pollutants, health indicators, and electric energy consumption in a smart city. Despite the promising capabilities of IoT, these distributed deployments often face the challenges of data privacy and communication rate constraints. In traditional machine learning, training data is moved to a data center, which requires massive data movement from distributed IoT devices to a third-party location, thus raising concerns over privacy and inefficient use of communication resources. Moreover, the growing network size, model size, and data volume combined lead to unusual complexity in the design of optimization algorithms beyond the compute capability of a single device. This necessitates novel system architectures to ensure stable and secure operation of such networks. Federated learning (FL) architecture, a novel distributed learning paradigm introduced by McMahan ...
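As a hedged sketch of the federated averaging idea introduced by McMahan et al. (a generic linear-regression instance, not the quantization or approximation algorithms developed in the thesis), each device trains locally on its private data and only model parameters are aggregated by the server:

import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent on one device's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w = w - lr * grad
    return w

def federated_averaging(devices, rounds=20, dim=3):
    """Server keeps a global model; devices never share raw data."""
    w_global = np.zeros(dim)
    total = sum(len(y) for _, y in devices)
    for _ in range(rounds):
        updates = [local_sgd(w_global.copy(), X, y) for X, y in devices]
        # Weighted average of local models, proportional to local data size.
        w_global = sum(len(y) / total * w for w, (_, y) in zip(updates, devices))
    return w_global

# Toy setup: three devices observe noisy samples of the same linear model.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    devices.append((X, y))
print(federated_averaging(devices))  # close to w_true

The communication cost of each round is the model size, not the data size, which is why quantizing or approximating the exchanged updates is the natural lever for rate-constrained deployments.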

A, Vijay — Indian Institute of Technology Bombay


Iterative Joint Source-Channel Coding Techniques for Single and Multiterminal Sources in Communication Networks

In a communication system, it is undoubtedly of great interest to compress the information generated by the data sources to its most elementary representation, so that the amount of power necessary for reliable communications can be reduced. It is often the case that the redundancy exhibited by a wide variety of information sources can be modelled by taking into account the probabilistic dependence among consecutive source symbols rather than the probabilistic distribution of a single symbol. These sources are commonly referred to as single or multiterminal sources "with memory", where the memory, in the latter case, is the temporal correlation among the consecutive symbol vectors generated by the multiterminal source. It is well known that, when the source has memory, the average amount of information per source symbol is given by the entropy rate, which is lower than its entropy ...
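To make the entropy-rate statement concrete, here is the standard definition together with a hedged worked example for a binary stationary first-order Markov source (a textbook illustration, not an example taken from the thesis):

\begin{align}
\bar{H}(\mathcal{X}) \;=\; \lim_{n\to\infty} \frac{1}{n}\, H(X_1, X_2, \dots, X_n)
 \;=\; H(X_2 \mid X_1) \;\le\; H(X_1) \qquad \text{(stationary Markov source)}.
\end{align}

For instance, a binary Markov source with symmetric transition probability $\Pr\{X_{n+1}\neq X_n\}=0.1$ has stationary distribution $(0.5,\,0.5)$, hence per-symbol entropy $H(X_1)=1$ bit, whereas its entropy rate is $H(X_2\mid X_1)=H_b(0.1)\approx 0.469$ bits/symbol. Exploiting the memory therefore allows a lossless representation at roughly half the rate that symbol-by-symbol coding would suggest.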

Del Ser, Javier — University of Navarra (TECNUN)


Heuristic Optimization Methods for System Partitioning in HW/SW Co-Design

Nowadays, the design of embedded systems is confronted with the combination of complex signal processing algorithms on the one hand and a variety of computationally intensive multimedia applications on the other hand, while time to product launch has been extremely reduced. Especially in the wireless domain, those challenges are stacked with tough requirements on power consumption and chip size. Unfortunately, design productivity did not undergo a similar progression and therefore fails to cope with the heterogeneity of modern hardware architectures. Until now, electronic design automation does not provide complete coverage of the design flow. In particular, crucial design tasks such as high-level characterisation of algorithms, floating-point to fixed-point conversion, automated hardware/software partitioning, and automated virtual prototyping are not sufficiently supported or are completely absent. In recent years a consistent design framework named the Open Tool Integration Environment (OTIE) has been established ...
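A minimal sketch of one heuristic flavour of HW/SW partitioning is given below: a greedy gain-based pass over a task list under a hardware-area budget. The task names, cost numbers, and the heuristic itself are invented for illustration; the thesis's actual algorithms may differ.

# Greedy HW/SW partitioning sketch: move tasks to hardware in order of
# speedup-per-area gain until the area budget is exhausted (illustrative only).

tasks = [
    # (name, sw_time_ms, hw_time_ms, hw_area_units)  -- hypothetical costs
    ("fft",        4.0, 0.5, 30),
    ("viterbi",    6.0, 0.8, 55),
    ("fir_filter", 2.0, 0.4, 15),
    ("crc",        0.5, 0.1, 10),
]

AREA_BUDGET = 70  # available hardware area (hypothetical units)

def greedy_partition(tasks, budget):
    """Return (hw_tasks, sw_tasks, total_time_ms) for a simple greedy heuristic."""
    ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
    hw, sw, area = [], [], 0
    for name, sw_t, hw_t, a in ranked:
        if area + a <= budget:
            hw.append(name)
            area += a
        else:
            sw.append(name)
    # Total time assumes the tasks execute sequentially (a simplification).
    total = sum(t[2] if t[0] in hw else t[1] for t in tasks)
    return hw, sw, total

print(greedy_partition(tasks, AREA_BUDGET))

More elaborate heuristics (simulated annealing, tabu search, genetic algorithms) explore the same design space but can escape the local optima a single greedy pass gets stuck in.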

Knerr, Bastian — Vienna University of Technology


Distributed Demand-Side Optimization in the Smart Grid

The modern power grid is facing major challenges in the transition to a low-carbon energy sector. The growing energy demand and environmental concerns require carefully revisiting how electricity is generated, transmitted, and consumed, with an eye to the integration of renewable energy sources. The envisioned smart grid is expected to address such issues by introducing advanced information, control, and communication technologies into the energy infrastructure. In this context, demand-side management (DSM) makes the end users responsible for improving the efficiency, reliability and sustainability of the power system: this opens up unprecedented possibilities for optimizing the energy usage and cost at different levels of the network. The design of DSM techniques has been extensively discussed in the literature in the last decade, although the performance of these methods has been scarcely investigated from the analytical point of view. In this thesis, ...

Atzeni, Italo — Universitat Politècnica de Catalunya


Steganoflage: A New Image Steganography Algorithm

Steganography is the science of communicating secret data in an appropriate multimedia carrier, e.g., image, audio and video files. It rests on the assumption that if the feature is visible, the point of attack is evident; thus the goal here is always to conceal the very existence of the embedded data. It does not replace cryptography but rather boosts security using its obscurity features. Steganography has various useful applications. However, like any other science, it can be used for ill intentions. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, the increase in security awareness among, e.g., individuals, groups, agencies and governments, and through intellectual pursuit. Steganography's ultimate objectives, which are undetectability, robustness, resistance to various image processing methods and compression, and capacity of the hidden data, are the main factors ...
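A minimal illustration of image steganography in general is classic least-significant-bit (LSB) embedding, sketched below. This is not the Steganoflage algorithm proposed in the thesis, only a baseline showing how a payload can be hidden without visible change to the carrier.

import numpy as np

def embed_lsb(carrier, payload_bits):
    """Hide payload_bits in the least-significant bits of a grayscale image."""
    flat = carrier.flatten().copy()
    assert len(payload_bits) <= flat.size, "payload too large for carrier"
    flat[:len(payload_bits)] = (flat[:len(payload_bits)] & 0xFE) | payload_bits
    return flat.reshape(carrier.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden in the image."""
    return stego.flatten()[:n_bits] & 1

# Toy example: hide the byte 0b10110010 in a random 8x8 "image".
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(img, bits)
print(extract_lsb(stego, 8))                                 # -> [1 0 1 1 0 0 1 0]
print(np.max(np.abs(stego.astype(int) - img.astype(int))))   # pixel change is at most 1

Plain LSB embedding is easy to detect statistically and fragile under compression, which is exactly why the objectives listed above (undetectability, robustness, capacity) drive more sophisticated algorithms.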

Cheddad, Abbas — University of Ulster


On-board Processing for an Infrared Observatory

During the past two decades, image compression has developed from a mostly academic Rate-Distortion (R-D) field into a highly commercial business. Various lossless and lossy image coding techniques have been developed. This thesis represents interdisciplinary work between the fields of astronomy and digital image processing and brings new aspects to both. In fact, image compression had its beginnings in an American space program for efficient data storage. The goal of this research work is to identify and develop new methods for space observatories and software tools to incorporate compression into space astronomy standards. While astronomers benefit from new objective processing and analysis methods and improved efficiency and quality, for technicians a new field of application and research is opened. For validation of the processing results, the case of InfraRed (IR) astronomy has been specifically analyzed. ...

Belbachir, Ahmed Nabil — Vienna University of Technology
