Princeton University

School of Engineering & Applied Science

Relaxing the Implementation of Embedded Sensing Systems Through Machine Learning and Statistical Optimization

Zhuo Wang
Engineering Quadrangle B327
Friday, March 17, 2017 - 10:00am to 11:30am

The increasing deployment of large numbers of sensors is giving us access to physical signals of potentially high informational value on an unprecedented scale. However, these signals generally arise from complex physical processes for which we often lack adequate analytical or physics-based models. Fortunately, machine-learning algorithms enable us to model such complex signals in a data-driven manner. The challenge, however, is that the models and algorithms can impose computational requirements beyond the capabilities of highly resource-constrained sensing platforms. While many recent works have focused on hardware optimizations at the architecture and circuit levels for energy-efficient implementation of widely used machine-learning kernels (e.g., digital accelerators), this thesis addresses the inverse problem: how algorithmic tools emerging from statistical optimization and machine learning can be exploited to ease the hardware implementation itself. Rather than having the implementations serve only the algorithms, we are also interested in how the algorithms can serve the implementations.
To approach this, we first propose three opportunities enabled by machine learning and optimization. Data-Driven Hardware Resilience (DDHR) leverages machine-learning algorithms to train not only on the sensor signals, but also on the error statistics arising from hardware non-idealities. Hardware-Driven Kernel Learning (HDKL) adapts the training algorithm itself to favor inference functions, and attributes of inference functions, that are preferable from the perspective of resource-constrained implementations. The third opportunity leverages statistical optimization to substantially reduce the quantization errors of computation. These principles enable substantial hardware relaxations, leading to extremely efficient hardware implementations of embedded machine-learning systems. We then present several demonstration systems that map the principles to hardware architectures, with a strong focus on a comparator-based classification accelerator that performs inference directly on analog sensor data. Through these examples, we demonstrate either transformational new system capabilities or orders-of-magnitude energy reductions compared to conventional realizations.
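The DDHR idea can be illustrated with a minimal sketch. The following is a hypothetical toy example, not the thesis's actual system: a simple perceptron is trained on features that have been corrupted by simulated hardware faults, so that the learned model absorbs the platform's error statistics rather than assuming ideal computation. All data, fault rates, and the stuck-at-style fault model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic sensor classes in 8 dimensions (stand-ins for real signals).
n, d = 400, 8
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.repeat([-1.0, 1.0], n // 2)

def inject_faults(X, rate, rng):
    """Simulated hardware non-ideality: with probability `rate`, a feature
    value is replaced by a large stuck-at-style error (illustrative model)."""
    mask = rng.random(X.shape) < rate
    faults = rng.choice([-4.0, 4.0], size=X.shape)
    return np.where(mask, faults, X)

def train_perceptron(X, y, epochs=20):
    """Plain perceptron updates on misclassified samples."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
    return w

# DDHR-style training: the model sees the same faults the hardware produces.
X_faulty = inject_faults(X, rate=0.05, rng=rng)
w_clean = train_perceptron(X, y)         # trained assuming ideal hardware
w_ddhr = train_perceptron(X_faulty, y)   # trained on fault statistics

# Evaluate both on *faulty* test data, as the deployed hardware would see it.
X_test = inject_faults(np.vstack([rng.normal(-1.0, 1.0, (200, d)),
                                  rng.normal(+1.0, 1.0, (200, d))]),
                       rate=0.05, rng=rng)
y_test = np.repeat([-1.0, 1.0], 200)

acc = lambda w: np.mean(np.sign(X_test @ w) == y_test)
print(f"clean-trained: {acc(w_clean):.2f}, fault-trained: {acc(w_ddhr):.2f}")
```

The key design point is that no fault-specific hardware margining is added; the statistical model itself is retrained against the error distribution of the particular (here, simulated) platform.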
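The third opportunity, reducing quantization error through statistical optimization, can likewise be sketched generically. The example below is a standard Lloyd-style iteration on synthetic Gaussian samples, shown only to illustrate the principle of fitting quantization levels to signal statistics; it is not the thesis's specific method.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10000)  # signal samples with known statistics

def quantize(x, levels):
    """Map each sample to its nearest reproduction level."""
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return levels[idx]

# Baseline: uniform 8-level quantizer over the observed range.
uniform = np.linspace(x.min(), x.max(), 8)

# Lloyd iterations: move each level to the centroid of the samples it serves,
# minimizing expected squared quantization error for this signal distribution.
levels = uniform.copy()
for _ in range(50):
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    for i in range(len(levels)):
        sel = idx == i
        if sel.any():
            levels[i] = x[sel].mean()
    levels = np.sort(levels)

mse_uniform = np.mean((x - quantize(x, uniform)) ** 2)
mse_lloyd = np.mean((x - quantize(x, levels)) ** 2)
print(f"uniform MSE: {mse_uniform:.4f}, optimized MSE: {mse_lloyd:.4f}")
```

For a Gaussian source, the statistically optimized levels concentrate near the mean where probability mass is high, yielding lower mean-squared quantization error than a uniform grid with the same number of levels.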