Princeton University

School of Engineering & Applied Science

Synthesis of Efficient Neural Networks

Speaker: 
Xiaoliang Dai
Advisor: 
Prof. Jha
Location: 
Engineering Quadrangle B327
Date/Time: 
Friday, July 26, 2019 - 11:30am to 1:00pm

Abstract

Over the last decade, deep neural networks (DNNs) have led to remarkable progress on many intricate problems in the artificial intelligence community, such as image recognition, machine translation, and robotic control. However, searching for an appropriate DNN architecture through trial-and-error or reinforcement learning is inefficient and computationally intensive. Furthermore, because DNNs contain millions of parameters and require a very large number of floating-point operations (FLOPs), deploying them is also challenging.

To address these problems, we first propose a novel DNN synthesis tool (NeST) that learns both the weights and the architecture in an automated flow. Inspired by the learning mechanism of the human brain, NeST starts DNN synthesis from a seed architecture (birth point). It allows the DNN to grow connections and neurons based on gradient information (baby brain) so that the network can easily adapt to the problem at hand. It then prunes away insignificant connections and neurons based on magnitude information (adult brain) to avoid redundancy. This enables NeST to generate compact yet accurate DNNs. For example, we reduce the number of parameters and FLOPs for VGG-16 on the ImageNet dataset by 33.2× and 8.9×, respectively.
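The grow-then-prune phases described above can be sketched on a single weight matrix. This is a minimal illustration, not NeST's actual implementation: the function names, the mask-based representation, and the fraction parameters are all assumptions made here; the two selection criteria (largest gradient magnitude for growth, smallest weight magnitude for pruning) mirror the criteria the abstract names.

```python
import numpy as np

def grow_connections(grads, mask, grow_frac=0.1):
    """Growth phase (sketch): activate the inactive connections whose loss
    gradients have the largest magnitude, mirroring gradient-based growth."""
    inactive = (mask == 0)
    n_grow = int(grow_frac * inactive.sum())
    if n_grow == 0:
        return mask
    # Score only inactive positions; active ones can never be "grown".
    scores = np.where(inactive, np.abs(grads), -np.inf)
    idx = np.argpartition(scores.ravel(), -n_grow)[-n_grow:]
    new_mask = mask.copy().ravel()
    new_mask[idx] = 1
    return new_mask.reshape(mask.shape)

def prune_connections(weights, mask, prune_frac=0.1):
    """Pruning phase (sketch): remove the active connections with the
    smallest weight magnitude, mirroring magnitude-based pruning."""
    active = (mask == 1)
    n_prune = int(prune_frac * active.sum())
    if n_prune == 0:
        return mask
    # Score only active positions; inactive ones cannot be pruned again.
    scores = np.where(active, np.abs(weights), np.inf)
    idx = np.argpartition(scores.ravel(), n_prune)[:n_prune]
    new_mask = mask.copy().ravel()
    new_mask[idx] = 0
    return new_mask.reshape(mask.shape)
```

In a full training flow, the mask would be applied to the weights after every update, and the two phases would alternate with retraining so accuracy recovers between architecture changes.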

For efficient DNN deployment, we propose a framework called Chameleon that adapts DNNs to fit target latency and/or energy constraints in real-world applications. At the core of our algorithm lies an accuracy predictor built atop a Gaussian process, combined with Bayesian optimization for iterative sampling. With a one-time cost to build the accuracy predictor, along with two other predictors (latency and energy), our algorithm produces state-of-the-art model architectures on different platforms under given constraints in just minutes. The generated ChamNet models achieve significant accuracy improvements (up to 8.2% top-1 accuracy gain) relative to state-of-the-art handcrafted and automatically designed architectures.
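The core idea of a Gaussian-process accuracy predictor driving Bayesian-optimization-style sampling can be sketched as follows. This is a toy illustration under assumptions made here, not Chameleon's implementation: the RBF kernel, the upper-confidence-bound acquisition rule, and all names (`GPAccuracyPredictor`, `ucb_select`) are hypothetical; architectures are assumed to be encoded as fixed-length numeric vectors.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of architecture encodings."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPAccuracyPredictor:
    """Minimal Gaussian-process regressor: fit on (architecture encoding,
    measured accuracy) pairs, then predict mean and uncertainty elsewhere."""

    def __init__(self, length_scale=1.0, noise=1e-6):
        self.length_scale = length_scale
        self.noise = noise

    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf_kernel(X, X, self.length_scale) + self.noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)
        return self

    def predict(self, Xs):
        Ks = rbf_kernel(Xs, self.X, self.length_scale)
        mean = Ks @ self.K_inv @ self.y
        # Predictive variance of the RBF prior (unit signal variance).
        var = 1.0 - np.einsum('ij,jk,ik->i', Ks, self.K_inv, Ks)
        return mean, np.maximum(var, 0.0)

def ucb_select(predictor, candidates, beta=2.0):
    """One Bayesian-optimization step: pick the candidate architecture with
    the highest upper confidence bound (mean + beta * std) to evaluate next."""
    mean, var = predictor.predict(candidates)
    return int(np.argmax(mean + beta * np.sqrt(var)))
```

In the iterative-sampling picture, the index returned by `ucb_select` names the next architecture to actually train and measure; its measured accuracy is appended to the training set and the predictor is refit, so each iteration spends the evaluation budget where the model is either promising or uncertain. The latency and energy predictors would filter candidates against the deployment constraints before this selection step.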