Princeton University

School of Engineering & Applied Science

Learning with Sparsity and Scattering Networks

Speaker: Mia Xu Chen
Location: Engineering Quadrangle B327
Date/Time: Wednesday, September 30, 2015 - 12:30pm to 2:00pm

Abstract
The past decades have witnessed dramatic advances in artificial intelligence, and machine learning technology now powers many aspects of modern life. Although stunning progress has been made in improving learning algorithms, challenges remain: new learning algorithms and architectures are needed to take advantage of the growing availability of computing resources and data.
 
This dissertation focuses on two of the most important concepts in machine learning: sparsity and depth. Sparsity leads to efficient and compact representations and is a key element in statistics, signal processing, and machine learning. Depth allows models to discover intricate structure in data and to learn representations with multiple levels of abstraction; deep learning has recently brought about major breakthroughs in a variety of application areas. In this dissertation, we design and analyze representation learning algorithms and classification approaches that combine the two concepts. We first show that sparsity is a powerful tool in supervised classification, and that combining the discriminative power of sparsity with the invariance provided by a deep architecture further improves classification performance. We then propose an unsupervised deep learning model with sparsity criteria for the classification of high-dimensional unstructured data. In both cases, state-of-the-art results are obtained. The architectural variants presented offer a better understanding of the roles that sparsity and depth play in learning algorithms.
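To make the idea of sparse representations for classification concrete, the following is a minimal, self-contained sketch (not the speaker's implementation): it computes sparse codes of feature vectors by ISTA over an illustrative random dictionary and classifies the codes with a nearest-class-mean rule. The dictionary, the sparsity weight, and the synthetic two-class data are all assumptions made purely for illustration.

    # Minimal sketch: sparse coding (ISTA) as a feature transform for a toy
    # two-class problem, followed by a nearest-class-mean classifier.
    # All parameters and data here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def ista(D, x, lam=0.1, n_iter=100):
        """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 with ISTA."""
        L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            z = z - grad / L                                        # gradient step
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return z

    # Toy data: two classes clustered around different random templates.
    d, n_atoms, n_per_class = 32, 64, 50
    templates = rng.normal(size=(2, d))
    X = np.vstack([t + 0.3 * rng.normal(size=(n_per_class, d)) for t in templates])
    y = np.repeat([0, 1], n_per_class)

    # Random unit-norm dictionary stands in for a learned or scattering-derived one.
    D = rng.normal(size=(d, n_atoms))
    D /= np.linalg.norm(D, axis=0)

    # Sparse codes as features, then nearest class mean in code space.
    Z = np.array([ista(D, x) for x in X])
    means = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Z[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    print("training accuracy:", (pred == y).mean())

In the approaches discussed in the talk, such sparse codes would be computed on top of a deep, invariance-building representation (such as a scattering transform) rather than on raw vectors; the sketch above only illustrates the sparse-coding step itself.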