Princeton University

School of Engineering & Applied Science

Multi-view Representation Learning with Applications to Functional Neuroimaging Data

Po-Hsuan Cameron Chen
Engineering Quadrangle B327
Tuesday, June 20, 2017 - 10:30am to 12:00pm

One of the greatest challenges for the 21st century is understanding how the human brain works. Although the human brain can be understood at many different levels, a key step is knowing how brain activity patterns map onto cognition, emotion, memories, and so on. This can be studied using functional magnetic resonance imaging (fMRI), a non-invasive brain imaging technique with unprecedented spatiotemporal resolution. fMRI data are gathered while subjects perform a wide range of cognitive tasks. Analysis of fMRI data using multivariate statistics and machine learning has led to tremendous success in understanding how patterns of neural activity reflect mental representations. This thesis aims to continue that success by advancing machine learning methods motivated by applications to neuroscience problems.
We develop a multi-view learning framework that estimates shared features from multi-view data. We analyze and demonstrate two primary ways in which a multi-view learning framework provides new approaches to exploring neuroimaging data. First, a multi-view learning model forms a larger dataset by aggregating data from multiple views; a key potential advantage of this is an increase in statistical sensitivity. Second, a multi-view learning model learns a shared feature space along with transformations between each view’s observation space and that shared space. These transformations bridge any two views, opening up new possibilities for analyzing the data. For example, by treating each subject as a view, we can transform one subject’s fMRI data into the space of another subject’s brain. Treating semantic vectors of a stimulus’s text description and the fMRI response as different views opens up the opportunity to generate text from fMRI responses, or fMRI responses from text.
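The bridging idea above can be illustrated with a minimal sketch, not the thesis's exact algorithm: two synthetic "views" are generated from a common time course, a shared feature space is estimated by alternating orthogonal Procrustes updates (in the spirit of a shared response model), and the learned per-view maps are composed to carry one view's data into the other's space. All names and dimensions here (e.g. `n_features`, the noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_features = 50, 200, 5

def random_orthonormal(rows, cols, rng):
    # Orthonormal columns via QR of a Gaussian matrix.
    q, _ = np.linalg.qr(rng.standard_normal((rows, cols)))
    return q

# Ground truth: a shared time course S and a per-view orthonormal map W_i,
# so each view observes X_i = W_i S plus noise.
S_true = rng.standard_normal((n_features, n_timepoints))
W_true = [random_orthonormal(n_voxels, n_features, rng) for _ in range(2)]
X = [W @ S_true + 0.05 * rng.standard_normal((n_voxels, n_timepoints))
     for W in W_true]

# Alternating estimation: with S fixed, each W_i is the orthogonal
# Procrustes solution; with the W_i fixed, S is the mean back-projection.
S = rng.standard_normal((n_features, n_timepoints))
for _ in range(20):
    W = []
    for Xi in X:
        U, _, Vt = np.linalg.svd(Xi @ S.T, full_matrices=False)
        W.append(U @ Vt)
    S = sum(Wi.T @ Xi for Wi, Xi in zip(W, X)) / len(X)

# The transformations bridge the two views: project view 0 into the shared
# space, then map it out into view 1's observation space.
X0_in_view1 = W[1] @ (W[0].T @ X[0])
err = np.linalg.norm(X0_in_view1 - X[1]) / np.linalg.norm(X[1])
print(f"relative error mapping view 0 into view 1's space: {err:.3f}")
```

The shared space is only identifiable up to a rotation, but that rotation cancels when two maps are composed, which is why cross-view prediction works even though S itself is not recovered exactly.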
Lastly, we explore various forms of multi-view learning models, including manifold learning, probabilistic modeling, and deep neural networks. Different ways of applying multi-view models to neuroimaging data are demonstrated and analyzed. We also discuss our contributions to the open-source software community.