Princeton University

School of Engineering & Applied Science

Optimal Learning for Nonlinear Parametric Belief Models

Speaker: Xinyu He
Location: Engineering Quadrangle B327
Date/Time: Wednesday, May 10, 2017 - 1:00pm to 2:30pm

Abstract
Many real-world optimization problems require making measurements to determine which choice works best. Such measurements can be noisy and expensive, as is often the case in simulations or laboratory experiments. Optimal learning addresses the problem of how to collect this information efficiently. In particular, a class of Bayesian policies known as the Knowledge Gradient (KG) has drawn extensive attention. KG is easy to implement for low-dimensional problems with simple belief models such as look-up tables or linear models, but it becomes computationally intractable with nonlinear parametric belief models, which arise frequently in multidimensional problems.
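For reference (the abstract itself gives no formulas), the knowledge gradient of an alternative x is standardly defined in the optimal-learning literature as the expected one-step improvement in the best posterior mean:

    \nu^{\mathrm{KG},n}_x = \mathbb{E}\!\left[ \max_{x'} \mu^{n+1}_{x'} \,\middle|\, x^n = x \right] - \max_{x'} \mu^{n}_{x'},

where \mu^n_{x'} is the posterior mean of alternative x' after n measurements and x^n = x indicates that the (n+1)-st measurement is taken at x. The KG policy measures the alternative that maximizes this quantity.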
 
We study the optimal learning problem with nonlinear parametric belief models. We assume that the function to be optimized, which may represent performance, utility, or some other output, can be described globally or locally by a parametric model whose parameters are unknown. Our goal is to identify the optimal choice (a choice is also called an experimental design or an alternative) after running a limited number of sequential measurements, which are noisy and expensive.
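One way to formalize this setup (our notation, not the abstract's): letting \theta^\ast denote the unknown parameters and N the measurement budget, the learner sequentially chooses designs x^0, \dots, x^{N-1} and observes

    y^{n+1} = f(x^n; \theta^\ast) + \varepsilon^{n+1}, \qquad n = 0, \dots, N-1,

where \varepsilon^{n+1} is the measurement noise, and afterwards selects a final alternative intended to maximize f(\,\cdot\,; \theta^\ast).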
 
We first focus on problems with a global parametric form. We use a sampled set of parameters to overcome the computational intractability of KG, and we introduce a resampling process that updates the sampled set adaptively. We also develop an efficient implementation of the KG policy for continuous alternatives. We prove that our algorithm, which works for multidimensional, continuous alternative and parameter spaces, asymptotically learns both the true parameters and the optimal alternative.

Next, we study problems in which a globally accurate model is unavailable, but there exist parametric models that describe the function well in local regions. We propose a method to construct global approximations of the function from these local models, and we apply KG to quickly identify the optimal alternative. Experiments on both synthetic and real-world problems demonstrate the strong performance of our algorithms.
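To make the sampled-parameter approach of the first study concrete, below is a minimal sketch, not the authors' implementation: it maintains a discrete set of candidate parameter vectors with posterior weights, reweights them by Bayes' rule after each noisy measurement, and estimates the KG factor of each alternative by Monte Carlo. The parametric model f, the candidate set, the Gaussian noise, and all constants are illustrative assumptions, and the adaptive resampling of the candidate set described above is omitted.

import numpy as np

def f(x, theta):
    # Hypothetical nonlinear parametric model: theta = (a, b, c).
    a, b, c = theta
    return a * np.exp(-b * (x - c) ** 2)

rng = np.random.default_rng(0)
noise_sd = 0.1

# Sampled belief: K candidate parameter vectors with posterior probabilities p.
thetas = [rng.uniform([0.5, 0.5, -1.0], [2.0, 3.0, 1.0]) for _ in range(50)]
p = np.full(len(thetas), 1.0 / len(thetas))

xs = np.linspace(-2.0, 2.0, 41)                          # discretized alternatives
F = np.array([[f(x, th) for x in xs] for th in thetas])  # K x |X| model values

def posterior_mean(p):
    # mu^n(x) = sum_k p_k f(x, theta_k)
    return p @ F

def bayes_update(p, x_idx, y):
    # Reweight candidates by the Gaussian likelihood of observation y at x.
    lik = np.exp(-0.5 * ((y - F[:, x_idx]) / noise_sd) ** 2)
    w = p * lik
    return w / w.sum()

def kg_factor(p, x_idx, n_samples=200):
    # Monte Carlo estimate of E[max_x' mu^{n+1}(x') | measure x] - max_x' mu^n(x').
    base = posterior_mean(p).max()
    total = 0.0
    for _ in range(n_samples):
        k = rng.choice(len(thetas), p=p)   # sample a "truth" from the current belief
        y = F[k, x_idx] + rng.normal(0.0, noise_sd)
        total += posterior_mean(bayes_update(p, x_idx, y)).max()
    return total / n_samples - base

# One KG step: measure the alternative with the largest KG factor.
kg = np.array([kg_factor(p, i) for i in range(len(xs))])
x_star = int(np.argmax(kg))
y_obs = f(xs[x_star], thetas[3]) + rng.normal(0.0, noise_sd)  # pretend theta_3 is the truth
p = bayes_update(p, x_star, y_obs)
print("measured x =", xs[x_star], "| new best estimate x =", xs[posterior_mean(p).argmax()])

A fixed candidate set can fail to contain parameters close to the truth, which is presumably what the adaptive resampling process described above is designed to address.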