Education:
- Ph.D., Computational and Mathematical Engineering, Stanford University, 2015
- B.Sc., Mathematics, Duke University, 2010
Jason Lee received his Ph.D. from Stanford University in 2015, advised by Trevor Hastie and Jonathan Taylor. Before joining Princeton, he was a postdoctoral scholar at UC Berkeley working with Michael I. Jordan. His research interests are in machine learning, optimization, and statistics. Recently, he has worked on the foundations of deep learning, non-convex optimization, and reinforcement learning.
Selected Publications:
- Gradient Descent Finds Global Minima of Deep Neural Networks. Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. International Conference on Machine Learning (ICML 2019).
- Gradient Descent Converges to Minimizers. Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Conference on Learning Theory (COLT 2016).
- Matrix Completion Has No Spurious Local Minimum. Rong Ge, Jason D. Lee, and Tengyu Ma. Neural Information Processing Systems (NIPS 2016).
- Theoretical Insights into the Optimization Landscape of Over-Parameterized Shallow Neural Networks. Mahdi Soltanolkotabi, Adel Javanmard, and Jason D. Lee. IEEE Transactions on Information Theory, 2018.
- Regularization Matters: Generalization and Optimization of Neural Nets vs. Their Induced Kernel. Colin Wei, Jason D. Lee, Qiang Liu, and Tengyu Ma.
Honors and Awards:
- Sloan Research Fellowship in Computer Science
- NIPS Best Student Paper Award