Semester Theme: Inverse Problems and Imaging.
Special Guest Lectures
Week of 3/28: Justin Solomon (MIT) [rescheduled to Fall 2022]
Week of 4/11: Haomin Zhou (Georgia Tech)
Working seminar
The working seminar meets Fridays 1:30-3:00pm in Physics 119.
Week 1, 1/21.
Kick-off meeting
Week 2, 1/28.
Yimin Zhong: "Some open questions in inverse problems"
Xiuyuan Cheng: "Challenges in differential analysis of single-cell data"
Week 3, 2/4.
Hongkai Zhao: “Point cloud data for shapes and beyond”
Week 4, 2/11.
Ziyu Chen: “Some problems in time-frequency analysis”
Week 5, 2/18.
Shira Golovin: "High-dimensional data similarities and discrepancies"
Nan Wu: "Gaussian processes on manifolds and related problems"
Week 6, 2/25.
Hau-Tieng Wu: "Single-channel blind source separation with optimal shrinkage and manifold denoising"
Tao Tang: "Intractable likelihood and simulation-based inference"
Week 7: 3/4.
Yixuan Tan: "Graph-based methods in manifold metric learning"
Discussion of topics for the literature reading group.
Week 8: 3/11.
(spring break)
Week 9: 3/18.
Reading of papers on "Empirical process and spectral methods". Presenters: Tao Tang, Yixuan Tan.
Week 10: 3/25.
Continued reading of papers on "Empirical process and spectral methods".
Week 11: 4/1.
Reading of the paper "Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation" by Mikhail Belkin. Presenter: Ziyu Chen.
Week 12: 4/8.
Guest student speaker: Inbar Seroussi (Weizmann)
Title: How Well Can We Generalize Nonlinear Models in High Dimensions?
Abstract: Modern learning algorithms such as deep neural networks operate in regimes that defy traditional statistical learning theory. Neural network architectures often contain more parameters than training samples, yet despite this huge complexity, the generalization error achieved on real data is small. In this talk, we aim to study the generalization properties of algorithms in high dimensions. We first show that algorithms in high dimensions require a small bias for good generalization, and that this is indeed the case for deep neural networks in the over-parametrized regime. We then provide lower bounds on the generalization error, in various settings, for any algorithm, and calculate such bounds using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, as well as existing results. These bounds are particularly useful when analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model, and they can serve as a benchmark for testing performance and optimizing the design of actual learning algorithms. Joint work with Ofer Zeitouni; more information at arxiv.org/abs/2103.14723.
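Background (standard material included only for reference, not a statement of the speaker's results): the bias referred to above enters through the usual bias-variance decomposition, and the basic random matrix fact invoked in many high-dimensional analyses is the Marchenko-Pastur law. For a predictor \hat f trained on data generated as y = f(x) + \varepsilon, with noise of variance \sigma^2 independent of \hat f,

E\big[(\hat f(x) - y)^2\big] = \big(E[\hat f(x)] - f(x)\big)^2 + \mathrm{Var}\big(\hat f(x)\big) + \sigma^2,

so a small bias term is necessary for a small expected test error. For an n x d data matrix X with i.i.d. standard Gaussian entries and aspect ratio \gamma = d/n \le 1, the eigenvalues of the sample covariance \tfrac{1}{n} X^\top X concentrate, as n, d \to \infty, on [\lambda_-, \lambda_+] with density

p(\lambda) = \frac{\sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}}{2\pi\gamma\lambda}, \qquad \lambda_\pm = (1 \pm \sqrt{\gamma})^2.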
Guest student speaker: Li Li (UCLA)
Title: The Calderón Problem for Classical and Fractional Operators
Abstract: In the context of partial differential equations, the study of inverse problems concerns the determination of certain terms in an operator from external information about solutions of the equations. In this talk, I will first introduce some uniqueness results for the classical Calderón problem, and then I will focus on fractional operators (e.g., the fractional Laplacian) and their associated inverse problems.
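Background (the standard formulations, included only for reference): the classical Calderón problem asks, for a bounded domain \Omega \subset \mathbb{R}^n and a positive conductivity \gamma, whether the Dirichlet-to-Neumann map

\Lambda_\gamma : f \mapsto \gamma\, \partial_\nu u|_{\partial\Omega}, \qquad \text{where } \nabla\cdot(\gamma\nabla u) = 0 \text{ in } \Omega,\ u = f \text{ on } \partial\Omega,

determines \gamma. In the fractional version, one takes 0 < s < 1 and a potential q, solves ((-\Delta)^s + q)u = 0 in \Omega with exterior data u = f in \Omega_e := \mathbb{R}^n \setminus \overline{\Omega}, and asks whether the exterior map \Lambda_q : f \mapsto (-\Delta)^s u|_{\Omega_e} determines q.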
Week 13: 4/15.
Haomin Zhou (Georgia Tech). Guest lectures on three days during the week.