Venue: Duke Marine Laboratory, Library space (see map below).

August 14

8:50-9am Opening remarks

9am-12noon Lecture series 1

Rongjie Lai (RPI) “Computational Mean-field Games: from conventional numerical methods to deep generative models”

12noon-2pm Lunch break

2pm-5pm Lecture series 2

Yuan Gao (Purdue) “Stochastic optimal control, large deviation and transition paths on continuous/discrete states”

6pm Reception

August 15

9am-12noon Lecture series 3

Leonardo Zepeda Nunez (Google Research and University of Wisconsin) “Reduced-order modeling with machine learning: from linear projections to hypernetworks”

12noon-2pm Lunch break

2pm-5pm Lecture series 4

Jonathan Siegel (Texas A&M) “Theory of ReLU Neural Networks: Representation, Interpolation, and Approximation”

August 16

9am-12noon Lecture series 5

Di Fang (Berkeley) “Introduction to quantum algorithms for scientific computation”

1pm-3pm Excursion

Conclusion of Summer School

4pm Shuttle service to Duke campus

Abstracts:

Di Fang (Berkeley) “Introduction to quantum algorithms for scientific computation”

In this lecture, we will start by introducing the fundamentals of quantum computing, including its basic principles and circuit construction. We will then delve into Hamiltonian simulation and explore various quantum algorithms for it. Finally, time permitting, we will address the challenges and constraints associated with extending quantum algorithms to general differential equations involving non-unitary dynamics.
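
As a small taste of this topic, the sketch below (our illustration, not material from the lecture; the toy two-qubit Hamiltonian and step count are arbitrary choices) implements first-order Lie-Trotter splitting, the basic workhorse of Hamiltonian simulation, and compares it against the exact propagator:

```python
# First-order Lie-Trotter Hamiltonian simulation on a toy two-qubit system:
# approximate exp(-i t (A + B)) by r alternating steps of exp(-i t A / r)
# and exp(-i t B / r). A and B are chosen as non-commuting Pauli terms.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

A = np.kron(X, X)                     # first Hamiltonian term
B = np.kron(Z, I)                     # second, non-commuting term
t, r = 1.0, 100                       # total evolution time, Trotter steps

exact = expm(-1j * t * (A + B))
step = expm(-1j * (t / r) * A) @ expm(-1j * (t / r) * B)
trotter = np.linalg.matrix_power(step, r)

# The operator-norm error of first-order splitting scales as O(t^2 / r),
# so doubling r roughly halves the printed error.
print(np.linalg.norm(trotter - exact, 2))
```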

Yuan Gao (Purdue) “Stochastic optimal control, large deviation and transition paths on continuous/discrete states”

This lecture focuses on the Hamilton-Jacobi method for studying (stochastic) dynamical systems arising from optimal control problems. We will first introduce general formulations of the stochastic optimal control problem, which is widely used in large deviation estimates for rare events and in mean-field games. We will then derive the Hamilton-Jacobi equations on continuous/discrete states that naturally arise in these problems. The solutions to certain Hamilton-Jacobi equations and their variational representations will then be used to compute the rate function in the large deviation principle, to estimate the energy barrier for transition paths, and to construct the global energy landscape. The generalized gradient flow structure connected with the large deviation rate function, and its non-gradient tilting, will also be discussed through various examples.
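
For orientation, the standard finite-horizon formulation behind this abstract can be stated as follows (a textbook summary with one common sign convention, not taken from the lecture notes):

```latex
% Controlled diffusion and value function:
dX_s = b(X_s, a_s)\,ds + \sqrt{2\varepsilon}\,dW_s,
\qquad
u(x,t) = \inf_{a(\cdot)} \mathbb{E}\!\left[ \int_t^T L(X_s, a_s)\,ds + g(X_T) \,\middle|\, X_t = x \right].

% Dynamic programming gives the Hamilton-Jacobi-Bellman equation:
\partial_t u + \inf_a \big\{ b(x,a)\cdot\nabla u + \varepsilon\,\Delta u + L(x,a) \big\} = 0,
\qquad u(x,T) = g(x).

% For the uncontrolled dynamics dX_s = b(X_s)\,ds + \sqrt{2\varepsilon}\,dW_s,
% the Freidlin-Wentzell large deviation rate functional as \varepsilon \to 0 is
I_{[0,T]}(\varphi) = \frac{1}{4} \int_0^T \big| \dot\varphi(s) - b(\varphi(s)) \big|^2 \, ds .
```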

Jonathan Siegel (Texas A&M) “Theory of ReLU Neural Networks: Representation, Interpolation, and Approximation”

This lecture focuses on the following three related questions. First, what is the class of functions that can be exactly represented using neural networks? Second, given a collection of points and values, how efficiently (in terms of the number of parameters) can these values be interpolated at the given points using neural networks? Finally, how efficiently can a given target function be approximated using neural networks? We will focus mainly on the ReLU and ReLU^k activation functions and consider both shallow and deep neural networks. Our goal is to provide an introduction to the results and methods used to address each of the aforementioned questions.
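
To make the interpolation question concrete, here is a minimal sketch (our illustration of the classical one-dimensional construction, not code from the lecture) showing that a shallow ReLU network with N-1 neurons reproduces N prescribed values exactly, by encoding slope changes in the output-layer coefficients:

```python
# Exact interpolation of 1D data by a shallow ReLU network:
# f(x) = y_0 + sum_i c_i * ReLU(x - x_i), where c_i is the change in slope
# of the piecewise linear interpolant at knot x_i.
import numpy as np

def relu_interpolant(xs, ys):
    """Shallow ReLU network matching all (xs, ys) pairs; xs must be sorted."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    slopes = np.diff(ys) / np.diff(xs)        # slope on each interval
    coeffs = np.diff(slopes, prepend=0.0)     # slope change at each knot
    def f(x):
        x = np.asarray(x, float)
        return ys[0] + np.sum(coeffs * np.maximum(x[..., None] - xs[:-1], 0.0), axis=-1)
    return f

xs, ys = [0.0, 1.0, 2.0, 4.0], [1.0, 3.0, 2.0, 2.5]
f = relu_interpolant(xs, ys)
print(f(np.array(xs)))   # reproduces ys exactly: [1.  3.  2.  2.5]
```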

Rongjie Lai (RPI) “Computational Mean-field Games: from conventional numerical methods to deep generative models”

Leonardo Zepeda Nunez (Google Research and University of Wisconsin) “Reduced-order modeling with machine learning: from linear projections to hypernetworks”