Venue: Duke University, Physics Building

Lectures and Lightning Presentations in Room 119

Coffee break and poster session in the Commons/Lounge, 101 Physics

August 17

9:00 am – 12:15 pm, Invited Presentations

9:00-9:45, Anru Zhang (Duke) “Tensor Learning in 2020s: Methodology, Theory, and Applications”

9:45-10:30, Caroline Moosmueller (UNC) “Approximations and learning in the Wasserstein space”

10:30-10:45, Coffee break

10:45-11:30, Rongjie Lai (RPI) “Learning Dynamics guided by Mean-field Games”

11:30-12:15, Jonathan Siegel (Texas A&M) “Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces”

12:15 pm – 2:00 pm, Lunch break

2:00 pm – 4:00 pm, Invited Presentations

2:00-2:45, Yimin Zhong (Auburn) “Implicit boundary integral method for linearized Poisson Boltzmann equation: computation and analysis”

2:45-3:30, Abiy Tasissa (Tufts) “Local Sparse coding via a Delaunay triangulation”

3:30-4:00, Coffee break

4:00 pm – 5:30 pm, Lightning Presentations

4:00-4:15, Baoli Hao (Illinois Institute of Technology) “Mixed sign interactions in the 1D swarmalator model”

4:15-4:30, Madison Ihrig (Columbia University), “Synchronous Optimal Control of Influencers Associated With Graph Spreading Dynamic”

4:30-4:45, Johannes Krotz (University of Tennessee Knoxville), “A Hybrid Monte Carlo, Discontinuous Galerkin method for linear kinetic transport equations”

4:45-5:00, Coffee break

5:00-5:15, Sun Lee (Penn State University), “Computing Multiple Solutions of Elliptic Semi-linear Equations”

5:15-5:30, Daniel Margolis (Southern Methodist University), “Partial Differential Equation Examples solved by SUNDIALS in Fortran 2003”

August 18

9:00 am – 12:15 pm, Invited Presentations

9:00-9:45, Leonardo Andres Zepeda Nunez (Google Research and University of Wisconsin) “Statistical Downscaling via Optimal Transport and Conditional Diffusion Models”

9:45-10:30, Mohammad Farazmand (NCSU) “Shape-morphing neural networks for solving PDEs with conserved quantities”

10:30-10:45, Coffee break

10:45-11:30, Yuan Gao (Purdue) “Thermodynamic limit, and global energy landscape for non-equilibrium chemical reactions”

11:30-12:15, Di Fang (Berkeley) “Quantum algorithms for Hamiltonian simulation with unbounded operators”

12:15 pm, Conclusion of the workshop

 


Abstracts:

Di Fang (Berkeley) “Quantum algorithms for Hamiltonian simulation with unbounded operators”

Abstract: Recent years have witnessed tremendous progress in developing and analyzing quantum computing algorithms for quantum dynamics simulation of bounded operators (Hamiltonian simulation). However, many scientific and engineering problems require the efficient treatment of unbounded operators, which frequently arise due to the discretization of differential operators. Such applications include molecular dynamics, electronic structure theory, quantum control and quantum machine learning. We will introduce some recent advances in quantum algorithms for efficient unbounded Hamiltonian simulation, including Trotter-type splitting, the quantum highly oscillatory protocol (qHOP), and a quantum integral formulation in the interaction picture. The latter yields a surprising superconvergence result for regular potentials.
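As a toy illustration of the Trotter-type splitting mentioned above (a generic first-order Lie-Trotter sketch with made-up Hermitian matrices, not code from the talk), the splitting error for exp(-itH) with H = A + B decays like O(1/n) in the number of time steps:

```python
import numpy as np

def expm_herm(H, t):
    """Compute exp(-i t H) for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2   # made-up Hermitian pieces
B = rng.standard_normal((8, 8)); B = (B + B.T) / 2
t = 0.2
exact = expm_herm(A + B, t)

errs = {}
for n in (10, 100):
    step = expm_herm(A, t / n) @ expm_herm(B, t / n)  # one Lie-Trotter step
    errs[n] = np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)
    print(n, errs[n])   # error shrinks roughly 10x as n grows 10x
```

On a quantum computer each factor becomes a short circuit; the point of the algorithms in the talk is to retain this kind of convergence even when A and B are unbounded discretized differential operators.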

 

Mohammad Farazmand (NCSU) “Shape-morphing neural networks for solving PDEs with conserved quantities”

Abstract: I will introduce shape-morphing modes for efficient and scalable approximation of solutions to time-dependent PDEs. Spectral methods typically assume the solution of a PDE as the linear combination of static modes, such as Fourier modes. This is quite inefficient for PDEs whose solutions are localized and/or dominated by advection. In contrast, in our framework, the modes depend nonlinearly on time-varying parameters, thus allowing the modes to change shape and adapt to the solution of the PDE over time. I will show that the shape parameters can be evolved optimally by solving a system of ODEs. I will also discuss the interpretation of this idea as a neural network whose weights and biases are time-dependent. In contrast to conventional neural nets, no training is required to determine the network parameters; instead, they are evolved by solving a known system of ODEs. Finally, I’ll show that, in our framework, one can easily ensure that the approximate solution preserves the conserved quantities of the PDE.

 

Yuan Gao (Purdue) “Thermodynamic limit, and global energy landscape for non-equilibrium chemical reactions”

Abstract: Chemical reactions can be modeled by a random time-changed Poisson process on countable states. The macroscopic behaviors such as the mean-field equation and the large deviation rate function can be studied via the WKB reformulation. The reaction rate equation is an ODE corresponding to the zero-cost trajectory (LLN-type path) in the Hamiltonian dynamics. Moreover, the LDP at finite t gives the rate of concentration on the zero-cost trajectory, while the LDP for invariant measures gives the energy landscape of a non-equilibrium reaction. The latter is also proved to be a selected unique weak KAM solution to the corresponding stationary HJE. The LDP rate function also motivates a relative entropy type running cost in the stochastic optimal control formulation for the transition path theory. This formulation is used to compute the transition path and energy barrier for both non-equilibrium chemical reactions and drift-diffusion processes.

Rongjie Lai (RPI) “Learning Dynamics guided by Mean-field Games”
Abstract: Mean field game (MFG) problems analyze the strategic movements of a large number of similar rational agents seeking to minimize their costs. However, in many practical applications, the cost function of MFGs may not be available, rendering the associated agent dynamics unavailable. In this talk, I will discuss our recent work on learning dynamics guided by MFGs. We begin by studying a low-dimensional setting using conventional discretization methods. We propose a bilevel optimization formulation for learning dynamics guided by MFGs with unknown obstacles and metrics. We also establish local unique identifiability results and design an alternating gradient algorithm with convergence analysis. Furthermore, we extend our proposed bilevel method to a deep learning-based algorithm by bridging the trajectory representation of MFG with a special type of deep generative model known as normalizing flows. Our numerical experiments demonstrate the efficacy of the proposed methods.

 

Caroline Moosmueller (UNC) “Approximations and learning in the Wasserstein space”

Abstract: Detecting differences and building classifiers between distributions, given only finite samples, are important tasks in a number of scientific fields. Optimal transport and the Wasserstein distance have evolved as the most natural concept to deal with such tasks but have some computational drawbacks.
In this talk, we describe an approximation framework through local linearizations that significantly reduces both the computational effort and the required training data in supervised learning settings. We also introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space. We provide guarantees on the embedding quality, including when explicit descriptions of the probability measures are not available, and one must deal with finite samples instead. The proposed algorithms are demonstrated in pattern recognition tasks in imaging and medical applications.
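A minimal 1-D sketch of the linearization idea (with illustrative Gaussian measures, not data from the talk): in one dimension the optimal transport map to a fixed reference is the monotone rearrangement, so empirical measures embed as sorted sample vectors and the linearized W2 distance is just a Euclidean distance between embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
a = rng.normal(2.0, 1.0, n)     # samples from two illustrative measures
b = rng.normal(-1.0, 1.5, n)

# In 1-D the optimal transport map to any fixed reference measure is the
# monotone rearrangement, so the LOT embedding of an empirical measure
# (with equal sample counts) is simply its sorted sample vector.
emb_a, emb_b = np.sort(a), np.sort(b)

# Linearized W2 = L2 distance between the embeddings; in 1-D this
# recovers the true W2 distance between the empirical measures.
w2 = np.sqrt(np.mean((emb_a - emb_b) ** 2))
print(w2)   # near the population value sqrt((2-(-1))**2 + (1-1.5)**2) ~ 3.04
```

In higher dimensions the transport maps are no longer sorts and the linearization is only an approximation; quantifying that gap is part of what the guarantees in the talk address.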

 

Jonathan Siegel (Texas A&M) “Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces”

Abstract: Deep ReLU neural networks are among the most widely used class of neural networks in practical applications. We consider the problem of determining optimal $L_p$-approximation rates for deep ReLU neural networks on the Sobolev class $W^s(L_q)$ for all $1\leq p,q\leq \infty$ and $s > 0$. Existing sharp results are only available when $p=q=\infty$. In our work, we extend both the upper and lower bounds and determine the best possible rates for all $p,q$ and $s$ for which a compact Sobolev embedding holds, i.e. when $s/d > 1/q - 1/p$. This settles in particular the classical non-linear regime where $p > q$. We will discuss some of the technical details of the proof and conclude by giving a few open research directions.

 

Abiy Tasissa (Tufts) “Local Sparse coding via a Delaunay triangulation”

Abstract: Sparse coding is a technique of representing data as a sparse linear combination of a set of vectors. This representation facilitates computation and analysis of high-dimensional data that is prevalent in many applications. We study sparse coding in the setting where the set of vectors defines a unique Delaunay triangulation. We propose a weighted l1 regularizer and show that it provably yields a sparse solution. Further, we show that the stability of the sparse codes depends on local distances, which can be suitably estimated using the Cayley-Menger determinant. We make connections to dictionary learning, manifold learning and computational geometry. We discuss an optimization algorithm to learn the sparse codes and the optimal set of vectors given a set of data points. Finally, we present numerical experiments showing that the resulting sparse representations give competitive performance in the problem of clustering.
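To see why a Delaunay triangulation yields local sparse codes, note that a point inside a simplex of the triangulation is represented exactly by the barycentric coordinates of that simplex's vertices: at most d+1 nonzero, nonnegative coefficients summing to one. A tiny sketch with hypothetical atoms (illustrative, not the talk's weighted-l1 algorithm):

```python
import numpy as np

# Three hypothetical dictionary atoms forming one Delaunay triangle in 2-D.
D = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0.2, 0.3])               # data point inside the triangle

# Solve y = sum_i c_i * D_i subject to sum_i c_i = 1 (barycentric coords).
A = np.vstack([D.T, np.ones(3)])
b = np.append(y, 1.0)
c = np.linalg.solve(A, b)
print(c)   # [0.5, 0.2, 0.3]: nonnegative, sums to 1, at most d+1 nonzeros
```

The regularizer studied in the talk recovers such local codes without knowing in advance which simplex contains the data point.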

Leonardo Andres Zepeda Nunez (Google Research and University of Wisconsin) “Statistical Downscaling via Optimal Transport and Conditional Diffusion Models”
Abstract: We introduce a two-stage probabilistic framework for statistical downscaling between unpaired data. Statistical downscaling seeks a map to transform low-resolution data from a (possibly biased) coarse-grained numerical scheme to high-resolution data that is consistent with a high-fidelity one. Our framework tackles the problem by composing two transformations: a debiasing step that is performed by an optimal transport map, and an upsampling step that is achieved by a probabilistic diffusion model with a posteriori conditional sampling. This approach characterizes a conditional distribution without the need for paired data, and faithfully recovers relevant physical statistics from biased samples.
We will demonstrate the utility of the proposed approach on one- and two-dimensional fluid flow problems, which are representative of the core difficulties present in numerical simulations of weather and climate. We will show that our method produces realistic high-resolution outputs from low-resolution inputs, at upsampling factors of 8x and 16x, while correctly matching the statistics of physical quantities, even when the low-frequency content of the inputs and outputs does not match, a crucial but difficult-to-satisfy assumption needed by current state-of-the-art alternatives.
Anru Zhang (Duke) “Tensor Learning in 2020s: Methodology, Theory, and Applications”
Abstract: The analysis of tensor data, i.e., arrays with multiple directions, has become an active research topic in the era of big data. Datasets in the form of tensors arise from a wide range of scientific applications. Tensor methods also provide unique perspectives to many high-dimensional problems, where the observations are not necessarily tensors. Problems in high-dimensional tensors generally possess distinct characteristics that pose great challenges to the data science community.

In this talk, we discuss several recent advances in tensor learning and their applications in genomics and computational imaging. We also illustrate how we develop statistically optimal methods and computationally efficient algorithms that interact with the modern theories of computation, high-dimensional statistics, and non-convex optimization.

Yimin Zhong (Auburn) “Implicit boundary integral method for linearized Poisson Boltzmann equation: computation and analysis”
Abstract: In this talk, I will give an introduction to the so-called implicit boundary integral method, which is based on the co-area formula and provides a simple quadrature rule for boundary integrals on general surfaces. Then, I will focus on the application of solving the linearized Poisson Boltzmann equation, which is used to model the electric potential of protein molecules in a solvent. Near the singularity, I will briefly discuss the choices of regularization/correction and illustrate the effect in both cases. In the end, I will show the numerical error estimate based on harmonic analysis tools.
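A minimal sketch of the co-area quadrature idea (a standard smoothed-delta level-set computation, illustrative rather than the talk's method): a boundary integral over a surface given only implicitly by a signed distance function d can be evaluated as a volume integral against a smoothed delta of d.

```python
import numpy as np

# Approximate the length of the unit circle from its signed distance
# function alone, via the co-area formula with a smoothed delta:
#   length = integral of delta_eps(d(x)) dx      (since |grad d| = 1).
h = 0.01
x = np.arange(-2.0, 2.0, h)
X, Y = np.meshgrid(x, x)
d = np.hypot(X, Y) - 1.0               # signed distance to the unit circle

eps = 3 * h                            # smoothing width: a few grid cells
delta = np.where(np.abs(d) < eps,
                 (1 + np.cos(np.pi * d / eps)) / (2 * eps), 0.0)
length = delta.sum() * h * h
print(length)                          # close to 2*pi ~ 6.283
```

Replacing the integrand 1 with a kernel evaluated at the closest-point projection gives the boundary-integral quadrature used for equations such as the linearized Poisson Boltzmann equation; the singular-kernel case is where the regularization choices discussed in the talk matter.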
Madison Ihrig (Columbia University), “Synchronous Optimal Control of Influencers Associated With Graph Spreading Dynamic”

Abstract: We create and study a model that exerts control alongside network spreading dynamics in order to prioritize selected subgraph interactions. The formulation utilizes an SI infection model and driver node control over a fully connected graph. Analytical and computational tests demonstrate that the model is able to identify driver nodes to minimize the influence of selected nodes.

Johannes Krotz (University of Tennessee Knoxville), “A Hybrid Monte Carlo, Discontinuous Galerkin method for linear kinetic transport equations”

Abstract: We present a hybrid method for time-dependent particle transport problems that combines Monte Carlo (MC) estimation with deterministic solutions based on discrete ordinates. For the spatial discretization, the MC algorithm computes a piecewise constant solution, while the discrete ordinates method uses bilinear discontinuous finite elements. After hybridization, the problem solved by Monte Carlo is scattering-free, resulting in a simple, efficient solution procedure. Between time steps, we use a projection approach to “relabel” collided particles as uncollided particles. On a series of standard 2-D Cartesian test problems, we observe that our hybrid method improves accuracy and reduces computational complexity by approximately an order of magnitude relative to standard discrete ordinates solutions.