Faculty
Venkat Chandrasekaran
Mathieu Desbrun
Thomas Hou
Houman Owhadi
Peter Schröder
Andrew Stuart
Joel Tropp
Instructors
Franca Hoffmann
Ka Chun Lam
Postdoctoral Scholars
Alfredo Garbuno-Inigo
Bamdad Hosseini
Pengfei Liu
Krithika Manohar
Melike Sirlanci
Graduate Students
Max Budninskiy
Utkan Candogan
JiaJie Chen
De Huang
Nikola Kovachki
Matt Levine
Riley Murray
Florian Schaefer
Yong Shen Soh
Yousuf Soliman
Armeen Taeb
Gene R. Yoo
Shumao Zhang
Lunch Seminars
(Will be held at 12 noon in Annenberg 213, unless otherwise specified.)
September 25, 2024
Jose Antonio Carrillo
▦ Primal dual methods for Wasserstein gradient flows ▦ Combining the classical theory of optimal transport with modern operator splitting techniques, I will present a new numerical method for nonlinear, nonlocal partial differential equations, arising in models of porous media, materials science, and biological swarming. Using the JKO scheme, along with the Benamou-Brenier dynamical characterization of the Wasserstein distance, we reduce computing the solution of these evolutionary PDEs to solving a sequence of fully discrete minimization problems, with strictly convex objective function and linear constraint. We compute the minimizer of these fully discrete problems by applying a recent, provably convergent primal dual splitting scheme for three operators. By leveraging the PDE's underlying variational structure, our method overcomes traditional stability issues arising from the strong nonlinearity and degeneracy, and it is also naturally positivity preserving and entropy decreasing. Furthermore, by transforming the traditional linear equality constraint, as has appeared in previous work, into a linear inequality constraint, our method converges in fewer iterations without sacrificing any accuracy. Remarkably, our method is also massively parallelizable and thus very efficient in resolving high dimensional problems. We prove that minimizers of the fully discrete problem converge to minimizers of the continuum JKO problem as the discretization is refined, and in the process, we recover convergence results for existing numerical methods for computing Wasserstein geodesics. Finally, we conclude with simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our numerical method.
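For orientation (these formulas are standard and not quoted from the abstract), the JKO scheme advances the density at each step of size τ by solving
\[
\rho^{k+1} \in \operatorname*{arg\,min}_{\rho}\;\Big\{ \frac{1}{2\tau}\, W_2^2(\rho,\rho^k) + E(\rho) \Big\},
\]
where E is the free energy driving the PDE, and the Benamou-Brenier formula characterizes the squared Wasserstein distance dynamically as
\[
W_2^2(\rho_0,\rho_1) \;=\; \min_{(\rho,v)} \Big\{ \int_0^1\!\!\int \rho\,|v|^2\,dx\,dt \;:\; \partial_t\rho + \nabla\cdot(\rho v) = 0,\ \rho(0)=\rho_0,\ \rho(1)=\rho_1 \Big\},
\]
which is what turns each step into a convex problem with a linear (continuity-equation) constraint.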
October 16, 2024
Rupert Frank
▦ A `Liquid-Solid' Phase Transition in a Simple Model for Swarming ▦ We consider a non-local shape optimization problem, which is motivated by a simple model for swarming and other self-assembly/aggregation models, and prove the existence of different phases. In particular, we show that in the large mass regime the ground state density profile is the characteristic function of a round ball. An essential ingredient in our proof is a strict rearrangement inequality with a quantitative error estimate.
The talk is based on joint work with E. Lieb.
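The abstract does not spell out the functional; a schematic form of such nonlocal shape-optimization problems (my notation) is
\[
\min\Big\{ \tfrac12 \iint W(x-y)\,\rho(x)\,\rho(y)\,dx\,dy \;:\; 0 \le \rho \le 1,\ \int \rho\,dx = m \Big\},
\]
with an attractive-repulsive interaction kernel W; the result described above says that for large mass m the minimizing density ρ is the characteristic function of a ball.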
October 23, 2024
Steven Low
▦ Mitigation of Cascading Failures in Power Systems ▦ Line failure in a power grid propagates in non-local, intricate and counterintuitive ways because of the interplay between power flow physics and network topology, making the mitigation of cascading failure difficult. The conventional approach to grid reliability is through building redundant lines. In this talk, we present an opposite approach to grid reliability through failure localization, by judiciously removing lines and adopting a new class of frequency control algorithms in real time. The topology design partitions the network into regions that are connected in a tree structure. The frequency control automatically adjusts controllable generators and loads to minimize disruption and localize failure propagation. This approach is derived from a spectral view of power flow equations that relates failure propagation to the graphical structure of the grid through its Laplacian matrix. We summarize the underlying theory and present simulation results demonstrating that our approach not only localizes failure propagation, as promised by the theory, but also improves overall grid reliability even though it reduces line redundancy.
(Joint work with Daniel Guo, Chen Liang, Alessandro Zocca, and Adam Wierman)
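As background (an assumption on my part, not stated in the abstract), the spectral view alluded to is usually formulated through the DC power-flow approximation, in which injections, phase angles, and line flows are linked by the weighted graph Laplacian:
\[
p = L\,\theta, \qquad L = C B C^{\mathsf T}, \qquad f = B\,C^{\mathsf T}\theta,
\]
where C is the signed incidence matrix of the grid graph, B the diagonal matrix of line susceptances, θ the bus phase angles, p the nodal injections, and f the line flows; the effect of removing a line can then be read off from how it perturbs L.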
October 30, 2024
Gianluca Favre
▦ Kinetic model with thermalization for a gas with total energy conservation ▦
We consider the thermalization of a gas towards a Maxwellian velocity distribution which depends locally on the temperature of the background. The exchange of kinetic and thermal energy between the gas and the background drives the system towards a global equilibrium with constant temperature. The heat flow is governed by Fourier's law.
Mathematically, we consider a coupled system of nonlinear kinetic and heat equations, where in both cases we add a term that describes the energy exchange. For this problem we prove existence of solutions in 1D, exponential convergence to equilibrium via a hypocoercivity technique, and a macroscopic limit towards a cross-diffusion system. In the last two cases a perturbative approach is used. It is worth noting that even without heat conductivity we can show temperature diffusion, thanks to the transport of energy. It is also interesting that the thermalization is strongly influenced by the background temperature. All these aspects have also been investigated numerically, providing simulations in 2D.
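One schematic way to write such a coupling (an illustrative sketch, not the precise system of the talk) is a BGK-type relaxation toward the local Maxwellian of the background temperature θ, with the exchanged kinetic energy appearing as a source in the heat equation:
\[
\partial_t f + v\cdot\nabla_x f = \rho_f\, M_{\theta}(v) - f,
\qquad
\partial_t \theta - \kappa\,\Delta \theta = \int \tfrac{|v|^2}{2}\,\big(f - \rho_f M_{\theta}\big)\,dv,
\]
so that (with unit heat capacity) the total kinetic plus thermal energy is formally conserved; the case κ = 0 corresponds to the remark that temperature diffusion can still emerge through transport.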
January 22, 2024
Richard Kueng
▦ Binary Component Decomposition of Matrices ▦ We study the problem of decomposing a low-rank matrix into a factor with binary entries, either from {±1} or from {0,1}, and an unconstrained factor. This research answers fundamental questions about the existence and uniqueness of these decompositions. It also leads to tractable factorization algorithms that succeed under a mild deterministic condition.
This is joint work with Joel Tropp (Caltech).
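In symbols (my notation), the decompositions in question take the form
\[
A = S\,W^{\mathsf T}, \qquad S \in \{\pm 1\}^{n\times r}\ \text{or}\ S \in \{0,1\}^{n\times r}, \qquad W \in \mathbb{R}^{m\times r},
\]
for a rank-r matrix A in R^{n×m}; the questions addressed are when such a factorization exists, when it is unique (up to natural symmetries), and how to compute it.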
January 29, 2024
Sven Wang
▦ Statistical Guarantees for MAP Estimators in PDE-Constrained Regression Problems ▦
The main topic of the talk is convergence rates for penalised least squares (PLS) estimators in non-linear statistical inverse problems, which can also be interpreted as Maximum a Posteriori (MAP) estimators for certain Gaussian priors. Under general conditions on the forward map, we prove convergence rates for PLS estimators.
In our main example, the parameter f is an unknown heat conductivity function in a steady state heat equation [a second order elliptic PDE]. The observations consist of a noisy version of the solution u[f] to the boundary value problem corresponding to f. The PDE-constrained regression problem is shown to be solved in a minimax-optimal way.
This is joint work with S. van de Geer and R. Nickl. If time permits, we will mention some related work on the non-parametric Bayesian approach, as well as computational questions for the Bayesian posterior.
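A schematic form of the estimator and of the elliptic example (my notation, assuming observations at design points x_i) is
\[
\hat f \in \operatorname*{arg\,min}_{f}\ \sum_{i=1}^{N}\big(Y_i - u[f](x_i)\big)^2 + \lambda\,\|f\|_{H^s}^2,
\qquad
\nabla\cdot\big(f\,\nabla u[f]\big) = g \ \text{in}\ \mathcal{O},\quad u[f] = 0\ \text{on}\ \partial\mathcal{O},
\]
where the penalty corresponds to a Gaussian prior whose covariance is tied to the H^s norm.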
February 26, 2024
▦ An Optimal Transport Perspective on Uncertainty Propagation ▦
In many scientific areas, a deterministic model (e.g., a differential equation) is equipped with parameters. In practice, these parameters might be uncertain or noisy, and so an honest model should account for these uncertainties and provide a statistical description of the quantity of interest. Underlying this computational problem is a fundamental question: if two "similar" functions push forward the same measure, are the resulting measures close, and if so, in what sense? In this talk, I will first show how the probability density function (PDF) can be approximated, and present applications to nonlinear optics. We will then discuss the limitations of PDF approximation, and present an alternative Wasserstein-distance formulation of this problem, which through optimal-transport theory yields a simpler theory.
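A basic estimate behind the Wasserstein formulation (a standard fact, not quoted from the abstract): if two maps f and g push forward the same measure μ, then the coupling (f,g)_{#}μ gives
\[
W_p\big(f_{\#}\mu,\ g_{\#}\mu\big) \;\le\; \|f - g\|_{L^p(\mu)},
\]
so closeness of the maps in L^p(μ) immediately controls the Wasserstein distance between the propagated measures, with no smoothness of the resulting densities required.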
April 15, 2024
CANCELLED
▦ TBA ▦
April 22, 2024
CANCELLED
▦ TBA ▦
April 29, 2024
▦ TBA ▦
May 20, 2024
CANCELLED
▦ TBA ▦
Other Seminars
(Time and location vary)
August 22, 2024
• Special CMX Seminar •
Annenberg 213
12:00pm
Giacomo Garegnani
▦ Bayesian Inference of Multiscale Differential Equations ▦
Inverse problems involving differential equations defined on multiple scales naturally arise in several engineering applications. The computational cost due to discretization of multiscale equations can be reduced by employing homogenization methods, which allow for cheaper computations. Nonetheless, homogenization techniques introduce a modelling error, which has to be taken into account when solving inverse problems. In this presentation, we consider the treatment of the homogenization error in the framework of inverse problems involving either an elliptic PDE or a Langevin diffusion process. In both cases, theoretical results involving the limit of oscillations of vanishing amplitude are provided, and computational techniques for dealing with the modelling error are presented.
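In the elliptic case, the setting is schematically (my notation) as follows:
\[
-\nabla\cdot\big(A(x/\varepsilon)\,\nabla u^{\varepsilon}\big) = g
\quad\xrightarrow[\ \varepsilon\to 0\ ]{}\quad
-\nabla\cdot\big(A^{0}\,\nabla u^{0}\big) = g,
\]
so the inverse problem is solved with the cheap homogenized forward map u^0 while the data are generated by u^ε, and the discrepancy between u^ε and u^0 is the modelling error that the Bayesian formulation has to account for.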
November 19, 2024
• Special CMX Seminar •
Annenberg 213
4:30pm
Matthew Thorpe
▦ How Many Labels Do You Need For Semi-Supervised Learning? ▦
Given a data set of which a small subset is labelled, the goal of semi-supervised learning is to find the unknown labels. A popular method is to minimise a discrete p-Dirichlet energy defined on a graph constructed from the data. As the size of the data set increases, one hopes that solutions of the discrete problem converge to a continuum variational problem with the continuum p-Dirichlet energy. It follows from Sobolev regularity that one cannot impose constraints if p is less than the dimension of the data; hence, in this regime, one must also increase the number of labels in order to avoid labels "disappearing" in the limit. In this talk I will address the question of what is the minimal number of labels. To compare labelling functions on different domains we use a metric based on optimal transport, which then allows for the application of methods from the calculus of variations, in particular Gamma-convergence, and methods from PDEs, such as constructing barrier functions in order to apply the maximum principle. We can further show rates of convergence.
This is joint work with Jeff Calder (Minnesota) and Dejan Slepcev (CMU).
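A schematic form of the discrete and continuum energies involved (my notation, up to normalizing constants) is
\[
E_{n,p}(u) = \sum_{i,j} w_{ij}\,\big|u(x_i)-u(x_j)\big|^{p},
\qquad
E_{\infty,p}(u) = \int |\nabla u(x)|^{p}\,\rho(x)^{2}\,dx,
\]
minimized subject to u = y on the labelled points, where w_{ij} are graph weights built from pairwise distances and ρ is the data density; the Sobolev obstruction mentioned above is that pointwise constraints are not felt by the continuum energy when p is below the dimension of the data, which is why the number of labels must grow with n in that regime.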
December 6, 2024
• CMX Special Seminar •
Annenberg 213
4:00pm
Jose Antonio Carrillo
▦ Consensus Based Models and Applications to Global Optimization ▦
We introduce a novel first-order stochastic swarm intelligence (SI) model in the spirit of consensus formation models, namely a consensus-based optimization (CBO) algorithm, which may be used for the global optimization of a function in multiple dimensions. The CBO algorithm allows for passage to the mean-field limit, which results in a nonstandard, nonlocal, degenerate parabolic partial differential equation (PDE). Exploiting tools from PDE analysis we provide convergence results that help to understand the asymptotic behavior of the SI model. We further present numerical investigations underlining the feasibility of our approach with applications to machine learning problems.
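One common way of writing the CBO particle dynamics (a standard formulation; details may differ from the talk) is
\[
dX^{i}_{t} = -\lambda\,\big(X^{i}_{t} - \bar x^{\alpha}_{t}\big)\,dt + \sigma\,\big|X^{i}_{t} - \bar x^{\alpha}_{t}\big|\,dW^{i}_{t},
\qquad
\bar x^{\alpha}_{t} = \frac{\sum_{i} X^{i}_{t}\,e^{-\alpha f(X^{i}_{t})}}{\sum_{i} e^{-\alpha f(X^{i}_{t})}},
\]
where f is the objective function: particles drift toward a Gibbs-weighted consensus point and their fluctuations vanish as consensus forms, which is what makes the mean-field PDE degenerate.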
December 16, 2024
• CMX Special Seminar •
Annenberg 213
4:00pm
Zhenzhen Li
▦ A New Non-convex Optimization Framework For Low-rank Matrix Recovery with Mathematical Guarantee ▦
Recent years have witnessed the growing importance of non-convex methods in many industrial and practical problems. Many low-rank related problems can be solved by reformulating them as non-convex optimization problems. Surprisingly, these optimization problems usually do not have spurious local minima, and all saddle points are strict under the Euclidean parameterized setting. Although a dimension-free polynomial convergence can be guaranteed for many problems, numerical experiments have demonstrated much better performance than what the current theory predicts. Different from previous non-convex methods (with weaker theoretical convergence guarantees) or convex relaxations (with heavier computational cost), in this talk we will discuss a new global non-convex optimization framework for solving a general inverse problem of low-rank matrices under the Riemannian manifold setting. Given some random measurements of a low-rank matrix, how does a least squares loss work via Riemannian gradient descent (a light computational cost version) with random initialization? We give a rigorous mathematical analysis of both the asymptotic convergence behavior and the fast convergence rate under isometry or weaker isometry conditions. More specifically, in the isometry case the low-rank matrix manifold of rank r (r << n) consists of 2^r branches. We will show that a random initialization falls with probability 1 into an intrinsic branch. Further, it needs O(log n + log(1/epsilon)) iterations to generate an epsilon-accurate solution. Similar results also hold for low-rank matrix recovery given some random information under mild conditions (the weaker isometry case). Potential applications include, but are not limited to, low-rank matrix problems such as matrix sensing, matrix completion, low-rank Hankel matrix recovery, phase retrieval, robust PCA, and non-convex flow.
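A schematic form of the iteration being analyzed (my notation) is Riemannian gradient descent on the fixed-rank manifold:
\[
F(X) = \tfrac12\,\big\|\mathcal{A}(X) - y\big\|_2^{2},
\qquad
X_{k+1} = \mathcal{R}_{X_k}\!\big(-\eta\,\operatorname{grad} F(X_k)\big),
\]
with a linear measurement operator, the Riemannian gradient (the Euclidean gradient projected onto the tangent space of the rank-r manifold at the current iterate), a step size η, and a retraction back onto the manifold.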
January 10, 2024
• CMX Special Seminar •
Annenberg 213
12:00pm
Mitchell Luskin
▦ Twistronics: manipulating the spectrum of Hamiltonian operators for two-dimensional layered structures through their twist angle ▦
Stacking and twisting a few layers of 2D materials such as graphene opens the possibility of tuning the electronic and optical properties of 2D materials. One of the main issues encountered in the modeling of 2D heterostructures is that lattice mismatch and rotations between the layers destroy the periodic character of the system. I will present basic concepts and efficient computational methods for mechanical relaxation, electronic density of states, and conductivity in the incommensurate setting.
Superconductivity has recently been discovered in twisted bilayer graphene at a "magic" twist angle with an isolated "flat band." I will describe our search for superconductivity in twisted trilayer graphene, in collaboration with the experimental group of Ke Wang, by computing its mechanical relaxation and the spectrum (band structure) of its Hamiltonian.
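For context (a standard fact, not taken from the abstract), the twist angle θ sets the period of the moiré superlattice,
\[
L_{\mathrm{moir\acute e}} \approx \frac{a}{2\sin(\theta/2)},
\]
where a is the monolayer lattice constant; a small twist therefore produces a very large moiré cell, and generic angles make the stacked system incommensurate rather than periodic.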
January 17, 2024
• CMX Special Seminar •
Annenberg 213
4:30pm
Sergios Agapiou
▦ Posterior Contraction Rates for Some Possibly Nonlinear Inverse Problems ▦
We will consider a family of possibly nonlinear inverse problems subject to Gaussian additive white noise. We will assume truncated Gaussian priors, and our interest will be in studying the asymptotic performance of the Bayesian posterior in the small noise limit. In particular, we will develop a theory for obtaining posterior contraction rates. The theory is based on the techniques of Knapik and Salomond 2018, which show how to derive posterior contraction rates for inverse problems using rates of contraction for direct problems and the notion of the modulus of continuity. We will work under the assumption that the forward operator can be associated to a linear operator in a certain sense. We will present techniques from regularization theory, which allow both to bound the modulus of continuity and to derive optimal rates of contraction for the direct problem by appropriately tuning the prior truncation level. Finally, we will combine these results to obtain optimal rates of contraction for a range of inverse problems.
This is joint work with Peter Mathe (Weierstrass Institute, Berlin).
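Schematically (my paraphrase of the mechanism), if the forward map G admits a modulus of continuity ω on the relevant sets,
\[
\|f - f_0\| \;\le\; \omega\big(\|G(f) - G(f_0)\|\big),
\]
then a posterior contraction rate δ_n for the direct problem (around G(f_0)) transfers to a rate ω(δ_n) for the inverse problem (around f_0), and regularization theory supplies both the bound on ω and the optimal tuning of the prior truncation level.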
March 3, 2024
• CMX Special Seminar •
Annenberg 213
4:30pm
▦ Mathematical and Computational Aspects of Imaging with Waves ▦
Wave based imaging is an inverse problem for a wave equation or a system of equations with a wide range of applications in nondestructive testing of structures such as airplane wings, ultrasound for medical diagnosis, radar, sonar, geophysical exploration, etc. It seeks to determine scattering structures in a medium, modeled mathematically by a reflectivity function, from data collected by sensors that probe the medium with signals and measure the resulting waves. Most imaging methods formulate the inverse problem as a least squares data fit optimization, and assume a linear mapping between the unknown reflectivity and the data. The linearization, known as the Born (single scattering) approximation, is not accurate in strongly scattering media, so the reconstruction of the reflectivity may be poor. I will describe a new inversion methodology that is based on a reduced order model approach. This borrows ideas from dynamical systems, where the reduced order model is a projection of an operator, called the wave propagator, which describes the propagation of the waves in the unknown medium. I will explain how such a reduced order model can be constructed from measurements at the sensors and then I will show how it can be used for improving the existing inversion methodology.
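For reference (a standard formulation, not quoted from the abstract), the Born approximation mentioned above linearizes the data in the reflectivity r:
\[
d(x_s, x_r, \omega) \;\approx\; \omega^{2}\int G_0(x_r, y, \omega)\, r(y)\, G_0(y, x_s, \omega)\,dy,
\]
where G_0 is the Green's function of the smooth background medium and x_s, x_r are source and receiver locations; the reduced order model approach avoids this linearization by working with a data-driven projection of the wave propagator itself.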
Meetings & Workshops
Southern California Applied Mathematics Symposium (SOCAMS 2024)
• Meeting Poster
Past Events
Lunch Seminars
AY 2018/19
Other Seminars
AY 2017/18
Meetings & Workshops