MDS20 Minitutorial: A Mathematical Perspective of Machine Learning by Weinan E
https://sinews.siam.org/Details-Page/mds20-virtual-talks-1
Abstract: The heart of modern machine learning is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality. We will discuss representations and approximations that overcome this difficulty, as well as gradient flows that can be used to find the optimal approximation. We will see that at the continuous level, machine learning consists of a series of reasonably nice variational and PDE-like problems. Modern machine learning models and algorithms, such as the random feature model and shallow/deep neural network models, are all special discretizations of these continuous problems. We will also discuss how to construct new models and algorithms using the same philosophy. At the theoretical level, we will present a framework suited for analyzing machine learning models and algorithms in high dimension, and present results that are free of the curse of dimensionality. Finally, we will discuss the fundamental reasons responsible for the success of modern machine learning, as well as the subtleties and mysteries that still remain to be understood.
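The abstract's remark that the random feature model is one particular discretization of a continuous approximation problem can be illustrated with a minimal sketch: random inner weights and biases are drawn once and frozen, and only the outer coefficients are fit. The target function, feature count, and all parameter choices below are illustrative assumptions, not taken from the talk.

```python
# Minimal random feature model: f(x) ≈ sum_j a_j * relu(w_j * x + b_j),
# with w_j, b_j sampled once and held fixed, and a_j fit by least squares.
import numpy as np

rng = np.random.default_rng(0)
m = 200                                  # number of random features (assumed)
x = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
y = np.sin(np.pi * x).ravel()            # toy 1-D target function (assumed)

w = rng.normal(size=(1, m))              # fixed random inner weights
b = rng.uniform(-1.0, 1.0, size=m)       # fixed random biases
phi = np.maximum(w * x + b, 0.0)         # ReLU random features, shape (400, m)

a, *_ = np.linalg.lstsq(phi, y, rcond=None)  # fit outer coefficients only
err = np.max(np.abs(phi @ a - y))
print(f"max abs approximation error: {err:.4f}")
```

Because the inner parameters are frozen, the fit is a linear least-squares problem in the coefficients, which is what makes this the simplest discretization of the continuous formulation the talk describes.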
Weinan E, Princeton University, U.S., [email protected]
This is one of six minitutorial talks organized by Carola-Bibiane Schönlieb (University of Cambridge, United Kingdom) and Ozan Öktem (KTH Stockholm, Sweden) under the title “Deep Learning for Inverse Problems and Partial Differential Equations” as part of the 2020 SIAM Conference on Mathematics of Data Science. For more information, visit https://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=68215.