Title: Convergence Theory of Deep Neural Networks
Speaker: Prof. Haizhang Zhang, Sun Yat-sen University
Time: Friday, June 24, 2022, 15:00-16:30
Tencent Meeting: 640-148-252
Host: Rongrong Lin
Abstract: In the past few years, deep learning has achieved great successes for a wide range of machine learning problems, including face recognition, speech recognition, game intelligence, natural language processing, and autonomous navigation. Compared to the achievements in engineering and applications, research on the mathematical theory of deep neural networks is still in its infancy. So far, most mathematical studies of the nonlinear function representation system of deep neural networks have focused on the expressive power of the system. Little attention has been paid to the relationship between the parameters (weight matrices and bias vectors) and the convergence, or the convergence rate, of the network.
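For concreteness, a deep ReLU network of depth $n$ may be written as a composition (a standard formulation given here for illustration; the precise setting considered in the talk may differ):

$$\mathcal{N}_n(x) = \sigma\bigl(W_n\,\sigma(W_{n-1}\cdots\sigma(W_1 x + b_1)\cdots + b_{n-1}) + b_n\bigr), \qquad \sigma(t) = \max(t,0) \ \text{applied componentwise},$$

where the $W_k$ are the weight matrices and the $b_k$ the bias vectors. The convergence question asks under what conditions on the parameters $\{(W_k, b_k)\}_{k\ge 1}$ the functions $\mathcal{N}_n$ converge as the depth $n \to \infty$.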
On the other hand, the convergence of a function representation system in terms of its parameters has always been a fundamental problem in pure and applied mathematics. A celebrated example is the Carleson theorem, which states that a Fourier series converges almost everywhere if its coefficients are square-summable. We aim to establish a convergence theory that characterizes the convergence and the convergence rate of a deep neural network in terms of its parameters. In this talk, we present results for deep ReLU networks and deep convolutional neural networks. The talk is based on joint work with Prof. Yuesheng Xu and my PhD student Wentao Huang.
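For reference, Carleson's theorem in the form alluded to above: if $\sum_{k \in \mathbb{Z}} |c_k|^2 < \infty$, then the Fourier series

$$\sum_{k \in \mathbb{Z}} c_k e^{ikx}$$

converges for almost every $x$; equivalently, the Fourier series of every $f \in L^2$ converges to $f$ almost everywhere.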
Speaker bio: Haizhang Zhang is a professor at the School of Mathematics, Sun Yat-sen University (Zhuhai Campus). His research interests include learning theory, applied harmonic analysis, and function approximation. His representative results include a Weierstrass approximation theorem for reproducing kernels and the theory of reproducing kernel Banach spaces, which he pioneered internationally. A classification method in psychology based on reproducing kernel Banach spaces was selected for inclusion in the New Handbook of Mathematical Psychology published by Cambridge University Press. He has published original work in Journal of Machine Learning Research, Applied and Computational Harmonic Analysis, Neural Computation, Neurocomputing, Journal of Approximation Theory, and other journals, with his most cited single paper receiving more than 200 citations by others. He has served as principal investigator on four national grants, including an Excellent Young Scientists Fund.