Deep neural networks obtain impressive results in image, sound, and language recognition, and in addressing complex problems in physics. They are partly responsible for the resurgence of artificial intelligence. Yet we do not understand why they work so well, or why they sometimes fail, which raises many problems of robustness and explainability.
Recognizing or classifying data amounts to approximating phenomena that depend on a very large number of variables. The combinatorial explosion of possibilities can make such problems intractable: one can learn from data only if the problem is highly structured. Deep neural networks appear to take advantage of the existence of such structures, whose nature seems to be similar across a wide range of applications. Understanding this "architecture of complexity" involves many branches of mathematics and is related to open questions in physics. I will discuss some approaches and show applications.
Dr. Mallat is known for his fundamental work in wavelet theory, which has had major impact in signal processing, music synthesis, and image segmentation. He is a member of the French Academy of Sciences and the US National Academy of Engineering.