Head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin
Wojciech Samek is head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied computer science at Humboldt University of Berlin from 2004 to 2010, was a visiting researcher at NASA Ames Research Center, CA, USA, and received the Ph.D. degree in machine learning from the Technical University of Berlin in 2014. He is a principal investigator at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), a member of the European Lab for Learning and Intelligent Systems (ELLIS), and an associated faculty member of the DFG graduate school BIOQIC.
Furthermore, he is an editorial board member of DSP, PLoS ONE and IEEE TNNLS, and an elected member of the IEEE MLSP Technical Committee. He is part of various standardization initiatives, including the MPEG-7 standardization on the compression of neural networks. He has organized special sessions, workshops and tutorials at top-tier machine learning and signal processing conferences (NIPS, CVPR, ICASSP, ICIP, EUSIPCO, ICANN, MICCAI), has received multiple best paper awards, and has co-authored more than 100 journal and conference papers, predominantly in the areas of deep learning, interpretable machine learning, neural network compression and federated learning.
He is also co-editor of the book “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” (Springer LNCS, 2019).
Explainable AI: Methods, Applications & Extensions
Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. The rise of deep nonlinear ML models has brought substantial gains in predictive accuracy. Yet, we do not want such high accuracy to come at the expense of explainability. As a result, the field of Explainable AI (XAI) has emerged and has produced a collection of methods capable of explaining complex and diverse ML models.
This lecture gives a structured overview of the basic approaches that have been proposed for XAI. In particular, we present the motivations for such methods, their advantages and disadvantages, and their theoretical underpinnings. We also show how they can be extended and applied so that they deliver maximum usefulness in real-world scenarios.