Andrew David Bagdanov
Information Engineering Department (DINFO), University of Florence
Andrew D. Bagdanov received a PhD in computer science from the University of Amsterdam in 2004. He held postdoctoral positions at the University of Florence and Universitat Autònoma de Barcelona.
He was Senior Development Chief at the FAO of the United Nations, Head of the Research Unit at the Media Integration and Communication Center, and a Ramón y Cajal Fellow at the Computer Vision Center in Barcelona.
Since 2016 he has been an Associate Professor in the Information Engineering Department at the Università degli Studi di Firenze. His research spans a broad spectrum of computer vision, image processing, and machine learning. He has held senior professional and academic positions in four countries and has published over one hundred scientific articles in peer-reviewed international journals and conference proceedings.
Lecture:
Self-supervised Learning: Getting More for Less out of your CNNs
Abstract:
Computer vision has undergone major transformations over the past five years.
The advent of viable deep neural network architectures for visual recognition continues to radically reshape the state of the art in computer vision. These developments have brought amazing new possibilities, and formidable new challenges, to the table. In this two-hour lecture I will discuss a recent line of research addressing the long-term sustainability and practical applicability of Convolutional Neural Networks (CNNs) for visual recognition. In particular, we will see how self-supervised learning approaches can be designed and implemented to mitigate the data-hungry nature of CNNs and dramatically reduce the burden of supervision.
We will take an in-depth look at the design of proxy tasks for self-supervision and at how they can be exploited to learn semantically meaningful visual representations for recognition problems. Then we will see how to apply self-supervised learning to practical problems in niche domains where labeled data is scarce and expensive or laborious to collect.
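To make the proxy-task idea concrete, here is a minimal, illustrative PyTorch sketch of one well-known proxy task, rotation prediction (in the spirit of Gidaris et al.'s RotNet). The tiny backbone, hyperparameters, and random tensors are placeholders for illustration only, not the methods covered in the lecture.

import torch
import torch.nn as nn

def make_rotation_batch(images):
    # Build pseudo-labels from the data itself: each image is rotated by
    # 0, 90, 180, or 270 degrees, and the rotation index is the "free" label.
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# A deliberately tiny CNN backbone; any standard architecture could be used.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)  # 4-way classifier over the possible rotations

params = list(backbone.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)
criterion = nn.CrossEntropyLoss()

# One training step on an unlabeled batch: the supervision comes for free.
images = torch.randn(8, 3, 32, 32)  # stand-in for unlabeled images
inputs, targets = make_rotation_batch(images)
loss = criterion(head(backbone(inputs)), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Solving the rotation task well requires the network to recognize object structure and orientation, which is why the learned features transfer to downstream recognition problems.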
Self-supervised learning, with carefully crafted proxy tasks, can address critical issues in training and deployment of state-of-the-art CNNs and enable one to get more from CNNs for less — that is, to obtain state-of-the-art results on multiple visual recognition tasks with fewer labeled training examples.
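As a follow-up sketch of the "more for less" idea, this is one way the self-supervised backbone might then be reused on a small labeled set: the pretrained features are frozen and only a lightweight classifier is trained. Again, the model, class count, and data below are hypothetical placeholders.

import torch
import torch.nn as nn

# Stand-in for a CNN backbone pretrained with a self-supervised proxy task
# (e.g., the rotation-prediction sketch above).
backbone = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained representation

classifier = nn.Linear(32, 10)  # assume 10 target classes in the niche domain
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A deliberately small labeled set: the premise of self-supervision is that
# this expensive, hand-labeled portion can be kept small.
few_images = torch.randn(16, 3, 32, 32)
few_labels = torch.randint(0, 10, (16,))

loss = criterion(classifier(backbone(few_images)), few_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()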