Martin Danelljan

Group leader and Lecturer, ETH Zurich

Martin Danelljan is a group leader and lecturer at ETH Zurich, Switzerland. He received his Ph.D. degree from Linköping University, Sweden in 2018. His Ph.D. thesis was awarded the biennial Best Nordic
Thesis Prize at SCIA 2019. He has an h-index of 42 and over 22k citations. His research interests include tracking, segmentation, deep probabilistic models, and image enhancement and generation. His research in the field of visual tracking, in particular, has attracted much attention, achieving first rank in the 2014, 2016, and 2017 editions of the Visual Object Tracking (VOT) Challenge and the OpenCV State-of-the-Art Vision Challenge. He received the best paper award at ICPR 2016, the best student paper award at BMVC 2019, and an outstanding reviewer award at ECCV 2020. He is also a co-organizer of the VOT, NTIRE, and AIM workshops. He served as an Area Chair for CVPR 2022 and as a Senior PC member for AAAI 2022.

Tracking and segmentation in videos

Video object tracking is one of the fundamental problems in computer vision. While humans excel at this task, performing accurate and robust tracking with little effort, it has proven difficult to automate, and it therefore remains one of the longest-standing and most active topics in the field. Object tracking comprises a diverse family of tasks. Multiple object tracking (MOT) aims to track all objects belonging to a given set of object classes. Visual object tracking (VOT) instead focuses on tracking any object identified by the user, which requires strong few-shot learning capabilities. Recently, the open-vocabulary and open-world tracking tasks have extended these paradigms by allowing the user to specify the types of objects to track with free-form textual descriptions. Lastly, the video object and instance segmentation tasks further introduce segmentation masks to represent the object state in each frame. In this talk, we will cover these topics in video object tracking and discuss the most important challenges and current research directions.
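The few-shot, user-initialized setting of VOT described above can be illustrated with a deliberately naive template-matching tracker: the user annotates the target in one frame, and the tracker predicts its box in every subsequent frame. This is only a toy sketch of the protocol, not the speaker's method; all names and the synthetic sequence here are illustrative.

```python
import numpy as np

class ToyTemplateTracker:
    """Illustrative single-object tracker following the common VOT protocol:
    one annotated frame initializes the target model (the few-shot step),
    then update() predicts the target box in each new frame."""

    def init(self, frame, box):
        # box = (x, y, w, h); store the target appearance as a raw template
        x, y, w, h = box
        self.template = frame[y:y+h, x:x+w].astype(float)
        self.box = box

    def update(self, frame):
        # Sum-of-squared-differences template matching in a small search
        # window -- a stand-in for the learned appearance models used in
        # modern trackers.
        x0, y0, w, h = self.box
        H, W = frame.shape[:2]
        best_cost, best_xy = np.inf, (x0, y0)
        for y in range(max(0, y0 - 2), min(H - h, y0 + 2) + 1):
            for x in range(max(0, x0 - 2), min(W - w, x0 + 2) + 1):
                cost = np.sum((frame[y:y+h, x:x+w] - self.template) ** 2)
                if cost < best_cost:
                    best_cost, best_xy = cost, (x, y)
        self.box = (*best_xy, w, h)
        return self.box

# Synthetic sequence: a bright 3x3 square moving one pixel right per frame.
frames = []
for t in range(4):
    f = np.zeros((12, 12))
    f[4:7, 2 + t:5 + t] = 1.0
    frames.append(f)

tracker = ToyTemplateTracker()
tracker.init(frames[0], (2, 4, 3, 3))          # user-provided first-frame box
boxes = [tracker.update(f) for f in frames[1:]]
print(boxes)  # → [(3, 4, 3, 3), (4, 4, 3, 3), (5, 4, 3, 3)]
```

The same init/update interface extends naturally to the segmentation variants mentioned above by replacing the box with a mask as the target-state representation.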
