Winter School

NLDL 2023 Winter School

The winter school will consist of tutorials by experts in the field and is co-hosted by NORA as part of the NORA Research School. A preliminary program is provided below.

Note: UiT The Arctic University of Norway will award 5 ECTS credits for the Winter School. In order to receive the credits, participants will have to register at UiT before the 1st of December. More information on this can be found in the Registration form.


Course Description

Deep learning is a rapidly growing segment of machine learning. It is increasingly used to deliver near-human level accuracy for many tasks such as image classification, voice recognition and natural language processing. Application areas include facial recognition, scene detection, advanced medical and pharmaceutical research, and autonomous, self-driving vehicles.

This 5-day course will provide a basic understanding of the techniques used in this field and will review recent developments, covering, among other topics, deep learning for: Reinforcement Learning, Self-Supervised Learning, Explainability, Computer Vision, and Natural Language Processing.

The course will further include a practical component, in which students learn how the national computing resources can be used for deep learning and how these techniques can be implemented.


Preliminary Program

Monday January 9th:

09:00: Registration

09:15: Opening

09:30: A gentle introduction to Deep Reinforcement Learning (Rudolf Mester and Even Klemsdal, NTNU Trondheim)

Reinforcement Learning has emerged as a powerful technique in modern machine learning, allowing a system to learn through trial and error. We will introduce some fundamental principles upon which this family of methods is based. In the first part, we give a summary of classical reinforcement learning and thus provide the essentials for understanding Deep Reinforcement Learning (DRL). In the second part, we shift the focus onto DRL and look at how deep learning enhances classical reinforcement learning, leading to powerful new algorithms. These algorithms include DQN playing Atari games at a superhuman level, PPO solving robot locomotion problems, and AlphaZero learning to play Go, Chess, and Shogi. In the limited time of such a mini-tutorial, we can of course only sketch the main characteristics, but we intend to motivate participants to look deeper into this fascinating family of methods.
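The trial-and-error idea behind these methods can be illustrated with a minimal tabular Q-learning sketch (the chain environment and all hyperparameters below are invented for illustration and are not part of the tutorial material; DQN replaces the table with a neural network):

```python
import random

random.seed(0)

N_STATES = 5          # chain of states 0..4; state 4 is terminal and rewarded
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.5   # high exploration rate for this tiny problem

# Q-table: Q[state][action_index]
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic chain dynamics: walls at both ends, reward 1 at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                  # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning (temporal-difference) update
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# Greedy policy read off the Q-table: action index per non-terminal state
greedy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(greedy)
```

After training, the greedy policy moves right in every state, i.e. toward the rewarded end of the chain.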

11:00: Self-Supervised Learning: Training Targets and Loss Functions (Zheng-Hua Tan, Aalborg University)

Humans learn much under supervision, but even more without. The same will apply to machines. Self-supervised learning is paving the way by leveraging unlabeled data, which is available in vast quantities. In this emerging learning paradigm, deep representation models are trained by supervised learning, with supervisory signals (i.e., training targets) derived automatically from the unlabeled data itself. It is abundantly clear that such learnt representations can be useful for a broad spectrum of downstream tasks while requiring significantly less labeled data.

As in supervised learning, key considerations in devising self-supervised learning methods include training targets and loss functions. The difference is that training targets for self-supervised learning are not pre-defined and depend greatly on the choice of pretext tasks. This leads to a variety of novel training targets and their corresponding loss functions. This tutorial aims to provide an overview of training targets and loss functions developed in the domains of speech, vision and text. Further, we will discuss some open questions, e.g., transferability, and under-explored problems, e.g., learning across modalities. For example, the pretext tasks, and thus the training targets, can be drastically distant from the downstream tasks. This raises questions such as how transferable the learnt representations are, how to choose training targets and how to explore representations. We will also discuss the link between semi-supervised and self-supervised learning.
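As one concrete example of such a loss, the widely used InfoNCE contrastive objective scores an anchor embedding against a positive (another view of the same example) and several negatives; the sketch below uses invented toy vectors, not any real model's embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: negative log-softmax of the positive pair's similarity
    against all candidates (the positive plus every negative)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilised log-sum-exp
    log_norm = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - log_norm)

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]                  # a second "view" of the same example
negatives = [[0.0, 1.0], [-1.0, 0.2]]   # views of other examples

# Low loss when the positive is close to the anchor...
loss_good = info_nce(anchor, positive, negatives)
# ...high loss when a dissimilar vector is treated as the positive.
loss_bad = info_nce(anchor, [0.0, 1.0], [positive, [-1.0, 0.2]])
print(loss_good, loss_bad)
```

Minimising this loss pulls the two views of an example together while pushing other examples away, which is exactly a training target derived from the unlabeled data itself.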

12:15: Lunch

13:30: The Challenge of Unverifiability in eXplainable AI Evaluation (Anna Hedström, TU Berlin)

In recent years, interest in eXplainable AI (XAI) techniques has undoubtedly exploded, and countless methods and toolboxes are now at the disposal of XAI researchers, ML practitioners and data scientists alike. Despite my shared enthusiasm for this productive development, until recently the topic of XAI evaluation was grossly understudied, which caused confusion about which explanation methods work and under what conditions. In this tutorial, we will take an in-depth look at some of the most recent developments in XAI evaluation and review the solutions that the community has put forward to circumvent the fundamental problem of XAI: the lack of ground-truth explanations. In addition, we will discuss the Challenge of Unverifiability, i.e., why evaluation is so difficult to get right (and what we can do about it). At the end of the tutorial, a brief yet hands-on introduction will be given to Quantus, an open-source library intended for XAI researchers to evaluate local neural network explanations.
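As a schematic illustration of how explanations can be checked without ground truth, the sketch below implements a toy perturbation-based ("pixel-flipping") test on a hand-made linear model; everything here (model, attribution, numbers) is invented for illustration and is not the Quantus API:

```python
# A toy "model": a linear scoring function over four input features.
WEIGHTS = [3.0, 2.0, 0.5, 0.0]

def model(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def attribution(x):
    # "Gradient x input" attribution; exact for a linear model.
    return [w * xi for w, xi in zip(WEIGHTS, x)]

def pixel_flipping(x, baseline=0.0):
    """Remove features from most to least attributed, recording the model
    score after each removal. If the explanation is faithful, the score
    drops fastest at the start -- no ground-truth explanation required."""
    order = sorted(range(len(x)), key=lambda i: -attribution(x)[i])
    scores = [model(x)]
    x = list(x)
    for i in order:
        x[i] = baseline
        scores.append(model(x))
    return scores

curve = pixel_flipping([1.0, 1.0, 1.0, 1.0])
print(curve)  # model scores after removing 0, 1, 2, 3, 4 features
```

The monotonically decaying curve is the faithfulness signal: a bad attribution order would leave the score high for longer.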

15:00: Tutorial 4 (Line Clemmensen, Technical University of Denmark)

16:30: Early career presentations/posters + mingling & finger food


January 10th-12th: Main Conference Program


Friday January 13th:

09:00: Tutorial 6 (NRIS): High Performance Computing for Deep Learning Part 1

The tutorial will contain the following topics:

  • Transition from a laptop to a central server to a compute cluster.

  • The SLURM queue manager.

  • Software access on a shared resource.

  • Example job scripts.

  • Understanding resources.
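A minimal SLURM batch script along these lines might look as follows; the account name, module name, and resource numbers below are placeholders that vary from cluster to cluster:

```shell
#!/bin/bash
#SBATCH --job-name=dl-train       # illustrative job name
#SBATCH --account=nnXXXXk         # project account (placeholder)
#SBATCH --time=01:00:00           # wall-clock limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G

# Access software on the shared resource via environment modules
# (module names differ per system).
module load Python/3.10

python train.py
```

Submitted with `sbatch job.sh` and monitored with `squeue`, this is the basic pattern the tutorial's example scripts build on.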

10:40: Tutorial 7 (NRIS): High Performance Computing for Deep Learning Part 2

The tutorial will contain the following topics:

  • Introducing the Deep Learning example that will provide the context for this tutorial.

  • How to access GPUs.

  • Data staging.

  • Submit the analysis as an HPC job.

  • Monitor job progress, debug errors and interpret results.

  • Summary.
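A GPU variant with simple data staging might, with the same caveats (partition, module, and path names are cluster-specific placeholders), look like:

```shell
#!/bin/bash
#SBATCH --job-name=dl-gpu
#SBATCH --account=nnXXXXk         # project account (placeholder)
#SBATCH --time=02:00:00
#SBATCH --partition=accel         # GPU partition; name varies per cluster
#SBATCH --gres=gpu:1              # request one GPU

# Data staging: copy the dataset to fast job-local storage before training.
cp -r "$HOME/dataset" "$SCRATCH/"

module load PyTorch/1.13          # module name is illustrative

python train.py --data "$SCRATCH/dataset"

# Copy results back before the job's scratch area is cleaned up.
cp -r "$SCRATCH/results" "$HOME/"
```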

12:00: Lunch

13:00: Topic: Representation learning and learning with few data (Marcus Liwicki, Luleå University of Technology)

Deep Neural Networks are data hungry; they require millions of labelled examples in order to work! --- Really? --- The last decade has shown useful approaches for working with less labelled data, either by having a lot of data from a similar domain or by letting the network learn meaningful representations without explicit supervision. On Monday, you already learned about self-supervised learning. This tutorial first puts it into the general perspective of learning with few data, covering typical transfer learning and auto-encoder approaches as well as perceptual loss. Furthermore, the tutorial will investigate some typical (mis-)conceptions about these methods and suggest practical tips on how to learn with few data. By participating in this tutorial, you will get deep insights into representation learning and learning with few data, as well as practical tools to start working on data in your own domain.
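The transfer-learning idea of reusing a frozen representation and training only a small head on a handful of labels can be sketched as follows; the "encoder" here is a hand-made stand-in for a pretrained network, and all data are invented:

```python
import math

# Stand-in for a frozen pretrained encoder mapping raw 2-D inputs to features.
# (In practice this would be a deep network trained on a large source dataset.)
def encoder(x):
    return [x[0], x[1], x[0] * x[1]]

# A handful of labeled examples in the target domain
# (label 1 when the two coordinates share a sign).
data = [([1.0, 1.0], 1), ([2.0, 1.5], 1), ([-1.0, -1.0], 1),
        ([1.0, -1.0], 0), ([-2.0, 1.0], 0), ([1.5, -0.5], 0)]

# Train only a linear "probe" on the frozen features (logistic regression).
w = [0.0, 0.0, 0.0]
b = 0.0
LR = 0.5
for _ in range(200):
    for x, y in data:
        f = encoder(x)
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                      # gradient of the cross-entropy loss
        w = [wi - LR * g * fi for wi, fi in zip(w, f)]
        b -= LR * g

correct = sum(
    ((sum(wi * fi for wi, fi in zip(w, encoder(x))) + b > 0) == bool(y))
    for x, y in data
)
print(correct, "of", len(data), "few-shot examples classified correctly")
```

The task is not linearly separable in the raw inputs, but the frozen features make it so; only the tiny probe needs labeled data.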


Tutorial Speakers

Zheng-Hua Tan

Zheng-Hua Tan is a Professor in the Department of Electronic Systems and a Co-Head of the Centre for Acoustic Signal Processing Research at Aalborg University, Aalborg, Denmark. He is also a Co-Lead of the Pioneer Centre for AI, Denmark. He was a Visiting Scientist at the Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, USA, an Associate Professor at the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China, and a postdoctoral fellow at the AI Laboratory, KAIST, Daejeon, Korea. His research interests are centred around deep representation learning and generally include machine learning, deep learning, speech and speaker recognition, noise-robust speech processing, and multimodal signal processing. He is the Chair of the IEEE Signal Processing Society Machine Learning for Signal Processing Technical Committee (MLSP TC). He serves on the Conferences Board and the Technical Directions Board of the IEEE Signal Processing Society. He is an Associate Editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing. He has served as an Associate/Guest Editor for several other journals. He was the General Chair for IEEE MLSP 2018 and a TPC Co-Chair for IEEE SLT 2016.

Rudolf Mester

Rudolf Mester has been head of the Visual Sensorics and Information Processing Lab at Goethe University, Frankfurt, since 2004, after having held positions in the Physics department of U Frankfurt and in Bosch Corporate Research. Since October 2018, he has been with the Computer Science Dept. (IDI) at NTNU Trondheim as a Full Professor (DNV GL endowment) and a member of the Norwegian Open AI Lab. In the recent decade, he has led research initiatives and projects on intelligent visual vehicle sensorics, applications of AI in mobile systems, visual surround sensing for autonomous driving, and machine learning for ‘trainable vehicles’ and vehicle control. Besides these application-oriented topics, he performs fundamental research in the performance analysis of AI methods, the assurance of machine learning algorithms, and the foundations of robust and reliable perception and planning algorithms building on estimation theory, control theory, and machine learning. At NTNU, Rudolf Mester focuses on performance, confidence quantification, and assurance of AI/machine learning methods, on classical and ML-based perception approaches for intelligent machines, and on autonomous systems, on the road, on and under water, and in the air.

Marcus Liwicki

Marcus Liwicki received his M.S. degree in Computer Science from the Free University of Berlin, Germany, in 2004, his PhD degree from the University of Bern, Switzerland, in 2007, and his habilitation degree at the Technical University of Kaiserslautern, Germany, in 2011. Currently, he is a chaired professor in Machine Learning and vice-rector for AI at Luleå University of Technology. His research interests include machine learning, pattern recognition, artificial intelligence, human computer interaction, digital humanities, knowledge management, ubiquitous intuitive input devices, document analysis, and graph matching. From October 2009 to March 2010 he visited Kyushu University (Fukuoka, Japan) as a research fellow (visiting professor), supported by the Japanese Society for the Promotion of Science. In 2015, at the young age of 32, he received the ICDAR Young Investigator Award, a biennial award acknowledging outstanding achievements in pattern recognition by researchers up to the age of 40.

Anna Hedström

Anna Hedström is currently pursuing her PhD at the Department of Machine Learning at TU Berlin and is part of the independent research group Understandable Machine Intelligence Lab (UMI Lab), which focuses on explainable AI. She received her Master’s degree in Machine Learning at the Royal Institute of Technology (KTH) and studied engineering for her bachelor’s at University College London (UCL). Her current research interests include Explainable AI (XAI), Deep Learning and, in particular, the evaluation of XAI methods. She is the main developer of the popular open-source library Quantus, a toolkit to evaluate local neural network explanations. She has published at AAAI and is an invited reviewer for prestigious journals such as IEEE TNNLS. Previously, she held teaching and research assistantship positions and worked as a data scientist/programmer at companies such as Klarna, Bosch Software Innovations, BCG and other ML start-ups. She is also a co-organizer of Women in Machine Learning and Data Science (WiMLDS) meet-ups in Berlin.

Line Clemmensen

Line Katrine Harder Clemmensen received the MSc degree in applied mathematics in 2004 and the PhD degree in statistical image analysis in 2010, both from the Technical University of Denmark (DTU). She has previously worked for Mærsk Transport and Logistics as a Principal Data Scientist and was appointed Associate Professor at the Department of Applied Mathematics and Computer Science at DTU in 2013. Her research interests include machine learning and deep learning methodology with interests in explainable and fair AI and AI evaluation.