In environmental modeling, we have access to a multitude of data sources of different types, resolutions, and temporal granularities. How to integrate them in a single framework is still an open question.
In this presentation, Devis Tuia will present recent advances in environmental data analysis based on the integration of multiple data streams, synergized through masked autoencoders. The presented models, SMARTIES and MASKSDM, tackle different challenges, from the classification and segmentation of remote sensing data to the prediction of species suitability maps, an open problem in ecology.
About the speaker
Devis Tuia completed his PhD at the University of Lausanne, Switzerland, where he studied kernel methods for hyperspectral satellite data. He then traveled the world as a postdoc, first at the University of València, then at CU Boulder, and finally back at EPFL. In 2014, he became an assistant professor at the University of Zurich, and in 2017 he moved to Wageningen University in the Netherlands, where he chaired the Geo-Information Science and Remote Sensing Laboratory. Since September 2020, he has been back at EPFL, where he leads the Environmental Computational Science and Earth Observation (ECEO) laboratory in Sion. There, he studies the Earth from above with machine learning and computer vision.
Is self-supervised learning still used and useful, or can we stop thinking about it?
In this talk, we will explore how self-supervised learning has become 'just another' tool in the deep learning toolbox and where it is headed next, covering recent innovations in pre- and post-training for images, videos, image-text, and point cloud data.
About the speaker
Yuki M. Asano leads the Fundamental AI (FunAI) Lab at the University of Technology Nuremberg as a full professor, having previously led the QUVA lab at the University of Amsterdam, where he collaborated with Qualcomm AI Research. He completed his PhD at Oxford's Visual Geometry Group (VGG), working with Andrea Vedaldi and Christian Rupprecht. His lab conducts research at the cutting edge of computer vision and machine learning, particularly self-supervised and multimodal learning. He recently promoted an ELLIS scholar, has served as area chair and senior area chair for top conferences including NeurIPS, ICLR, and CVPR, and organizes workshops and PhD schools, including the ELLIS winter school on Foundation Models.
Understanding why machine learning models make predictions, cluster assignments, or outlier scorings is essential for building trustworthy systems.
In this keynote, I highlight recent work in Explainable AI (XAI) on robust and faithful explanation methods that handle complex learned decision boundaries, exploiting model structure in both supervised and unsupervised settings.
By connecting these perspectives, I discuss common challenges in explaining model behavior that require structure-aware solutions for reliable explanations.
About the speaker
Ira Assent is a full professor of computer science at Aarhus University, Denmark, where she heads the Data-Intensive Systems research group and the Big Data Analysis research within the DIGIT Aarhus University Centre. She is a co-lead of the Collaboratory Causality & Explainability in the Pioneer Center for Artificial Intelligence Denmark, and a (part-time) director of the Institute for Advanced Simulation, Data Analytics and Machine Learning (IAS-8) at the Forschungszentrum Juelich, Germany. She received her PhD from RWTH Aachen University, Germany, in 2008. Her research interests include explainable AI, unsupervised and supervised learning, and efficient and scalable algorithms for data analysis and data management.
This talk explores how machine learning and deep learning are being applied to real-world challenges at Equinor, Norway’s largest energy company. Moving beyond theoretical concepts, we'll examine actual implementations of deep learning models in industrial settings, showing how these technologies solve practical problems at scale.
We'll dive into specific use cases, demonstrating not just the models themselves, but how they're integrated into complete systems, from data pipelines to production infrastructure. The journey from concept to deployment will be covered through the lens of Technology Readiness Levels (TRL), highlighting the practical considerations of moving AI from prototypes to operational environments.
Special attention will be given to how we incorporate responsibility by design for our industrial AI applications, where safety and reliability are paramount. Through concrete examples, attendees will gain insight into what it really takes to deploy deep learning solutions in a large-scale industrial context, including the challenges faced and lessons learned along the way.
About the speaker
Shaheen Syed is Task Lead Computer Vision and Principal Data Scientist at Equinor, Norway's largest energy company, where he has been leading computer vision initiatives since 2022. He has led several projects from prototype to production deployment, developing and applying machine learning methods for autonomous inspection and anomaly detection using computer vision and robotics. After completing his PhD at Utrecht University in the Netherlands in 2019, where he specialized in natural language processing and machine learning, he moved to Norway as a researcher at the University of Tromsø, working on time-series analysis for physical activity research. He then became a postdoc at Nofima, a research institute in Tromsø, where he developed deep learning models for MRI and hyperspectral data. Following his academic career, he transitioned to industry to apply his expertise in machine learning and computer vision to real-world challenges in the energy sector.