Keynotes

Mark Girolami

Mark Girolami is a computational statistician with ten years' experience as a Chartered Engineer at IBM. In March 2019 he was elected to the Sir Kirby Laing Professorship of Civil Engineering (1965) in the Department of Engineering at the University of Cambridge, where he also holds the Royal Academy of Engineering Research Chair in Data Centric Engineering. Girolami took up the Sir Kirby Laing Chair upon the retirement of Professor Lord Robert Mair. Professor Girolami is a fellow of Christ's College, Cambridge.

Prior to joining the University of Cambridge, Professor Girolami held the Chair of Statistics in the Department of Mathematics at Imperial College London. He was one of the original founding Executive Directors of the Alan Turing Institute, the UK's national institute for Data Science and Artificial Intelligence, after which he was appointed Strategic Programme Director at the Institute, where he established and led the Lloyd's Register Foundation Programme on Data Centric Engineering. Since October 2021 he has served as the Chief Scientist of the Alan Turing Institute.

Professor Girolami is an elected fellow of the Royal Society of Edinburgh. He was an EPSRC Advanced Research Fellow (2007-2012) and an EPSRC Established Career Research Fellow (2012-2018), and is a recipient of a Royal Society Wolfson Research Merit Award.

He delivered the IMS Medallion Lecture at the Joint Statistical Meeting 2017, and the Bernoulli Society Forum Lecture at the European Meeting of Statisticians 2017.

In 2020 Professor Girolami delivered the BCS and IET Turing Talk in London, Manchester, and Belfast.

Professor Girolami currently serves as Editor-in-Chief of Statistics and Computing and of the new open-access journal Data Centric Engineering, published by Cambridge University Press.

Title: The Statistical Finite Element Method 

Abstract: The finite element method (FEM) is one of the great triumphs of applied mathematics, numerical analysis and software development. Recent developments in sensor and signalling technologies enable the phenomenological study of complex natural and physical systems. To date, the connection between sensor data and the FEM has been restricted to solving inverse problems, placing unwarranted faith in the fidelity of the mathematical description of the system under study. If one concedes mis-specification between generative reality and the FEM, then a framework is required to systematically characterise this uncertainty. This talk will present a statistical construction of the FEM which systematically blends the mathematical description with data observations by endowing the Hilbert space of FEM solutions with the additional structure of a probability measure.
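To give a flavour of the idea, here is a minimal sketch (assumptions entirely ours, not the speaker's implementation): a 1-D Poisson problem is solved with linear finite elements, the FEM solution is treated as the mean of a Gaussian measure, and that measure is conditioned on synthetic noisy sensor readings via standard Gaussian conditioning. The prior covariance and noise level below are illustrative choices only.

```python
import numpy as np

# Toy statFEM-style sketch (illustrative only): -u'' = f on (0, 1),
# u(0) = u(1) = 0, discretised with linear finite elements.

n = 50                                   # interior nodes
h = 1.0 / (n + 1)

# Stiffness matrix for linear elements, constant load f = 1
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = h * np.ones(n)

u_fem = np.linalg.solve(A, b)            # deterministic FEM solution (prior mean)
prior_cov = 0.01 * np.linalg.inv(A)      # one simple choice of prior covariance

# Synthetic noisy sensor observations at three nodes
obs = np.array([10, 25, 40])
H = np.zeros((len(obs), n))
H[np.arange(len(obs)), obs] = 1.0
noise_var = 1e-4
rng = np.random.default_rng(0)
y = u_fem[obs] + rng.normal(0.0, np.sqrt(noise_var), len(obs))

# Standard Gaussian conditioning blends model and data: the posterior mean
# moves toward the observations, and the variance shrinks where sensors sit.
S = H @ prior_cov @ H.T + noise_var * np.eye(len(obs))
K = prior_cov @ H.T @ np.linalg.inv(S)
u_post = u_fem + K @ (y - H @ u_fem)
cov_post = prior_cov - K @ H @ prior_cov
```

The posterior pair (u_post, cov_post) is the data-updated solution: uncertainty is reduced near the sensors and the deterministic FEM solution is recovered in the no-data limit.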

Narges Razavian

Narges Razavian is an assistant professor in the Departments of Population Health and Radiology at NYU Langone, conducting research in the Center for Healthcare Innovation and Delivery Science (CHIDS), where she is a member of its Predictive Analytics Unit.

Her lab's research focuses on the intersection of machine learning, artificial intelligence, and medicine. Using millions of records from the Electronic Health Records database at NYU Langone, as well as hundreds of thousands of images and millions of genomic data points, the lab works on a number of important topics, including: prediction of upcoming preventable conditions and events using machine learning and data science; discovery of disease subtypes using radiology and pathology imaging and electronic records; discovery of existing but undiagnosed medical conditions using electronic health records; and discovery of biomarkers and factors associated with important outcomes.

Title: Fair Self-Supervised Learning in Multiple Modalities (Imaging, EHR, and in Combination) with Applications to Medicine

Abstract: Recent progress in self-supervised learning (SSL), together with the availability of large clinical datasets comprising millions of medical imaging records and electronic health records (EHRs), provides an untapped opportunity to improve the representation learning that is crucial to almost all medical predictive models. Learning fair and strong representations via SSL is still an under-explored and challenging problem, and the issue is especially important in medicine, where the amount of available data differs across sub-populations. Additionally, working with diverse modalities in medicine (EHRs, 3D imaging, large histopathology images) imposes several unsolved challenges. In this talk, I will share a collection of recent work from my team addressing many of the methodological shortcomings of state-of-the-art methods. Specifically, I will present novel methods to improve self-supervised learning in medical imaging and EHR settings, and novel methods to improve the fairness of SSL-trained models; we will end by discussing future directions.
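For readers unfamiliar with SSL, a hedged sketch of the contrastive objective behind many such methods (generic InfoNCE/NT-Xent, not the speaker's method; all numbers here are illustrative): two augmented "views" of each record are embedded, and the loss pulls paired views together while pushing all other embeddings apart.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss. z1, z2: (batch, dim) L2-normalised
    embeddings of two augmented views; row i of z1 pairs with row i of z2."""
    batch = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)          # (2B, dim)
    sim = z @ z.T / temperature                   # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    # the positive partner of index i is i+B (and of i+B is i)
    pos = np.concatenate([np.arange(batch, 2 * batch), np.arange(batch)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * batch), pos].mean()

# Synthetic demo: a second "view" as a small perturbation of the first
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
z1 = emb / np.linalg.norm(emb, axis=1, keepdims=True)
noisy = emb + 0.1 * rng.normal(size=emb.shape)
z2 = noisy / np.linalg.norm(noisy, axis=1, keepdims=True)
loss = nt_xent_loss(z1, z2)
```

Training an encoder to minimise this loss yields representations without labels; the fairness concern raised in the abstract arises because under-represented sub-populations contribute fewer positive/negative pairs to the objective.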

Gitta Kutyniok

Gitta Kutyniok (https://www.ai.math.lmu.de/kutyniok) currently holds a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, Gitta Kutyniok held an Adjunct Professorship in Machine Learning at the University of Tromsø from 2019 until 2023.

Gitta Kutyniok has received various awards for her research such as an award from the Universität Paderborn in 2003, the Research Prize of the Justus-Liebig Universität Gießen and a Heisenberg-Fellowship in 2006, and the von Kaven Prize by the DFG in 2007. She was invited as the Noether Lecturer at the ÖMG-DMV Congress in 2013, a plenary lecturer at the 8th European Congress of Mathematics (8ECM) in 2021, and the lecturer of the London Mathematical Society (LMS) Invited Lecture Series in 2022. She was also honored by invited lectures at both the International Congress of Mathematicians 2022 (ICM 2022) and the International Congress on Industrial and Applied Mathematics (ICIAM 2023). Moreover, she was elected as a member of the Berlin-Brandenburg Academy of Sciences and Humanities in 2017 and of the European Academy of Sciences in 2022, and became a SIAM Fellow in 2019 and an IEEE Fellow in 2024. She currently acts as LMU-Director of the Konrad Zuse School of Excellence in Reliable AI (relAI) in Munich, serves as Vice President-at-Large of SIAM, and is spokesperson of the DFG-Priority Program "Theoretical Foundations of Deep Learning" and of the AI-HUB@LMU, which is the interdisciplinary platform for research and teaching in AI and data science at LMU. 

Gitta Kutyniok's research work covers, in particular, the areas of applied and computational harmonic analysis, artificial intelligence, compressed sensing, deep learning, imaging sciences, inverse problems, and applications to life sciences, robotics, and telecommunication.

Title: Reliability of Artificial Intelligence: Chances and Challenges

Abstract: Artificial intelligence is currently leading to one breakthrough after the other, both in public life with, for instance, autonomous driving and speech recognition, and in the sciences in areas such as medical imaging or molecular dynamics. However, one current major drawback worldwide, in particular, in light of regulations such as the EU AI Act and the G7 Hiroshima AI Process, is the lack of reliability of such methodologies.

In this talk, we will provide an introduction to this vibrant research area, focussing specifically on deep neural networks. We will discuss the role of a theoretical perspective in this highly topical research direction, and survey the current state of the art in areas such as explainability. Finally, we will also touch upon fundamental limitations of neural network-based approaches.

Aasa Feragen

Aasa Feragen is a rogue mathematician who has worked at the intersection of machine learning and medical imaging since 2009. She is a professor at the Technical University of Denmark. Her MSc and PhD in mathematics are both from the University of Helsinki, following which she held postdocs at the University of Copenhagen and the MPI for Intelligent Systems in Tübingen.

While interpretability and explainability have been at the heart of Aasa's research from the start, her more recent interests include uncertainty modelling and algorithmic fairness. Her work ranges from quantification and communication of uncertainty in brain imaging, via mathematical modelling for algorithmic fairness in medicine, to developing explainable AI algorithms for clinical use.

Title: How do we ensure that Trustworthy AI remains trustworthy?

Abstract: "Trustworthy AI" encompasses a range of approaches designed to promote safe and responsible use of AI. Prominent subfields include algorithmic fairness, explainable AI (XAI), and uncertainty quantification. As new legislation increasingly enforces safe use of AI, we will also see an increased use of trustworthy AI tools to justify and promote AI products. But is "Trustworthy AI" always trustworthy? In this talk we will use technical cases from algorithmic fairness, XAI and uncertainty quantification to discuss potential pitfalls in the use of Trustworthy AI tools, potential safeguards, as well as (interesting!) open problems for our community as we move into an increasingly AI-powered future.