IDA Machine Learning Seminars - Spring 2026

The IDA Machine Learning Seminars are a series of research presentations by nationally and internationally recognized researchers in the field of machine learning.

  • You can subscribe to the email list used for announcing upcoming seminars here.
  • You can subscribe to the seminar series’ calendar using this ics link.

Wednesday, February 11, 2026, at 13:30
VIKING: Deep variational inference with stochastic projections
Samuel Matthiesen, Technical University of Denmark, Cognitive Systems

Abstract: Variational mean-field approximations tend to struggle with contemporary overparameterised deep neural networks. Where a Bayesian treatment is usually associated with high-quality predictions and uncertainties, the practical reality has been the opposite, with unstable training, poor predictive power, and subpar calibration. Building upon recent work on reparameterisations of neural networks, we propose a simple variational family that considers two independent linear subspaces of the parameter space. These represent functional changes inside and outside the support of the training data. This allows us to build a fully correlated approximate posterior that reflects the overparameterisation and requires tuning only easy-to-interpret hyperparameters. We develop scalable numerical routines that maximise the associated evidence lower bound (ELBO) and sample from the approximate posterior. Our results show that approximate Bayesian inference applied to deep neural networks is far from a lost cause when the inference mechanism is constructed to reflect the geometry of reparameterisations.
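For orientation, the evidence lower bound mentioned above takes the standard form (generic notation, not the specific objective presented in the talk):

    \mathcal{L}(q) = \mathbb{E}_{q(\theta)}\!\left[\log p(\mathcal{D} \mid \theta)\right] - \mathrm{KL}\!\left(q(\theta) \,\|\, p(\theta)\right),

where q(\theta) is the variational approximation to the posterior over network parameters; in the setting described in the abstract, q ranges over the subspace-structured family rather than a mean-field one.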

Location: Alan Turing

Organizer: Louis Ohl


Wednesday, March 11, 2026, at 13:30
Why are inverse folding models good zero-shot predictors of protein thermodynamic stability?
Jes Frellsen, Technical University of Denmark, Cognitive Systems

Abstract: Inverse folding models are trained to recover sequences from structures, yet they have emerged as highly effective zero-shot predictors of protein stability. How can we understand this connection? In this talk, I unpack the theoretical assumptions that connect the amino acid preferences of an inverse folding model to the free-energy considerations that govern thermodynamic stability. Drawing on concepts from probability theory and statistical physics, I will show that commonly used heuristics can be interpreted as simplistic approximations and that more principled alternatives empirically yield considerable performance gains.
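As one concrete example of the heuristics referred to in the abstract (illustrative notation, not necessarily the formulation used in the talk): a common zero-shot stability score for a point mutation compares sequence likelihoods under the inverse folding model p_\theta, conditioned on the wild-type structure s,

    \Delta\Delta G_{\mathrm{pred}} \propto \log p_\theta\!\left(x^{\mathrm{mut}} \mid s\right) - \log p_\theta\!\left(x^{\mathrm{wt}} \mid s\right),

with the sign convention depending on whether stabilising mutations are scored as positive or negative.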

Bio: Jes Frellsen is an Associate Professor at the Technical University of Denmark (DTU). He received his PhD in Bioinformatics from the University of Copenhagen. Before joining DTU, he was a postdoc in the Machine Learning Group at the University of Cambridge and an Associate Professor at the IT University of Copenhagen. At DTU, he leads a research group on probabilistic machine learning and generative AI. His methodological contributions include work on missing data, uncertainty quantification, and out-of-distribution detection, with applications in bioinformatics, physics, recommender systems, and remote sensing. He has authored more than 70 research articles and book chapters, many of them published at premier machine learning venues. He contributes actively to the community through conference chairing and as a founding co-organiser of the GeMSS summer school on deep generative models. He was recently awarded the Jorck’s Foundation Research Prize for his contributions to methods for handling incomplete data and enabling the safe and reliable use of AI.

Location: Alan Turing

Organizer: Louis Ohl


Wednesday, April 8, 2026, at 13:30
Learning causally sound and interpretable composite endpoints for clinical trials
Fredrik D. Johansson, Chalmers University of Technology, Healthy AI Lab

Abstract: Randomized clinical trials are considered the gold-standard evidence for learning about the causal effects of medical interventions, but they have natural limitations on scope and length. This often rules out targeting long-term outcomes of interest, such as mortality or cardiovascular disease, as these endpoints will not be observed for most participants within the duration of the trial. Instead, researchers turn to surrogate endpoints that are associated with the primary outcome of interest and can be observed during the trial. This presents a problem: What constitutes a good surrogate? In theory, a good surrogate is one for which the effect of the treatment on the surrogate is predictive of its effect on the primary outcome, but the definition alone does not reveal how to find such a variable. Moreover, to be useful in a clinical trial, the surrogate must be approved by a regulatory body when the trial is registered, which necessitates its interpretability. In this talk, I will discuss the implications of this, algorithms that can provably learn composite surrogates from observational data, and situations where there is no hope of finding a good surrogate.
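One way to make the criterion in the abstract concrete (an illustrative formalisation with assumed notation, not the speaker's definition): writing Y(a) and S(a) for the potential primary outcome and surrogate under treatment a \in \{0, 1\}, a surrogate S is useful if the treatment effect on Y can be recovered from the treatment effect on S,

    \mathbb{E}\!\left[Y(1) - Y(0)\right] \approx g\!\left(\mathbb{E}\!\left[S(1) - S(0)\right]\right)

for some known or learnable function g; the talk concerns learning an interpretable composite S with this property from observational data.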

Location: Alan Turing

Organizer: Louis Ohl


Past seminars

Fall 2025, Spring 2025, Fall 2024, Spring 2024, Fall 2023, Spring 2023, Fall 2022, Spring 2022, Spring 2021, Spring 2020, Fall 2019, Spring 2019, Fall 2018, Spring 2018, Fall 2017, Spring 2017, Fall 2016, Spring 2016, Fall 2015, Spring 2015, Fall 2014

Page responsible: Louis Ohl