IDA Machine Learning Seminars - Fall 2025

The IDA Machine Learning Seminars are a series of research presentations given by nationally and internationally recognized researchers in the field of machine learning.


Wednesday, September 10, 2025, at 13:30
SDE Matching: Scalable Variational Inference for Stochastic Differential Equations
Christian Naesseth, University of Amsterdam

Abstract: The Latent Stochastic Differential Equation (SDE) is a powerful tool for time series and sequence modeling. However, training Latent SDEs typically relies on adjoint sensitivity methods, which depend on simulation, discretisation, and backpropagation through approximate SDE solutions, limiting scalability. In this work, we propose SDE Matching, a new simulation- and discretisation-free method for training Latent SDEs. Inspired by modern Score- and Flow Matching algorithms for learning generative dynamics, we extend these ideas to the domain of stochastic dynamics for time series and sequence modeling, eliminating the need for costly numerical simulations. Our results demonstrate that SDE Matching achieves performance comparable to that of adjoint sensitivity methods while drastically reducing computational complexity.

Location: Alan Turing

Organizer: Fredrik Lindsten


Thursday, November 6, 2025, at 14:00
Equivariant Neural Diffusion for Molecule Generation
Mikkel N. Schmidt, Technical University of Denmark, DTU Compute

Abstract: I present Equivariant Neural Diffusion (END) — an approach for generating 3D molecular structures that fully respects Euclidean symmetries. Unlike existing equivariant diffusion models, END introduces a learnable forward process that adapts dynamically to both time and data, while remaining equivariant to rigid transformations. This flexibility improves the generative capabilities of the model and opens up new possibilities for molecular design. Results on standard benchmarks show that END achieves competitive performance in both unconditional and conditional molecule generation tasks.

Location: John von Neumann

Organizer: Fredrik Lindsten


Tuesday, November 11, 2025, at 13:30
An introduction to multi-marginal optimal transport – with applications to control theory (and machine learning)
Isabel Haasler, Uppsala University

Abstract: Optimal transport is a mathematical framework originally developed to study the most cost-efficient way to move mass from one distribution to another. The Schrödinger bridge problem is a stochastic version of this problem, which is essentially a maximum likelihood estimation problem between two distributions. In recent years, optimal transport and Schrödinger bridges have found powerful new applications in machine learning and control theory. In this talk I will introduce an extension of the problem to multiple distributions. We will see how such multi-marginal problems with additional structure appear in control and estimation problems for large populations of agents. Moreover, I will point to several connections to machine learning applications, in particular generative models.

Location: John von Neumann

Organizer: Louis Ohl


Wednesday, December 4, 2025, at 13:30
HTBoost: data efficient learning via hybrid tree boosting
Paolo Giordani, BI Norwegian Business School

Abstract: We develop HTBoost (Hybrid Tree Boosting), a model designed to retain the strengths of gradient boosting machines while addressing some of their limitations. Hybrid trees may contain any mix of standard hard splits and more flexible soft splits, adopting a two-step procedure to escape the local minima in which soft splits are sometimes trapped. Additionally, the fitted values of each tree are enhanced by a nonlinear transformation. These modifications increase accuracy when approximating functions that are smooth with respect to at least some features. The efficiency gains come at the cost of much longer computing times, particularly for large samples. We consider some strategies to reduce the computing cost. The performance of HTBoost, XGBoost, and LightGBM is compared in simulations and on a few well-known datasets, where the realized accuracy gains are in line with what is predicted by summary measures of smoothness. Finally, we extend the automatic treatment of missing values to smooth and hybrid trees.

Location: Alan Turing

Organizer: Louis Ohl


Past seminars

Spring 2025 Fall 2024 Spring 2024 Fall 2023 Spring 2023
Fall 2022 Spring 2022 Spring 2021 Spring 2020 Fall 2019
Spring 2019 Fall 2018 Spring 2018 Fall 2017 Spring 2017
Fall 2016 Spring 2016 Fall 2015 Spring 2015 Fall 2014

Page responsible: Louis Ohl