Research
At the end of 2025, I finished my PhD in Statistics, supervised by Dr Jonas Latz and Dr Aretha Teckentrup. My research focused on deep Gaussian processes (deep GPs) and how to apply them efficiently in Bayesian inverse problems. Such inverse problems arise, e.g., in image reconstruction.
Paper introducing STRIDE
In our latest pre-print, we introduce a framework for combining sparse GP regression with a deep GP architecture.
Sparse GP regression is a popular tool to reduce the computational burden of GP regression. Deep GPs are well-suited for modelling non-stationary and multi-scale phenomena. With STRIDE, we provide a way to train deep GPs while making use of the computational advantages of sparse GP regression. You can find our pre-print here:
https://arxiv.org/abs/2505.11355 (and code for all the experiments can be found here:
https://github.com/surbainczyk/stride)
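To illustrate the computational advantage mentioned above, here is a minimal sketch of sparse GP regression with inducing points (a subset-of-regressors-style predictor). This is a generic illustration, not the STRIDE algorithm itself; the kernel, inducing-point locations, and noise level are all assumptions chosen for the toy example.

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel between two sets of points
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, X_star, noise=1e-2):
    # Predictive mean using m inducing points Z: cost O(n m^2)
    # instead of the O(n^3) of full GP regression.
    Kzz = rbf(Z, Z)
    Kzx = rbf(Z, X)
    Kzs = rbf(Z, X_star)
    A = noise * Kzz + Kzx @ Kzx.T + 1e-8 * np.eye(len(Z))  # jitter for stability
    return Kzs.T @ np.linalg.solve(A, Kzx @ y)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(0, 1, 15)[:, None]     # 15 inducing points instead of 200 data points
mu = sparse_gp_predict(X, y, Z, np.array([[0.25], [0.75]]))
```

The point is that all expensive linear algebra happens in the small m-by-m system `A`; a deep GP stacks such layers, and STRIDE trains the stack while keeping this cost structure.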
Paper on Deep Gaussian Process Priors
Our article on deep GP priors for Bayesian image reconstruction has now been published!
Deep Gaussian processes are an intuitive way to model non-stationary data, such as images. That makes them a natural choice as a prior in Bayesian imaging tasks. But generating samples from the posterior is computationally challenging. We solve this by combining the stochastic PDE representation of Matérn-type GPs, rational approximation, and determinant-free MCMC.
Check out our paper here:
https://doi.org/10.1088/1361-6420/add9be (and code for all the experiments can be found here:
https://github.com/surbainczyk/deep_gp_priors)
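The key computational trick above is that the SPDE representation turns a Matérn-type GP into a sparse precision matrix. A minimal 1-D sketch, assuming integer smoothness so no rational approximation is needed (the paper's rational approximation extends this to non-integer exponents); the grid size and kappa are illustrative choices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Sample a Matern-type GP on a 1-D grid via its SPDE representation:
# (kappa^2 - Laplacian)^{alpha/2} u = white noise.
# For integer exponents the discretised operator is sparse, so drawing a
# sample is a sparse solve -- no dense covariance matrix, no determinant.

n, kappa = 500, 20.0
h = 1.0 / n
# Finite-difference Laplacian with homogeneous Dirichlet boundaries
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = kappa**2 * sp.identity(n) - L          # SPDE operator, alpha = 2

rng = np.random.default_rng(1)
w = rng.standard_normal(n) / np.sqrt(h)    # discretised white noise
u = splu(sp.csc_matrix(A)).solve(w)        # one GP sample
```

In the deep GP setting, the field `u` from one layer feeds into the length-scale or amplitude of the next layer's SPDE, which is what makes the prior non-stationary.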
Master’s Thesis and Paper on Risk-Averse Optimisation
My master’s thesis at TU Munich was supervised by Dr Brendan Keith and Prof Barbara Wohlmuth. During that time, I worked on an algorithm that optimises the shape of a structure under load to minimise the strain energy in the presence of uncertain parameters. Working on my thesis exposed me to a wide range of topics: stochastic optimisation, adaptive sampling, risk measures, shape optimisation, finite elements, linear elasticity, sequential quadratic programming, …
The main question boils down to how many samples one needs at each iteration of an SGD-type algorithm (hence “adaptive sampling”).
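A minimal sketch of the idea, using a norm-test-style rule in the spirit of Byrd et al.: grow the batch whenever the sampled gradient is too noisy relative to its magnitude. This is a generic illustration, not the thesis algorithm; the threshold theta, growth factor, and toy objective are all assumptions.

```python
import numpy as np

def adaptive_sgd(grad_fn, x0, n_max=512, theta=0.5, lr=0.1, iters=100):
    # SGD where the batch size is chosen adaptively: if the sample
    # variance of the averaged gradient exceeds theta^2 * ||g||^2
    # (the "norm test" fails), double the batch size.
    x, batch = x0, 8
    rng = np.random.default_rng(0)
    for _ in range(iters):
        grads = np.array([grad_fn(x, rng) for _ in range(batch)])
        g = grads.mean(axis=0)
        var_of_mean = grads.var(axis=0, ddof=1).sum() / batch
        if var_of_mean > theta**2 * np.dot(g, g):
            batch = min(2 * batch, n_max)   # gradient too noisy: more samples
        x = x - lr * g
    return x, batch

# Toy problem: minimise E[(x - Z)^2] / 2 with Z ~ N(1, 0.5^2),
# whose stochastic gradient is x - z and whose minimiser is x = 1.
grad = lambda x, rng: x - rng.normal(1.0, 0.5, size=x.shape)
x_opt, final_batch = adaptive_sgd(grad, np.array([5.0]))
```

Early on the gradient is large, so small batches pass the test; near the optimum the gradient shrinks and the batch grows, which is exactly the trade-off the adaptive-sampling question concerns.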
I handed in my master’s thesis in 2020 and stayed on for a few more months to contribute to a paper on adaptive sampling under constraints. The paper has now been published (open access!) and can be found here:
https://academic.oup.com/imajna/advance-article/doi/10.1093/imanum/drac083/6991354