Room: PLB-DS05, Pam Liversidge Building


8:30-9:00 Arrivals

9:00-9:10 Welcome

9:10-10:10 Constraining Gaussian processes by variational Fourier features
Arno Solin, Aalto University [slides]

10:10-10:40 Coffee Break

10:40-11:40 Uncertainty in compositional models with application to temporal alignment
Ieva Kazlauskaite, University of Bath [slides]

11:40-12:40 Active Multi-Information Source Bayesian Quadrature
Maren Mahsereci, Amazon Research Cambridge [slides]

12:40-13:50 Lunch

13:50-15:00 Invariances in Gaussian processes and how to learn them
Mark van der Wilk and ST John, PROWLER.io [slides]

15:00-15:30 Tea Break

15:30-16:30 Learning unknown forces in nonlinear models with Gaussian processes and autoregressive flows
Wil Ward, University of Sheffield [slides]



Abstracts


Constraining Gaussian processes by variational Fourier features

Abstract

Gaussian processes (GPs) provide a powerful framework for extrapolation, interpolation, and noise removal in regression and classification. In this talk we consider constraining GPs to arbitrarily-shaped domains with boundary conditions. We solve a Fourier-like generalised harmonic feature representation of the GP prior in the domain of interest, which both constrains the GP and attains a low-rank representation that is used for speeding up inference. The method scales as O(nm^2) in prediction and O(m^3) in hyperparameter learning for regression, where n is the number of data points and m the number of features. Furthermore, we make use of the variational approach to allow the method to deal with non-Gaussian likelihoods. The experiments cover both simulated and empirical data in which the boundary conditions allow for inclusion of additional physical information.
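To make the O(nm^2)/O(m^3) scaling concrete, here is a minimal sketch (not the speaker's code) of reduced-rank GP regression with harmonic features on an interval, in the spirit of the talk: Laplacian eigenfunctions satisfying a Dirichlet boundary condition replace the kernel, and the feature weights carry the kernel's spectral density as their prior variance. The domain, kernel, and data below are illustrative assumptions.

```python
import numpy as np

L = 2.0          # half-width of the domain [-L, L]; boundary condition f(+-L) = 0
m = 32           # number of harmonic features
lengthscale, variance, noise = 0.4, 1.0, 0.1

def phi(x, j):
    """Laplacian eigenfunctions on [-L, L] with Dirichlet boundaries."""
    return np.sqrt(1.0 / L) * np.sin(np.pi * j * (x + L) / (2 * L))

def spectral_density(w):
    """Spectral density of the squared-exponential kernel."""
    return variance * np.sqrt(2 * np.pi) * lengthscale * np.exp(-0.5 * (w * lengthscale) ** 2)

js = np.arange(1, m + 1)
sqrt_eig = np.pi * js / (2 * L)             # square roots of the Laplacian eigenvalues
S = spectral_density(sqrt_eig)              # prior variance of each feature weight

# Toy data inside the domain
rng = np.random.default_rng(0)
x = rng.uniform(-L, L, 50)
y = np.sin(2 * x) + np.sqrt(noise) * rng.standard_normal(50)

Phi = phi(x[:, None], js[None, :])          # n x m feature matrix; O(nm^2) to form Phi.T @ Phi
A = Phi.T @ Phi + noise * np.diag(1.0 / S)  # m x m system; O(m^3) to solve
w_mean = np.linalg.solve(A, Phi.T @ y)

x_test = np.linspace(-L, L, 200)
f_mean = phi(x_test[:, None], js[None, :]) @ w_mean  # posterior mean; respects f(+-L) = 0
```

Because every basis function vanishes at the boundary, the posterior mean satisfies the boundary condition by construction, which is how the domain constraint enters the prior.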



Uncertainty in compositional models with application to temporal alignment

Abstract

In this talk, I will present a line of work on composite models with applications to the temporal alignment of sequences. Given a set of time series, the temporal alignment task consists of finding monotonic warps of the inputs (which typically correspond to time) that remove the differences in the timing of the observations. There are three intrinsic sources of ambiguity in this problem that motivate the use of probabilistic modelling. Firstly, the temporal alignment problem is ill-posed: there are infinitely many ways to align a finite set of sequences, and we would like to model this warping uncertainty. Secondly, the observed sequences might correspond to multiple different unknown underlying functions, so the assignment of sequences to groups is ambiguous. Thirdly, the observed sequences are often noisy, requiring a principled way to model the observational noise. We introduce a non-parametric probabilistic model of monotonic warps and model each sequence as a composition of such a warp and a standard GP. To represent the warping uncertainty, we study the compositional uncertainty (arising from multiple different compositions of functions resulting in the same overall function) in such a two-layer model. To allow for alignment in multiple groups, and to find these groups in an unsupervised manner, we use probabilistic alignment objectives (such as the GP-LVM or a DPMM). Finally, we discuss the requirements on the inference scheme that allow us to propagate the uncertainty through the model.
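As a minimal illustration of the generative side of such a model (illustrative assumptions, not the talk's implementation), each sequence can be modelled as f(g(t)), where g is a monotonic warp and f a standard GP. One standard construction makes g monotonic by integrating the exponential of a GP draw:

```python
import numpy as np

def se_kernel(a, b, lengthscale, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

# Warp layer: g(t) = normalised cumulative integral of exp(u(t)), u ~ GP.
# Exponentiation makes the integrand positive, so g is monotonic by construction.
u = rng.multivariate_normal(np.zeros(t.size), se_kernel(t, t, 0.3) + 1e-8 * np.eye(t.size))
g = np.cumsum(np.exp(u))
g = (g - g[0]) / (g[-1] - g[0])   # monotonic map of [0, 1] onto [0, 1]

# Function layer: f ~ GP, evaluated at the warped inputs
f = rng.multivariate_normal(np.zeros(t.size), se_kernel(g, g, 0.2) + 1e-8 * np.eye(t.size))
```

Different draws of u shift the timing of f's features without changing their order, which is exactly the warping ambiguity a probabilistic alignment model has to capture: many (u, f) pairs compose to the same observed sequence.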



Active Multi-Information Source Bayesian Quadrature

Abstract

Gaussian processes can be used to solve intractable integrals, especially when evaluating the integrand is expensive and the number of evaluations is restricted by a budget. This general approach is usually framed as Bayesian quadrature. In the talk I will focus on situations where evaluations of the integrand may give too little information to obtain a meaningful integral estimator, but cheaper functions related to the integrand (called secondary sources) can be queried instead. I will discuss active learning strategies that select which source to query (and where) in order to best learn the integral of the primary source. The resulting algorithm is sample-efficient, can handle black-box integrands and secondary sources, and performs well even for a limited number of simulation queries.
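For readers unfamiliar with the single-source building block, here is a minimal sketch of vanilla Bayesian quadrature (the integrand, budget, and hyperparameters are illustrative assumptions): a GP is fitted to integrand evaluations, and the posterior mean of the integral is a weighted sum of those evaluations, with weights derived from the kernel mean.

```python
import numpy as np
from scipy.special import erf

ell, var, noise = 0.25, 1.0, 1e-6            # assumed SE-kernel hyperparameters

def k(a, b):
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def kernel_mean(x):
    """z_i = integral of k(x', x_i) over x' in [0, 1]; closed form for the SE kernel."""
    return var * ell * np.sqrt(np.pi / 2) * (
        erf((1 - x) / (np.sqrt(2) * ell)) + erf(x / (np.sqrt(2) * ell)))

f = lambda x: np.exp(-x) * np.sin(3 * x)     # a stand-in for an expensive integrand
x = np.linspace(0, 1, 8)                     # a small evaluation budget
y = f(x)

# Posterior mean of the integral: z^T (K + noise I)^{-1} y
weights = np.linalg.solve(k(x, x) + noise * np.eye(x.size), kernel_mean(x))
print("BQ estimate:", weights @ y)           # close to the true value, approx 0.404
```

The multi-source setting of the talk extends this picture: the GP then correlates the primary integrand with its cheaper secondary sources, and an acquisition rule trades off the cost of each source against the information it provides about the primary integral.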



Invariances in Gaussian processes and how to learn them

Abstract

When learning mappings from data, knowledge about which modifications to the input leave the output unchanged can strongly improve generalisation. Exploiting these invariances is commonplace in many machine learning models, under the guise of convolutional structure or data augmentation. Choosing which invariances to use, however, is still done with humans in the loop, through trial and error and cross-validation. In this talk, we will discuss how Gaussian processes can be constrained to exhibit invariances, and how this is useful for various applications. We will also show how invariances can be learned with backpropagation using tools from Bayesian model selection.
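A minimal sketch of how an invariance enters a GP through its kernel (assumptions mine, not the speakers' code): summing a base kernel over the orbit of a finite group makes every sample function invariant under that group. Here the group is {identity, reflection}, so every draw satisfies f(x) = f(-x).

```python
import numpy as np

def se(a, b, lengthscale=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def invariant_kernel(a, b):
    """k_inv(x, x') = sum over group elements g, h of k(g x, h x') / |G|^2."""
    group = [lambda x: x, lambda x: -x]
    return sum(se(g(a), h(b)) for g in group for h in group) / len(group) ** 2

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 101)
K = invariant_kernel(x, x) + 1e-8 * np.eye(x.size)
f = rng.multivariate_normal(np.zeros(x.size), K)
assert np.allclose(f, f[::-1], atol=1e-3)    # every draw is symmetric about 0
```

Because the invariance lives in the kernel, its parameters (for instance, the amount of augmentation summed over) sit alongside the other hyperparameters, which is what makes selecting invariances by marginal-likelihood gradients possible in the first place.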



Learning unknown forces in nonlinear models with Gaussian processes and autoregressive flows

Abstract

Heterogeneous dynamical systems, where only part of the dynamics is known, are present in a wide range of applications, including population dynamics, control systems and bioinformatics. Placing a Gaussian process prior over the unknown terms, for example an input signal, allows the model to be restructured as a stochastic differential equation, a so-called latent force model. This talk introduces a simulation-based inference scheme for systems with unknown forces and nonlinear dynamics, using Gaussian process priors and autoregressive flows. We apply the model to nonlinear ODEs, and show how the approach can be easily adapted for multi-task learning and problems with non-Gaussian likelihoods.
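A minimal sketch of the latent force idea (the dynamics and hyperparameters are illustrative assumptions): an ODE with known structure is driven by an unknown force given a GP prior. Below, a first-order system dx/dt = -gamma * x + u(t) is simulated forward under one GP draw for u; inference in the talk runs the other way, recovering u from noisy observations of x.

```python
import numpy as np

def se_kernel(a, b, lengthscale):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(3)
gamma, dt = 2.0, 0.01
t = np.arange(0.0, 5.0, dt)

# Unknown forcing term: one draw from a GP prior
u = rng.multivariate_normal(np.zeros(t.size), se_kernel(t, t, 0.5) + 1e-8 * np.eye(t.size))

# Forward-Euler integration of the known part of the dynamics
x = np.zeros(t.size)
for i in range(1, t.size):
    x[i] = x[i - 1] + dt * (-gamma * x[i - 1] + u[i - 1])
# Observing a noisy x and inverting this generative process to recover u
# is the learning problem the talk's simulation-based scheme addresses.
```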