Room: PLB-DS05, Pam Liversidge Building


8:00-9:00 Arrivals

9:00-9:15 Welcome - Neil Lawrence, University of Sheffield

9:15-10:15 TBA - Alan Saul, University of Sheffield

10:15-10:30 Coffee Break

10:30-11:30 Probabilistic Programming with GPs - Dustin Tran, OpenAI and Columbia University

11:30-12:30 On Bayesian model selection and model averaging - Aki Vehtari, Aalto University

12:30-13:30 Lunch

13:30-14:30 Implicit Models and Posterior Approximations - Rajesh Ranganath, Princeton University

14:30-15:00 Tea Break

15:00-16:00 A Unifying Framework for Sparse Gaussian Process Approximation using Power Expectation Propagation - Richard Turner, University of Cambridge [slides]

16:00-17:00 Panel and Discussion



Abstracts


Probabilistic Programming with GPs

Abstract

Probabilistic modeling is a powerful approach for analyzing empirical information. In this talk, I will provide an overview of Edward, a software library for probabilistic modeling. Formally, Edward is a probabilistic programming system built on computational graphs, supporting compositions of both models and inference for flexible experimentation. Edward is also integrated into TensorFlow, enabling large-scale experiments on multi-GPU, multi-machine environments. In particular, I will show how to apply Gaussian processes in Edward for two purposes: to build deep probabilistic models for representation learning and to build flexible variational approximations for accurate Bayesian inference.
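The talk centres on composing GPs inside a probabilistic program and handing them to a generic inference engine. As a concrete point of reference, the sketch below implements the exact GP regression posterior in plain NumPy; it deliberately does not use Edward's API, and the RBF kernel, noise level, and toy sine data are illustrative assumptions only. It is this kind of GP building block that a probabilistic program over GPs would declare and then compose into deeper models or variational approximations.

```python
# Library-agnostic sketch of the GP regression building block composed in a
# probabilistic program over GPs. Not Edward's API; kernel and data are toy choices.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sq_dists = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var=0.1):
    """Exact GP posterior mean and variance at X_test under a zero-mean prior."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    K_ss_diag = np.diag(rbf_kernel(X_test, X_test))
    L = np.linalg.cholesky(K)                                  # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    var = K_ss_diag - np.sum(v ** 2, axis=0)
    return mean, var

# Toy usage: noisy observations of a sine function.
rng = np.random.default_rng(0)
X_train = np.linspace(-3, 3, 20)
y_train = np.sin(X_train) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-4, 4, 50)
mean, var = gp_posterior(X_train, y_train, X_test)
print(mean[:5], var[:5])
```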



On Bayesian model selection and model averaging

Abstract

I will provide an overview of some recent advances in Bayesian model selection and model averaging in the M-open setting, in which the true data-generating process is not among the candidate models. I will also discuss more specific implementation issues for these methods in the case of Gaussian process models.
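To make the M-open comparison concrete, here is a hedged sketch of leave-one-out (LOO) model comparison with softmax ("pseudo-BMA"-style) averaging weights. The brute-force refitting, the two linear-in-features candidate models, and the fixed noise level are illustrative assumptions; the methods discussed in the talk (such as PSIS-LOO and stacking) are considerably more refined than this.

```python
# Hedged sketch: compare candidate models by their leave-one-out (LOO) predictive
# densities and form softmax ("pseudo-BMA"-style) averaging weights.
import numpy as np
from scipy.stats import norm

def loo_log_pred_density(X, y, noise_sd=0.2):
    """Sum of log predictive densities, refitting the model with each point left out."""
    lpd = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        params = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]  # fit without point i
        mu_i = params @ X[i]                                       # predictive mean for point i
        lpd += norm.logpdf(y[i], loc=mu_i, scale=noise_sd)
    return lpd

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = 0.5 * x + 0.3 * x**2 + 0.2 * rng.standard_normal(40)

# Candidate models: linear vs quadratic features (neither need be "true" in M-open).
X_lin = np.column_stack([np.ones_like(x), x])
X_quad = np.column_stack([np.ones_like(x), x, x**2])
lpds = np.array([loo_log_pred_density(X_lin, y), loo_log_pred_density(X_quad, y)])

weights = np.exp(lpds - lpds.max())
weights /= weights.sum()                                           # model-averaging weights
print("LOO log predictive densities:", lpds)
print("pseudo-BMA-style weights:", weights)
```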



Implicit Models and Posterior Approximations

Abstract

Probabilistic generative models tell stories about how data were generated. These stories uncover hidden patterns (latent states) and form the basis for predictions. Traditionally, probabilistic generative models provide a score for generated samples via a tractable likelihood function. The requirement of the score limits the flexibility of these models. For example, in many physical models we can generate samples but not compute their likelihood; such models, defined only by their sampling process, are called implicit models. In the first part of the talk I will present a family of hierarchical Bayesian implicit models. The main computational task in working with probabilistic generative models is computing the distribution of the latent states given data: posterior inference. Posterior inference cast as optimization over an approximating family is variational inference. The accuracy of variational inference hinges on the expressivity of the approximating family. In the second part of this talk, I will explore the role of implicit distributions in forming variational approximations.
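To illustrate what "defined only by their sampling process" means in practice, the sketch below builds a toy implicit model and fits its parameter with a crude rejection-ABC step, a standard baseline for likelihood-free inference. The simulator, prior range, summary statistics, and tolerance are all illustrative assumptions; the hierarchical implicit models and implicit variational approximations discussed in the talk go well beyond this.

```python
# Hedged illustration of an implicit model: a stochastic simulator we can sample
# from but whose output density is intractable, fitted here by rejection ABC.
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=200):
    """Implicit model: nonlinear transform of noise; the output density is intractable."""
    z = rng.standard_normal(n)
    return np.tanh(theta * z) + 0.1 * rng.standard_normal(n)

# "Observed" data generated at an unknown parameter value.
theta_true = 1.5
x_obs = simulator(theta_true)
obs_summary = np.array([x_obs.mean(), x_obs.std()])

# Rejection ABC: keep prior draws whose simulated summaries lie close to the data's.
prior_draws = rng.uniform(0.1, 3.0, size=5000)
accepted = []
for theta in prior_draws:
    x_sim = simulator(theta)
    sim_summary = np.array([x_sim.mean(), x_sim.std()])
    if np.linalg.norm(sim_summary - obs_summary) < 0.1:
        accepted.append(theta)

if accepted:
    print(f"accepted {len(accepted)} draws; posterior mean ~ {np.mean(accepted):.2f}")
else:
    print("no draws accepted; increase the tolerance")
```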



A Unifying Framework for Sparse Gaussian Process Approximation using Power Expectation Propagation

Abstract

The application of GPs is limited by computational and analytical intractabilities that arise when data are sufficiently numerous or when employing non-Gaussian models. A wealth of GP approximation schemes has been developed over the last 15 years to address these key limitations. Many of these schemes employ a small set of pseudo data points to summarise the actual data. We have developed a new pseudo-point approximation framework using Power Expectation Propagation (Power EP) that unifies a large number of these pseudo-point approximations. The new framework is built on standard methods for approximate inference (variational free-energy, EP and Power EP methods) rather than employing approximations to the probabilistic generative model itself. In this way, all of the approximation is performed at 'inference time' rather than at 'modelling time', resolving awkward philosophical and empirical questions that trouble previous approaches. Crucially, we demonstrate that the new framework includes new pseudo-point approximation methods that outperform current approaches on regression, classification and state-space modelling tasks in batch and online settings.
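As a reference for what the pseudo-point schemes being unified look like computationally, here is a hedged NumPy sketch of the shared inducing-point predictive mean: with the diagonal correction it is FITC-like, and without it DTC/VFE-like. The kernel, data, number of pseudo-points, and the omission of the predictive variance are illustrative simplifications, and the Power EP interpolation between these cases (the contribution described in the talk) is not reproduced here.

```python
# Hedged sketch of the pseudo-point (inducing-point) predictive mean shared by
# FITC- and DTC/VFE-style sparse GP approximations.
import numpy as np

def rbf(X1, X2, ell=1.0, sf2=1.0):
    return sf2 * np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell ** 2)

def sparse_gp_mean(X, y, Z, X_test, noise_var=0.1, fitc_correction=True):
    """Predictive mean at X_test using M pseudo-points at inducing inputs Z."""
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))                      # jitter for stability
    Kuf = rbf(Z, X)
    Qff_diag = np.sum(Kuf * np.linalg.solve(Kuu, Kuf), axis=0)   # diag of Kfu Kuu^{-1} Kuf
    if fitc_correction:
        Kff_diag = rbf(X, X).diagonal()                          # prior variances at training inputs
        lam = Kff_diag - Qff_diag + noise_var                    # FITC-style diagonal correction
    else:
        lam = np.full(len(X), noise_var)                         # DTC/VFE-style Gaussian noise only
    A = Kuu + (Kuf / lam) @ Kuf.T                                # Kuu + Kuf Lambda^{-1} Kfu
    Ksu = rbf(X_test, Z)
    return Ksu @ np.linalg.solve(A, (Kuf / lam) @ y)             # K*u A^{-1} Kuf Lambda^{-1} y

rng = np.random.default_rng(3)
X = np.linspace(-3, 3, 100)
y = np.sin(X) + 0.1 * rng.standard_normal(100)
Z = np.linspace(-3, 3, 10)        # M = 10 pseudo-points summarise N = 100 data points
X_test = np.linspace(-3, 3, 5)
print(sparse_gp_mean(X, y, Z, X_test))
```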