Pathwise Conditioning and Non-Euclidean Gaussian Processes (via Zoom)

Abstract: In Gaussian processes, conditioning and computation of posterior distributions are usually done in a distributional fashion by working with finite-dimensional marginals. However, there is another way to think about conditioning: using actual random functions rather than their probability distributions. This perspective is particularly helpful in decision-theoretic settings such as Bayesian optimization, where it enables efficient computation of a wider class of acquisition functions than would otherwise be possible. In this talk, we describe these recent advances and discuss their broader implications for Gaussian processes. We then present a class of Gaussian process models on graphs and manifolds, which make it possible to perform Bayesian optimization while taking symmetries and constraints into account in an intrinsic manner.
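As background for the pathwise view of conditioning mentioned in the abstract, the sketch below illustrates Matheron's rule: a posterior sample is obtained by drawing a prior function and correcting it using the residual at the observed data, rather than by sampling from the posterior's finite-dimensional marginals. This is a minimal illustration, not the speaker's implementation; the kernel choice and function names are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: variance * exp(-|a - b|^2 / (2 * lengthscale^2))
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def pathwise_posterior_sample(x_train, y_train, x_test, noise_var=0.1, rng=None):
    """Draw one posterior sample via Matheron's rule:
    f_post(x*) = f(x*) + K(x*, X) (K(X, X) + s^2 I)^{-1} (y - f(X) - eps),
    where f is a joint prior draw over train and test inputs and eps is fresh noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_all = np.concatenate([x_train, x_test])
    k_all = rbf_kernel(x_all, x_all) + 1e-9 * np.eye(len(x_all))  # jitter for stability

    # Joint prior sample over training and test locations
    f_prior = rng.multivariate_normal(np.zeros(len(x_all)), k_all)
    f_train, f_test = f_prior[: len(x_train)], f_prior[len(x_train):]

    # Pathwise update: correct the prior sample using the residual at the data
    eps = rng.normal(0.0, np.sqrt(noise_var), size=len(x_train))
    k_xx = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    k_sx = rbf_kernel(x_test, x_train)
    update = k_sx @ np.linalg.solve(k_xx, y_train - f_train - eps)
    return f_test + update
```

Because the output is an actual sampled function value rather than a marginal distribution, such samples can be plugged directly into downstream computations, for example when evaluating acquisition functions in Bayesian optimization.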

Bio: Alexander Terenin is a Postdoctoral Research Associate at the University of Cambridge. He is interested in statistical machine learning, particularly in settings where the data is not fixed, but is gathered interactively by the learning machine. This leads naturally to Gaussian processes and data-efficient interactive decision-making systems such as Bayesian optimization, to areas such as multi-armed bandits and reinforcement learning, and to techniques for incorporating inductive biases and prior information such as symmetries into machine learning models.