DOCTORAL SEMINAR

New Advances in Bayesian Inference for Gaussian Process and Deep Gaussian Process Models

Speaker
Mr Yu Haibin
Advisor
Dr Low Kian Hsiang, Associate Professor, School of Computing


14 Oct 2019 Monday, 02:00 PM to 03:30 PM

Executive Classroom, COM2-04-02

Abstract:

Gaussian processes (GPs) are a rich class of Bayesian nonparametric models that can exploit correlations among the data/observations to perform probabilistic non-linear regression, providing a Gaussian predictive distribution with formal measures of predictive uncertainty. Though highly expressive, the applicability of GPs to large datasets and to hierarchical compositions of GPs is severely limited by analytic and computational intractabilities. It is therefore crucial to develop accurate and efficient inference algorithms to address these challenges. To this end, this thesis proposes a series of novel Bayesian inference methods for a wide variety of inducing variable-based GP models, which unify the previous literature, significantly extend it, and achieve new state-of-the-art performance.
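
For concreteness, the Gaussian predictive distribution referred to here is the textbook GP regression posterior (notation ours, not the thesis's): given training inputs $X$ with noisy observations $\mathbf{y}$, noise variance $\sigma^2$, and kernel $k$, the prediction at a test input $x_*$ is

    \[
    p(f_* \mid X, \mathbf{y}, x_*) = \mathcal{N}(\mu_*, \sigma_*^2), \qquad
    \mu_* = \mathbf{k}_*^\top (K + \sigma^2 I)^{-1} \mathbf{y}, \qquad
    \sigma_*^2 = k(x_*, x_*) - \mathbf{k}_*^\top (K + \sigma^2 I)^{-1} \mathbf{k}_*,
    \]

where $K$ is the $n \times n$ kernel matrix over $X$ and $\mathbf{k}_*$ holds the kernel values between $X$ and $x_*$. The $(K + \sigma^2 I)^{-1}$ factor incurs the $O(n^3)$ cost that makes exact GP inference impractical on large datasets.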

To start with, this thesis presents a unifying perspective of existing inducing variable-based GP models: sparse GP (SGP) models and variational inference for SGP models (VSGP). Then, we present a novel variational inference framework for deriving a family of Bayesian SGP regression models, referred to as variational Bayesian SGP (VBSGP) regression models, which can recover many existing SGP regression models under certain noise structures. We empirically evaluate the performance of the VBSGP regression models on various datasets, including two massive real-world datasets.
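
As a rough illustration of the inducing-variable machinery that these models share, the sketch below computes the classic DTC/VFE-style SGP predictive distribution with m inducing inputs, at O(nm^2) rather than O(n^3) cost. This is a minimal NumPy sketch of generic SGP prediction under our own simplifying assumptions, not the thesis's VBSGP models (whose differing noise structures are precisely what distinguish the recovered variants); all names are illustrative.

    import numpy as np

    def rbf(A, B, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel matrix between row-stacked inputs A and B.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return variance * np.exp(-0.5 * sq / lengthscale**2)

    def sgp_predict(X, y, Z, Xs, noise_var=0.1):
        # Predictive mean/variance at test inputs Xs from m inducing inputs Z.
        Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))  # m x m inducing covariance (jittered)
        Kuf = rbf(Z, X)                          # m x n cross-covariance, training side
        Kus = rbf(Z, Xs)                         # m x s cross-covariance, test side
        Sigma = Kuu + Kuf @ Kuf.T / noise_var    # m x m system matrix: O(n m^2) to form
        mean = Kus.T @ np.linalg.solve(Sigma, Kuf @ y) / noise_var
        cov = (rbf(Xs, Xs)
               - Kus.T @ np.linalg.solve(Kuu, Kus)     # Nystroem correction
               + Kus.T @ np.linalg.solve(Sigma, Kus))  # uncertainty over inducing outputs
        return mean, np.diag(cov)

    # Usage: n = 2000 training points summarized by m = 30 inducing inputs.
    X = np.random.rand(2000, 1); y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(2000)
    Z = np.linspace(0, 1, 30)[:, None]; Xs = np.linspace(0, 1, 5)[:, None]
    mean, var = sgp_predict(X, y, Z, Xs)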

Next, since the expressiveness of GPs and SGPs depends heavily on the design of the kernel function, we further extend the expressive power of GPs by introducing the Deep GP (DGP), a hierarchical composition of GP models. Unfortunately, exact inference in DGPs is intractable, which has motivated the recent development of deterministic and stochastic approximation methods. However, the deterministic approximation methods yield a biased posterior belief, while the stochastic ones are computationally costly. In this regard, we present an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief while preserving time efficiency. Inspired by generative adversarial networks, our IPVI framework casts the DGP inference problem as a two-player game in which a Nash equilibrium, interestingly, coincides with an unbiased posterior belief. This consequently inspires us to devise a best-response dynamics algorithm to search for a Nash equilibrium (i.e., an unbiased posterior belief). Empirical evaluations show that IPVI outperforms the state-of-the-art approximation methods for DGPs.
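
To convey the best-response idea in code, the skeleton below alternates a discriminator step, which estimates the log density ratio between the implicit posterior and the prior exactly as a GAN discriminator would, with a generator step that uses this ratio as a KL surrogate. Everything here is an illustrative placeholder under our own assumptions: the network shapes, the standard-normal stand-in for the GP prior, and the omission of the expected log-likelihood term that the actual IPVI objective also contains.

    import torch
    import torch.nn.functional as F

    gen = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 4))   # implicit posterior sampler
    disc = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                               torch.nn.Linear(64, 1))  # log-density-ratio estimator
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

    def prior_sample(n):
        # Illustrative stand-in for sampling inducing outputs from the GP prior.
        return torch.randn(n, 4)

    for step in range(1000):
        eps = torch.randn(128, 8)  # noise pushed through the generator
        # Discriminator best response: logistic loss separating posterior from
        # prior samples; at optimum its output approximates log q(u) - log p(u).
        opt_d.zero_grad()
        loss_d = (F.softplus(-disc(gen(eps).detach())).mean()
                  + F.softplus(disc(prior_sample(128))).mean())
        loss_d.backward()
        opt_d.step()
        # Generator best response: E_q[log q - log p] is the KL part of the ELBO;
        # a Nash equilibrium of this alternating game is the sought posterior.
        opt_g.zero_grad()
        loss_g = disc(gen(eps)).mean()
        loss_g.backward()
        opt_g.step()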

Finally, as a continuation of this thesis, we introduce a novel variational inference framework for DGP models based on the notion of Normalizing Flows (NFs); this work will be completed in the coming months and presented in the final thesis.
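
For context, the appeal of NFs here is the exact density they yield: pushing a sample $z_0 \sim q_0$ through invertible maps $f_1, \ldots, f_K$ produces $z_K$ whose log-density follows from the standard change-of-variables identity

    \[
    \log q_K(z_K) = \log q_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|,
    \]

giving a variational posterior that is flexible yet has a tractable density; how such flows are composed with the DGP layers is part of the forthcoming work.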