PH.D. DEFENCE - PUBLIC SEMINAR

Incremental Learning in Non-stationary Environments

Speaker
Mr Abhinit Kumar Ambastha
Advisor
Dr Leong Tze Yun, Professor, School of Computing


01 Mar 2023 Wednesday, 02:00 PM to 03:30 PM

MR20, COM3-02-59

Abstract:

Machine learning tasks include solving classification and regression problems that rely on recognizing patterns in input data. Directly updating a classification or regression model with new data from a different environment would overwrite previously learned knowledge and thus require re-training from scratch. This, in turn, demands a large amount of labelled data and increases training time and cost. While recent developments in transfer learning, continual learning, and domain adaptation have addressed some of these issues, the problem of incrementally updating a classification model using unlabelled or heterogeneous data remains largely unsolved. Likewise, incrementally updating classification models with data containing previously unseen classes has not been adequately explored. In this work, we view updating a model with sequential data increments as a single non-stationary learning task. We explore unsupervised incremental learning methods to update deep neural network-based classifiers using small batches of unlabelled data.

We propose a series of incremental learning approaches that adapt existing models to new environments; our methods retain and update previously learned knowledge, and they require neither re-training the model from scratch nor storing previously observed data.

First, we propose a homogeneous, domain-focused incremental learning approach that updates a classifier using unlabelled data sampled from the same feature space as the past data but under different data distributions. We learn a common latent feature space from the past and new data. To avoid storing past samples, we instead learn Gaussian posterior estimates of the training data. The goal is to update a model sequentially on new datasets without degrading its performance.
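To make this idea concrete, the sketch below shows one plausible way per-class Gaussian statistics in a latent space could stand in for stored samples. It is an illustrative assumption, not the method presented in the thesis; the function names (fit_class_gaussians, sample_pseudo_features) are hypothetical.

    import numpy as np

    def fit_class_gaussians(latents, labels):
        """Estimate a Gaussian (mean, covariance) per class from latent codes."""
        stats = {}
        for c in np.unique(labels):
            z = latents[labels == c]
            stats[c] = (z.mean(axis=0), np.cov(z, rowvar=False))
        return stats

    def sample_pseudo_features(stats, n_per_class, rng):
        """Draw pseudo latent features from the stored Gaussians; these stand
        in for past data when the classifier is updated on a new increment."""
        zs, ys = [], []
        for c, (mu, cov) in stats.items():
            zs.append(rng.multivariate_normal(mu, cov, size=n_per_class))
            ys.append(np.full(n_per_class, c))
        return np.concatenate(zs), np.concatenate(ys)

In an update step, pseudo-features drawn this way could be mixed with the encoded new batch, so the classifier sees both the old and the new distributions without any raw past data being retained.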

Second, we propose a heterogeneous, feature-focused incremental learning approach for settings where data with the same features cannot be obtained in the new environments. We use a variational inference approach to learn a new latent feature space. The goal is to update a model sequentially for tasks whose feature spaces differ from those of previously learned tasks, or in applications where acquiring similar data is expensive or time-consuming.
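A minimal sketch of such a variational encoder is given below, assuming a PyTorch implementation; the layer sizes, the standard-normal prior, and the class name VariationalEncoder are illustrative choices, not the thesis's actual architecture.

    import torch
    import torch.nn as nn

    class VariationalEncoder(nn.Module):
        """Maps a new, differently sized feature space into the latent space
        already used by the existing classifier (dimensions are assumptions)."""

        def __init__(self, new_input_dim, latent_dim):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(new_input_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent_dim)       # posterior mean
            self.log_var = nn.Linear(128, latent_dim)  # posterior log-variance

        def forward(self, x):
            h = self.backbone(x)
            mu, log_var = self.mu(h), self.log_var(h)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
            # KL divergence from the standard-normal prior, used as a regularizer
            kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1)
            return z, kl.mean()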

Third, we propose a general, task-focused incremental learning approach that learns new classes from new unlabelled datasets with different input data distributions. We consider the possibility that the new datasets contain instances from previously unobserved classes. In such cases, existing methods cannot be applied directly, as doing so would misclassify instances from the unobserved classes. The proposed approach requires only a few labelled instances to learn unobserved classes in the new environment and can learn new tasks without adding a large number of model parameters.
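The sketch below illustrates one plausible mechanism under these assumptions: latent codes that are unlikely under every known class Gaussian are flagged as candidate unseen-class instances, and a new class is then registered from a few labelled examples without adding network parameters. The log-density threshold and the shrinkage constant are hypothetical values, not figures from the thesis.

    import numpy as np
    from scipy.stats import multivariate_normal

    def is_unseen(z, stats, log_density_threshold=-50.0):
        """True if latent code z is unlikely under every known class Gaussian."""
        scores = [multivariate_normal.logpdf(z, mean=mu, cov=cov, allow_singular=True)
                  for mu, cov in stats.values()]
        return max(scores) < log_density_threshold

    def register_new_class(stats, new_label, few_shot_latents):
        """Add a Gaussian for an unseen class from a few labelled latent codes;
        only distribution statistics are stored, no new network parameters."""
        mu = few_shot_latents.mean(axis=0)
        # Shrink toward the identity: a handful of samples gives an
        # unreliable, possibly singular covariance estimate.
        cov = np.cov(few_shot_latents, rowvar=False) + 0.1 * np.eye(len(mu))
        stats[new_label] = (mu, cov)
        return stats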

We evaluate our methods on sequential problems in clinical diagnosis, sentiment analysis, and pattern recognition, and compare them with state-of-the-art methods in transfer learning, continual learning, and domain adaptation. Our methods perform comparably to supervised incremental learning approaches, which require labelled data for model updates or access to stored past training data. The proposed approaches also learn a more compact knowledge representation than ensemble-based methods, which grow by adding new parameters, while incurring minimal performance degradation on past data. Finally, we discuss the limitations and future enhancements of the proposed methods.