Deep Multi-Task Learning for Face and Human Analysis
25 Jun 2019 Tuesday, 01:00 PM to 02:30 PM
COM2 Level 4
Executive Classroom, COM2-04-02
Face and human analysis in images is an important area in computer vision and has seen substantial research effort and many real-world applications. The objective of face and human analysis is to automatically acquire high-level semantic information from human-centric images. Face and human analysis encompasses many tasks, such as face/human detection, face/human attribute classification, and face/human parsing, and it enables numerous applications, including surveillance, autonomous driving, and fashion analysis. Traditionally, each face and human analysis task is tackled by one tailor-made model. However, in most real-world scenarios, people are often interested in more than one task at a time. Thus, multi-task learning based models are favorable and are attracting increasing research attention in the area of face and human analysis.
In this thesis, we aim to achieve two objectives: (1) apply multi-task deep learning based models to face and human analysis tasks; (2) investigate and improve commonly used deep multi-task learning frameworks by addressing potential problems within multi-task learning. The first objective is motivated by the demands of real-world scenarios, where multiple tasks need to be performed simultaneously. Unlike traditional single-task learning models, we design one unified model to learn multiple face and human analysis tasks using the multi-task learning strategy. We demonstrate that deep multi-task learning can be used to perform face attribute classification, with up to 40 face attributes classified simultaneously by one model. We also demonstrate that two challenging pixel-level classification tasks, i.e., human parsing and human instance segmentation, can be addressed within one model to achieve fine-grained human analysis in images. The second objective, built on the commonly used deep multi-task learning architecture, explores how to further leverage the mutual information among tasks within multi-task learning. To achieve this objective, we model the interactions and relations among tasks. For task interaction modelling, we propose an integrated face analytics network to explicitly enable the interactions of multiple tasks. For task relation modelling, we propose a task relation network to leverage the similarities between tasks in multi-task learning.
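The unified-model idea described above can be illustrated with a minimal sketch of a shared-trunk, multi-head network. This is not the thesis's actual architecture; the layer sizes, the two task heads (40 attributes and a 20-class parsing-style output), and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: one hidden layer whose features all tasks reuse.
W_shared = rng.normal(size=(128, 64)) * 0.01

# Task-specific heads (sizes are illustrative assumptions):
# head 1 predicts 40 binary face attributes,
# head 2 predicts a 20-class parsing-style label.
W_attr = rng.normal(size=(64, 40)) * 0.01
W_parse = rng.normal(size=(64, 20)) * 0.01

def forward(x):
    """One shared representation feeds every task head."""
    h = relu(x @ W_shared)        # shared features
    attr_logits = h @ W_attr      # attribute classification head
    parse_logits = h @ W_parse    # parsing-style classification head
    return attr_logits, parse_logits

x = rng.normal(size=(8, 128))     # a batch of 8 input feature vectors
attr_logits, parse_logits = forward(x)
print(attr_logits.shape, parse_logits.shape)  # (8, 40) (8, 20)
```

In training, the per-task losses computed from these heads would be summed (possibly with weights), so gradients from every task update the shared trunk, which is the mechanism by which multi-task learning shares information across tasks.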