PH.D. DEFENCE - PUBLIC SEMINAR

Understanding Model Explainability and Privacy for Facial Recognition Applications

Speaker
Ms Zhao Xuejun
Advisor
Dr Brian Lim Youliang, Associate Professor, School of Computing


02 Sep 2024, Monday, 10:30 AM to 12:00 PM

Via Zoom

Abstract:

The use of facial data in AI applications raises privacy concerns and requires building trust through explanations.

1) We argue that Explainable AI (XAI) can pose a privacy risk. While current model inversion attacks can reconstruct input data from model predictions, we identified an opportunity to exploit model explanations to enhance these attacks. Our XAI-aware inversion attacks improve reconstruction across various model architectures and accommodate different types of explanations.

2) We propose incorporating psychological theories of human face perception to make the evaluation of de-identified faces more human-interpretable. Current evaluations measure de-identified faces at the pixel level and often overlook human perception of facial features, which can undermine human interpretability. Our experiment collects human judgments on de-identified data and shows that FFCos achieves a higher correlation with human judgments than pixel-based measures.

3) We propose incorporating artistic sketches to explain facial expressions. According to the psychological theory of facial expressions based on action units (AUs), current explanations of facial expressions focus on irrelevant details. We designed Face Sketch XAI to abstract away irrelevant details and render AUs as artistic sketches. Our findings show that Face Sketch XAI improves the human interpretability of facial expressions.

This research aims to facilitate the application of XAI and privacy protection in facial recognition.