PH.D. DEFENCE - PUBLIC SEMINAR

Towards Human-Inspired Responsible AI

Speaker
Mr. Zhang Wencan
Advisor
Dr Brian Lim Youliang, Associate Professor, School of Computing


05 Feb 2024 Monday, 03:00 PM to 04:30 PM

MR20, COM3-02-59

Abstract

As AI-driven systems are increasingly developed to assist end-users, the need for effective human-AI interaction has grown significantly. However, the black-box nature of complex machine learning models makes them difficult for lay users to understand and trust, creating an urgent need to build responsible AI systems for various application contexts. Recently, eXplainable Artificial Intelligence (XAI) has taken a first step towards meeting these needs. However, the resulting explanations are usually too complex for users to interpret and fail to satisfy the requirements of different stakeholders, especially lay users. Furthermore, it is essential to develop robust and privacy-preserving AI systems to better cater to users' needs.

To address these issues, we advocate for the development of human-inspired responsible AI systems that can think and interpret like humans. Inspired by human cognitive theories, the first part of this thesis proposes an XAI perceptual processing framework that provides relatable explanations and explores their varying usage and usefulness. The proposed solution also offers insights for creating and evaluating relatable XAI in other perception applications. The second part addresses the challenge of providing reliable explanations even under biased data, where we introduce Debiased-CAM and validate its improved truthfulness. This method provides a versatile platform for achieving robust performance and explanations under data biases. The final part tackles the privacy assessment problem by delving into human behavior, where we simulate the familiarity effect in human face recognition and uncover interpretable principles. This method not only offers valuable insights into understanding human behavior with computational methods but also raises caution for privacy evaluation schemes. In summary, we explore several aspects of human-inspired responsible AI systems. Our research lays the foundation for communicating AI knowledge to humans and creating more responsible AI with human intelligence.