CS SEMINAR

Foundations of Multisensory Artificial Intelligence

Speaker
Paul Liang, Assistant Professor, MIT Media Lab and EECS
Chaired by
Dr LING Chun Kai, Assistant Professor, School of Computing
lingck@comp.nus.edu.sg

Monday, 12 Aug 2024, 12:30 PM to 1:30 PM

MR20, COM3-02-59

Abstract:

Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for scientific and practical impact, such as supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, I will discuss my research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models over many modalities and tasks. In the first part, I will present a theoretical framework formalizing how modalities interact with one another to give rise to new information for a task. These interactions are the basic building blocks of all multimodal problems, and quantifying them enables users to understand their multimodal datasets and design principled approaches to learn them. In the second part, I will present my work on cross-modal attention and multimodal transformer architectures, which now underpin many of today’s multimodal foundation models. Finally, I will discuss our collaborative efforts in scaling AI to many modalities and tasks for real-world impact on affective computing, mental health, and cancer prognosis.
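For attendees unfamiliar with the cross-modal attention mentioned above, the sketch below illustrates the general idea in PyTorch: tokens from one modality form the queries while tokens from another modality supply the keys and values, so information flows across modalities. This is a minimal illustrative example only, not the speaker's specific architecture; the class name, dimensions, and choice of modalities are assumptions made for the demonstration.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-modal attention block: tokens from modality A
    (e.g. text) attend to tokens from modality B (e.g. video features)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Queries come from modality A; keys and values come from modality B,
        # so each A-token gathers information from the other modality.
        fused, _ = self.attn(query=a, key=b, value=b)
        return self.norm(a + fused)  # residual connection + layer norm

# Example: 8 text tokens attending to 20 video-frame features, embedding dim 64.
text = torch.randn(1, 8, 64)
video = torch.randn(1, 20, 64)
out = CrossModalAttention(dim=64)(text, video)
print(out.shape)  # torch.Size([1, 8, 64])
```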

Bio:

Paul Liang is an Assistant Professor at the MIT Media Lab and MIT EECS. His research advances the foundations of multisensory artificial intelligence to enhance the human experience. He is a recipient of the Siebel Scholars Award, the Waibel Presidential Fellowship, the Facebook PhD Fellowship, the Center for ML and Health Fellowship, a Rising Stars in Data Science award, and three best paper awards. Outside of research, he received the Alan J. Perlis Graduate Student Teaching Award for instructing courses on multimodal machine learning.