PH.D. DEFENCE - PUBLIC SEMINAR

Optimizing Transformer Models for Improved Inference Efficiency and Compactness

Speaker
Mr. Zhang Zining
Advisor
Dr. He Bingsheng, Professor, School of Computing


22 Jul 2024, Monday, 04:00 PM to 05:30 PM

TR9, COM2 01-08

Abstract:

Balancing accuracy, latency, and model size is an ongoing challenge for deep learning researchers. Large neural networks, such as transformer models, are particularly difficult to deploy because of their high memory consumption and slow inference. Existing approaches rely on techniques such as knowledge distillation, pruning, quantization, and tiling, but these often trade away performance or accuracy, or fail to generalize across platforms.

In this seminar, we introduce several novel techniques that address these challenges by leveraging reinforcement learning or scoring methods for tiling, full post-training quantization, and aggressive pruning. These methods achieve better data locality through improved tiling of operator computations, more efficient quantization of parameters, and effective pruning of network structures, yielding significant improvements in inference efficiency. Moreover, they adapt flexibly to different platforms and varying accuracy requirements, making them useful across a wide range of applications.
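The abstract does not describe implementation details, but to make the post-training quantization idea concrete, the following is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The scheme (a single scale per tensor, derived from the maximum absolute weight) and the function names (quantize_int8, dequantize) are illustrative assumptions, not taken from the speaker's work.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 post-training quantization.

    Maps float weights onto [-127, 127] with a single scale derived
    from the tensor's maximum absolute value; no retraining needed.
    (Illustrative sketch; not the speaker's actual method.)
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor for accuracy checks."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"scale={scale:.6f}, max abs error={err:.6f}")
```

Real post-training quantization pipelines typically refine this basic scheme, for example with per-channel scales or calibration data to choose clipping ranges, which is where accuracy-preserving gains of the kind described in the abstract come from.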

Our proposed techniques offer several advantages over existing solutions, including improved model efficiency, faster inference, and preserved accuracy, making them promising for deploying complex transformer models, such as vision transformers and language transformers, on resource-constrained devices. Experimental evaluations demonstrate their effectiveness compared to existing approaches, highlighting their potential to enable advances across a range of application domains.

Overall, these techniques contribute significantly to addressing the challenges posed by current transformer models, showing that reinforcement learning or scoring methods for tiling, post-training quantization, and pruning can deliver substantial gains in inference efficiency without sacrificing accuracy. Future research could integrate these techniques with other optimization strategies and investigate their applicability to other types of neural networks.