Keynote

Title: Model Inversion in Deep Neural Networks

Abstract
Given a machine learning model trained on a private dataset, under what circumstances, and to what extent, can an adversary reconstruct private training samples by exploiting access to the trained model? This Model Inversion (MI) problem has significant privacy implications and could pose a critical threat to machine learning models. As these models are increasingly deployed in applications involving sensitive data—such as face recognition, automatic speaker recognition, medical diagnosis, and security—it is essential to understand the potential risks posed by the unauthorized reconstruction of private training samples.
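
As a concrete, heavily simplified illustration of this threat model, a generic white-box MI attack can be sketched as gradient ascent on the input of a released classifier, synthesizing an input that the model assigns to a chosen identity with high confidence. The PyTorch sketch below is only an illustrative toy under assumed names (a trained classifier `model`, an identity index `target_class`, and arbitrary hyperparameters); it is not the method of the cited papers.

    import torch

    def invert(model, target_class, shape=(1, 3, 64, 64), steps=500, lr=0.1):
        """Toy white-box inversion: optimize an input to maximize one class logit."""
        model.eval()
        x = torch.zeros(shape, requires_grad=True)    # start from a blank image
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = -model(x)[:, target_class].mean()  # push the target-class logit up
            loss.backward()
            opt.step()
            with torch.no_grad():
                x.clamp_(0.0, 1.0)                    # keep pixels in a valid range
        return x.detach()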

In this talk, I will discuss our work on studying MI attacks [1, 2], MI defenses [3], and MI-resilient architecture designs [4] to shed light on this critical privacy threat in modern deep neural networks.

[1] NB Nguyen, K Chandrasegaran, M Abdollahzadeh, NM Cheung. Re-thinking Model Inversion Attacks Against Deep Neural Networks. CVPR-2023.
[2] NB Nguyen, K Chandrasegaran, M Abdollahzadeh, NM Cheung. Label-Only Model Inversion Attacks via Knowledge Transfer. NeurIPS-2023.
[3] ST Ho, KJ Hao, K Chandrasegaran, NB Nguyen, NM Cheung. Model Inversion Robustness: Can Transfer Learning Help? CVPR-2024.
[4] JH Koh, ST Ho, NB Nguyen, NM Cheung. On the Vulnerability of Skip Connections to Model Inversion Attacks. ECCV-2024.

Bio
Ngai-Man (Man) Cheung is an Associate Professor at the Singapore University of Technology and Design (SUTD). He received his Ph.D. degree in Electrical Engineering from the University of Southern California (USC), Los Angeles, CA. His Ph.D. research focused on image and video coding and was supported in part by NASA-JPL. He was a postdoctoral researcher with the Image, Video and Multimedia Systems group at Stanford University, Stanford, CA. He was a core team member of the NRF Foundational Research Capabilities Team for AI and an AI Advisor to the Smart Nation and Digital Government Office (SNDGO) in Singapore.

His research has resulted in more than 200 papers and 14 granted U.S. patents, with several pending. Two of his inventions have been licensed to companies, and one of his research results has led to a SUTD spinoff on AI for healthcare. His research has also been featured in the National Artificial Intelligence Strategy.

He has received several research recognitions, including being a Best Paper Finalist at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 and a finalist for the Super AI Leader (SAIL) Award at the World AI Conference (WAIC) 2019 in Shanghai, China.

His research interests are Signal and Image Processing, Computer Vision, and AI.