Keynote Speaker
Lei Wang August 12, 2023
Speaker: Prof Woon-Seng Gan, Professor, Nanyang Technological University
Title: Harnessing the Power of Deep Learning for Urban Sound Sensing and Noise Mitigation
Biography
Woon-Seng Gan is a Professor of Audio Engineering and Director of the Smart Nation TRANS (national) Lab in the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore. He received his BEng (1st Class Hons) and PhD degrees, both in Electrical and Electronic Engineering, from the University of Strathclyde, UK, in 1989 and 1993, respectively. He has held several leadership positions at Nanyang Technological University, including Head of the Information Engineering Division from 2011 to 2014 and Director of the Centre for Info-comm Technology from 2016 to 2019. His research concerns the connections between the physical world, signal processing, and sound control, and has resulted in the practical demonstration and licensing of spatial audio algorithms, directional sound beams, and active noise control for headphones and open windows. He has published more than 400 refereed international journal and conference papers and has translated his research into six granted patents. He is a Fellow of the Audio Engineering Society (AES), a Fellow of the Institution of Engineering and Technology (IET), and was selected as an IEEE Signal Processing Society Distinguished Lecturer for 2023-2024. He served as an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP; 2012-15). He is currently a Senior Area Editor of the IEEE Signal Processing Letters (SPL; 2019-), an Associate Technical Editor of the Journal of the Audio Engineering Society (JAES; 2013-), a Senior Editorial Board member of the Asia Pacific Signal and Information Processing Association Transactions on Signal and Information Processing (ATSIP; 2011-), and an Associate Editor of the EURASIP Journal on Audio, Speech, and Music Processing (EJASMP; 2007-). He is also the President-Elect (2023-2024) of the Asia Pacific Signal and Information Processing Association (APSIPA).
Abstract
In the digital age, the integration of sensing, processing, and sound emission into IoT devices has made their economical deployment in urban environments possible. These intelligent sound sensors, such as the Audio Intelligence Monitoring at the Edge (AIME) devices deployed in Singapore, operate 24/7 and adapt to varying environmental conditions. As digital ears complementing the digital eyes of CCTV cameras, these devices provide public agencies with a wealth of aural data, enabling the development of comprehensive and effective sound mitigation policies. In this presentation, we will examine the critical requirements for intelligent sound sensing and explore how deep learning techniques can be used to extract meaningful information, such as the noise type, dominant noise source direction, sound pressure level, and frequency of occurrence of environmental noise. Additionally, we will introduce new deep-learning-based active noise control and mitigation approaches, including reducing the noise entering residential buildings and generating acoustic perfumes to mask annoyance in urban environments, and show how these deep learning models can be deployed in an edge-cloud architecture. Our aim is to demonstrate how deep learning models can advance the field of acoustic sensing and noise mitigation, and to highlight current challenges and trends for future progress.
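To give a concrete sense of the kind of information such sensors extract, the sketch below illustrates two of the quantities named in the abstract: a per-frame sound pressure level estimate and a small deep-learning noise-type classifier applied to a log-spectrum feature. This is a minimal illustration only, not the AIME implementation; the sampling rate, calibration offset, class labels, and the NoiseTypeCNN model are all assumptions made for the example.

```python
# Minimal sketch (not the speaker's AIME system): per-frame sound pressure
# level estimation plus a toy CNN noise-type classifier on a log-spectrum.
# Sampling rate, calibration offset, and class labels are illustrative.
import numpy as np
import torch
import torch.nn as nn

FS = 16000            # assumed sensor sampling rate (Hz)
FRAME = 1024          # analysis frame length (samples)
CALIB_DB = 94.0       # assumed microphone calibration offset (dB re 20 uPa)
CLASSES = ["traffic", "construction", "voices", "birds"]  # illustrative labels

def sound_pressure_level(frame: np.ndarray) -> float:
    """Approximate SPL (dB) of one audio frame from its RMS amplitude."""
    rms = np.sqrt(np.mean(frame ** 2) + 1e-12)
    return 20.0 * np.log10(rms) + CALIB_DB

def log_spectrum(frame: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum used as the classifier input feature."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log(spec + 1e-6).astype(np.float32)

class NoiseTypeCNN(nn.Module):
    """Tiny 1-D CNN over the log spectrum; a stand-in for a trained model."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_bins)
        return self.net(x)

if __name__ == "__main__":
    frame = np.random.randn(FRAME) * 0.01            # stand-in for a mic frame
    feat = torch.from_numpy(log_spectrum(frame)).view(1, 1, -1)
    model = NoiseTypeCNN(n_classes=len(CLASSES))     # untrained, for shape only
    probs = torch.softmax(model(feat), dim=-1).squeeze()
    print(f"SPL ~ {sound_pressure_level(frame):.1f} dB")
    print({c: round(float(p), 3) for c, p in zip(CLASSES, probs)})
```

In an edge-cloud deployment of the kind mentioned in the abstract, a device would typically push compact summaries such as (timestamp, noise type, direction, SPL) tuples to the cloud rather than raw audio, keeping bandwidth and privacy costs low.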