Keynote Speeches

Keynote speech 1

Title: Challenges in machine learning for NLP

Speaker: Prof Yue Zhang, Westlake University


In this talk, I will briefly review recent progress in the field of natural language processing and the currently dominant method, which shows strong potential for automatic question answering but also reveals some fragility. Then, starting from the question answering task, I will discuss recent work on probing what such systems actually learn, revealing some limitations in the process. I will further discuss several other pieces of evidence for these limitations, before presenting the challenges that follow from them. The talk concludes with these challenges as the main issues to solve for building a robust NLP system.


Yue Zhang is currently an associate professor at Westlake University. Before joining Westlake in 2018, he worked as an assistant professor at the Singapore University of Technology and Design and as a research associate at the University of Cambridge. Yue Zhang received his PhD degree from the University of Oxford in 2009 and his BEng degree from Tsinghua University, China, in 2003. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, sentiment analysis, text generation, machine translation, and dialogue systems. He serves as an action editor for the Transactions of the Association for Computational Linguistics (TACL), and has served as an area chair for ACL (2017–2021), EMNLP (2015, 2017, 2019–2021), COLING (2014, 2018), and NAACL (2015, 2019, 2021). He has given several tutorials at ACL, EMNLP, NAACL, and OxML, and has won awards at SemEval 2020 (best paper honorable mention), COLING 2018 (best paper), and IALP 2017 (best paper).

Keynote speech 2

Title: Recent Trends and Challenges in Speaker Recognition

Speaker: Dr. Kong Aik LEE


The pervasive penetration of intelligent systems into every corner of our lives has created a huge volume of data at the interaction points between humans and sensors, among which speech data is one of the most prevalent forms. Speech signals contain a rich source of information: the time-varying patterns of speech sound carry personal traits, such as age, gender, ethnic origin, physical health condition, mental condition, emotional state, and the identity of the speaker. While the primary purpose of speech communication is to convey thoughts and ideas, such para-linguistic cues have long been used extensively in human communication. This talk aims to give a broad overview of recent trends in speaker recognition within the wider context of vocal information processing and para-linguistic information extraction. With the advent of big data and the resurgence of data-hungry modeling techniques such as artificial neural networks, the research focus has shifted from controlled scenarios towards larger and more realistic "speakers in the wild" scenarios. Nevertheless, open challenges remain. Topics such as domain-invariant learning, self-supervised learning, voice biometric security, and privacy may continue to drive this field forward in the future.


Kong Aik Lee is currently a Senior Scientist at the Agency for Science, Technology and Research (A*STAR), Singapore. He was a Senior Principal Researcher at the Data Science Research Laboratories, NEC Corporation, Japan, from 2018 to 2020. He received his Ph.D. degree from Nanyang Technological University, Singapore, in 2006, after which he joined the Institute for Infocomm Research, Singapore, as a Research Scientist and later as a Strategic Planning Manager (concurrent appointment). He was the recipient of the Singapore IES Prestigious Engineering Achievement Award 2013 for his contribution to voice biometrics technology, the Outstanding Service Award at IEEE ICME 2020, and the 2021 A*STAR CRF (UIBR) Award. He was the Lead Guest Editor for the CSL Special Issue on "Two decades into Speaker Recognition Evaluation – are we there yet?" He currently serves as an Editorial Board Member for Elsevier Computer Speech and Language (2016 – present) and was an Associate Editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing (2017 – 2021). He is an elected member of the IEEE Speech and Language Processing Technical Committee (2019 – 2024) and was the General Chair of the Speaker Odyssey 2020 Workshop. His research focuses on the automatic and para-linguistic analysis of speaker characteristics, including speaker recognition, language and accent recognition, speaker diarization, voice biometrics, and anti-spoofing countermeasures.