{"id":304,"date":"2022-07-19T11:29:33","date_gmt":"2022-07-19T11:29:33","guid":{"rendered":"https:\/\/www.colips.org\/conferences\/iscslp2022\/wp\/?page_id=304"},"modified":"2022-08-22T02:50:15","modified_gmt":"2022-08-22T02:50:15","slug":"keynote","status":"publish","type":"page","link":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/keynote\/","title":{"rendered":"Keynote Speakers"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"304\" class=\"elementor elementor-304\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6689cb1 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6689cb1\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-172dd03\" data-id=\"172dd03\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-5d31091 elementor-widget elementor-widget-heading\" data-id=\"5d31091\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Plenary Speaker 1<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-7b37469 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"7b37469\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div 
class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-03ed478\" data-id=\"03ed478\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-a9485b8 elementor-widget elementor-widget-heading\" data-id=\"a9485b8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Speaker: Dr Jinyu Li, Partner Applied Science Manager, Microsoft<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-55ee077 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"55ee077\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-137328d\" data-id=\"137328d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-4f2ab5b elementor-widget elementor-widget-heading\" data-id=\"4f2ab5b\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Title:  Advancing end-to-end automatic speech recognition and beyond<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section 
elementor-element elementor-element-82ab504 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"82ab504\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-66 elementor-top-column elementor-element elementor-element-7140d0a\" data-id=\"7140d0a\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-80e9e2c elementor-widget elementor-widget-text-editor\" data-id=\"80e9e2c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span lang=\"en-US\"><strong>Biography:<\/strong>\u00a0<\/span><\/p><p><span lang=\"en-US\">Jinyu Li received the Ph.D. degree from Georgia Institute of Technology and joined Microsoft in 2008. Since 2012, he has led the development of deep-learning-based automatic speech recognition technologies for Microsoft products, including both hybrid models and the most recent end-to-end models, enabling Microsoft\u2019s success in industry with state-of-the-art speech recognition products in Cortana, Teams, Xbox, Skype, and more. Currently, he is a Partner Applied Science Manager at Microsoft, leading a team that designs and improves advanced speech modeling algorithms and technologies.\u00a0 His major research interests cover several topics in speech processing, including end-to-end modeling, deep learning, acoustic modeling, speech separation, and noise robustness. He is the lead author of the book \u201cRobust Automatic Speech Recognition &#8212; A Bridge to Practical Applications\u201d, Academic Press, 2015. 
He has been a member of the IEEE Speech and Language Processing Technical Committee and has served as an area chair of ICASSP since 2017. He also served as an associate editor of IEEE\/ACM Transactions on Audio, Speech and Language Processing from 2015 to 2020. He was elected an Industrial Distinguished Leader by the Asia-Pacific Signal and Information Processing Association in 2021.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-98672cd\" data-id=\"98672cd\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-f953a17 elementor-widget elementor-widget-image\" data-id=\"f953a17\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"734\" height=\"918\" src=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-819x1024.jpg\" class=\"attachment-large size-large wp-image-315\" alt=\"\" srcset=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-819x1024.jpg 819w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-240x300.jpg 240w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-768x960.jpg 768w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-1229x1536.jpg 1229w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-1638x2048.jpg 1638w, 
https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-1170x1463.jpg 1170w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Li-3-scaled.jpg 2048w\" sizes=\"(max-width: 734px) 100vw, 734px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Dr. Jinyu Li<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-3e45524 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"3e45524\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-2e35c72\" data-id=\"2e35c72\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-729be74 elementor-widget elementor-widget-text-editor\" data-id=\"729be74\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract:<\/strong><\/p><p><span lang=\"en-US\">The speech community is transitioning from hybrid modeling to end-to-end (E2E) modeling for automatic speech recognition (ASR). While E2E models have achieved state-of-the-art ASR accuracy on most benchmarks, many practical factors affect production deployment decisions, including low-latency streaming, leveraging text-only data, and handling overlapped speech. 
Unless excellent solutions are provided for all of these factors, it is hard for E2E models to be widely commercialized.<\/span><\/p><p><span lang=\"en-US\">In this talk, I will give an overview of recent advances in E2E models, focusing on technologies that address these challenges from an industry perspective. To design a high-accuracy, low-latency E2E model, a masking strategy was introduced into the Transformer Transducer. I will discuss technologies that leverage text-only data, both for general model training via pretraining and for adaptation to a new domain via augmentation and factorization. I will then extend E2E modeling to streaming multi-talker ASR. I will also show how we go beyond ASR by extending E2E ASR learning to a new area, speech translation, and build high-quality E2E speech translation models even without any human-labeled speech translation data. Finally, I will conclude the talk with some new research opportunities we may work on.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-876d878 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"876d878\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-6ee9710\" data-id=\"6ee9710\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-caff2d3 elementor-widget elementor-widget-heading\" data-id=\"caff2d3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Plenary Speaker 2<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-0d63d99 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"0d63d99\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-d1031f0\" data-id=\"d1031f0\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-c4005e2 elementor-widget elementor-widget-heading\" data-id=\"c4005e2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Speaker: Prof Eng Siong Chng, Associate Professor, Nanyang Technological University <span style=\"font-family: var( --e-global-typography-primary-font-family ), Sans-serif;font-weight: var( --e-global-typography-primary-font-weight )\"><\/span><br><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-2a35979 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"2a35979\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column 
elementor-element elementor-element-f302348\" data-id=\"f302348\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-a4039b7 elementor-widget elementor-widget-heading\" data-id=\"a4039b7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Title: Recent progress in code-switch Singapore English+Mandarin large vocabulary continuous speech recognition\n<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5c6dcf1 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5c6dcf1\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-7b92544\" data-id=\"7b92544\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-b1bc095 elementor-widget elementor-widget-text-editor\" data-id=\"b1bc095\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span lang=\"EN-US\"><strong>Biography:<\/strong> <\/span><\/p><p><span lang=\"EN-US\">Dr Chng Eng Siong is currently an Associate Professor in the School of Computer Science and Engineering (SCSE), Nanyang Technological University (NTU), Singapore. 
Prior to joining NTU in 2003, he worked in several research centers\/companies, namely: Knowles Electronics (USA), Lernout and Hauspie (Belgium), Institute of Infocomm Research (I2R, Singapore), and RIKEN (Japan), with a focus on signal processing and speech research. He received his PhD and BEng (Hons) degrees from Edinburgh University, U.K., in 1996 and 1991 respectively. <\/span><\/p><p><span lang=\"EN-US\">His research currently focuses on speech recognition using DNN frameworks, low resources, noisy conditions, and adaptation to target domains (accent, use-cases). Additionally, he explores multilingual code-switching speech recognition, such as English\/Mandarin and English\/Malay.<\/span><\/p><p><span lang=\"EN-US\">To date, he has been a Principal Investigator of research grants awarded by Alibaba, NTU-Rolls Royce, Mindef, MOE, and AStar, with a total funding amount of over S$10 million, under the \u201cSpeech and Language Technology Program (SLTP)\u201d at SCSE. He has graduated 17 PhD students and 10 Master of Engineering students. His publications include 2 edited books and over 100 journal\/conference papers. 
He has served as the publication chair for 5 international conferences (Human-Agent Interaction 2016, INTERSPEECH 2014, APSIPA-2010, APSIPA-2011, ISCSLP-2006) and on the local organizing committee of ASRU 2019.<\/span><\/p><p><span style=\"font-family: var( --e-global-typography-text-font-family ), Sans-serif; font-weight: var( --e-global-typography-text-font-weight );\">Homepage: <a href=\"https:\/\/personal.ntu.edu.sg\/aseschng\/intro1.html\">https:\/\/personal.ntu.edu.sg\/aseschng\/intro1.html<\/a><\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-1955807\" data-id=\"1955807\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-5a91693 elementor-widget elementor-widget-image\" data-id=\"5a91693\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"734\" height=\"979\" src=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-768x1024.jpg\" class=\"attachment-large size-large wp-image-378\" alt=\"\" srcset=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-768x1024.jpg 768w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-225x300.jpg 225w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-1152x1536.jpg 1152w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-1536x2048.jpg 1536w, 
https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-1170x1560.jpg 1170w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/07\/Chng_Eng_Siong-scaled.jpg 1920w\" sizes=\"(max-width: 734px) 100vw, 734px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Prof Eng Siong Chng <\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5d7004d elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5d7004d\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-028e05b\" data-id=\"028e05b\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-f9b49f6 elementor-widget elementor-widget-text-editor\" data-id=\"f9b49f6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span lang=\"EN-US\"><strong>Abstract:<\/strong><\/span><\/p><p><span lang=\"EN-US\">Modern speech recognition has a long history, stretching back to the 1970s. 
It received renewed interest, and a significant improvement in recognition performance, with the injection of DNN approaches into acoustic modeling in 2013, and has lately transformed from the traditional Acoustic+Language+Decoder approach to end-to-end systems that have almost reached state-of-the-art performance.<\/span><br \/><span lang=\"EN-US\"><br \/>In this talk, we will share our experience in developing code-switching English\/Mandarin speech recognition, as well as recent advances in this field.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-c0a7b84 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"c0a7b84\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-0ff7111\" data-id=\"0ff7111\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-0ba3318 elementor-widget elementor-widget-heading\" data-id=\"0ba3318\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Plenary Speaker 3<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-4feb59e elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"4feb59e\" data-element_type=\"section\" 
data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-5d937cf\" data-id=\"5d937cf\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-40e36af elementor-widget elementor-widget-heading\" data-id=\"40e36af\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Speaker:  Kate Knill, Principal Research Associate, University of Cambridge<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6bfe700 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6bfe700\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-f833b3d\" data-id=\"f833b3d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-185d233 elementor-widget elementor-widget-heading\" data-id=\"185d233\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">Title:  Automated Assessment and Feedback: the Role of Spoken Grammatical Error 
Correction<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-241b6c8 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"241b6c8\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-1e086a3\" data-id=\"1e086a3\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-30e6330 elementor-widget elementor-widget-text-editor\" data-id=\"30e6330\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Biography:\u00a0<\/strong><\/p><p><span lang=\"EN-US\">Dr. Kate Knill is a Principal Research Associate at the Department of Engineering and the Automatic Language Teaching and Assessment Institute (ALTA), Cambridge University. She is the Principal Investigator for the ALTA Spoken Language Processing (SLP) Technology Project. Kate was sponsored by Marconi Underwater Systems Ltd for her 1st class B.Eng. (Jt. Hons) degree in Electronic Engineering and Maths at Nottingham University and her PhD in Digital Signal Processing at Imperial College. She has worked for over 25 years on spoken language processing, developing automatic speech recognition and text-to-speech synthesis systems in industry and academia. 
As an individual researcher and a leader of multi-disciplinary teams as Languages Manager, Nuance Communications, and Assistant Managing Director, Toshiba Research Europe Ltd, Cambridge Research Lab, she has developed speech systems for over 50 languages and dialects. Her current research focus is on applications for non-native spoken English language assessment and learning and detection of speech and language disorders. She was Secretary of the International Speech Communication Association (ISCA) (2017-2021) and is a member of ISCA, the Institution of Engineering and Technology (IET) and Institute of Electrical and Electronic Engineers (IEEE).<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-18d2f6e\" data-id=\"18d2f6e\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d4b8509 elementor-widget elementor-widget-image\" data-id=\"d4b8509\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"734\" height=\"944\" src=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-796x1024.jpeg\" class=\"attachment-large size-large wp-image-574\" alt=\"\" srcset=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-796x1024.jpeg 796w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-233x300.jpeg 233w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-768x988.jpeg 768w, 
https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-1194x1536.jpeg 1194w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-1592x2048.jpeg 1592w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-1170x1505.jpeg 1170w, https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-content\/uploads\/2022\/08\/Kate-Knill1132s1-scaled.jpeg 1990w\" sizes=\"(max-width: 734px) 100vw, 734px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Dr Kate Knill<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-1f7845c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"1f7845c\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-56e17d9\" data-id=\"56e17d9\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f23fe22 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f23fe22\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7f5eb8d\" data-id=\"7f5eb8d\" 
data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9364e58 elementor-widget elementor-widget-text-editor\" data-id=\"9364e58\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract:<\/strong><\/p><p>Automated assessment and feedback can support the studies of well over\u00a0one billion learners of English as a second language (L2)\u00a0worldwide. Their use for speaking skills is growing as deep learning\u00a0and the rise of mobile devices makes providing computer assisted language learning (CALL) 24\/7 increasingly feasible. One of the\u00a0key elements in second language acquisition is grammatical construction;\u00a0as a learner&#8217;s proficiency improves so does the complexity of their\u00a0grammar. Spoken Grammatical Error Correction (SGEC) is designed to\u00a0detect and correct grammatical errors in free speech. These\u00a0corrections can either be used to aid assessment of a candidate&#8217;s\u00a0ability or, more directly, as feedback to learners of the errors they\u00a0are making. Applying grammatical error correction to speech has a number of\u00a0challenges. Firstly, spoken grammar is not entirely the same as written\u00a0grammar, and speech contains disfluencies that need to be identified\u00a0and ignored. Secondly, whilst there are increasing text corpora labelled for GEC\u00a0the amount for speech is minimal. Additionally, SGEC must be run on transcriptions from ASR which will contain errors. In this talk, we will discuss how these problems can be addressed and deep learning\u00a0systems built to assess and feedback SGEC. 
The challenges remaining will\u00a0also be presented.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Plenary Speaker 1 Speaker: Dr Jinyu Li, Partner Applied Science Manager, Microsoft Title: Advancing end-to-end automatic speech recognition and beyond Biography:\u00a0 Jinyu Li received the Ph.D. degree from Georgia Institute of Technology and joined Microsoft in 2008. He has led&#8230;<br \/><a class=\"read-more-button\" href=\"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/keynote\/\">Read more<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-304","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/pages\/304","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/comments?post=304"}],"version-history":[{"count":215,"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/pages\/304\/revisions"}],"predecessor-version":[{"id":622,"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/pages\/304\/revisions\/622"}],"wp:attachment":[{"href":"https:\/\/www.colips.org\/conferences\/iscslp2022\/web\/wp-json\/wp\/v2\/media?parent=304"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}