Keynote Talks

Maxine Eskenazi, Carnegie Mellon University

Short Bio
Dr. Eskenazi is a Principal Systems Scientist in the Language Technologies Institute at Carnegie Mellon University. Her interests lie in intelligent agents and dialog. She is particularly interested in ensuring that the user has a major role in dialog system development and evaluation. She is a recent recipient of the ISCA Fellow award.

Title: User-centric dialog
Abstract:
Some recent research has turned from being agent-centric to being user-centric. This paradigm shift is important if we are to create systems acceptable to the general population of users. In this talk we will begin with the reasoning behind user-centric research. We will then look at concrete ways to apply this point of view to system training and evaluation. Finally, we will address user-centric strategies for dealing with a malevolent user.

Video
https://drive.google.com/file/d/1r4LJ88-Kfx7W4LKZv9lANK5L7LuCsZui/view?usp=sharing

Helen Hastie, Heriot-Watt University

Short Bio: Helen Hastie is a Professor of Computer Science at Heriot-Watt University, Director of the EPSRC Centre for Doctoral Training in Robotic and Autonomous Systems at the Edinburgh Centre for Robotics, and Academic Lead for the National Robotarium, opening in 2022 in Edinburgh. She is currently PI on the UKRI Trustworthy Autonomous Systems Node on Trust, is HRI theme lead for the EPSRC ORCA Hub, and recently held a Royal Academy of Engineering/Leverhulme Senior Research Fellowship. Her field of research is multimodal and spoken dialogue systems, human-robot interaction, and trustworthy autonomous systems. She was Coordinator of the EU project PARLANCE, has over 100 publications, and has held positions on many scientific committees and advisory boards, including recently for the Scottish Government AI Strategy.

Title: Trustworthy Interactive Robots
Abstract: Trust is a multifaceted, complex phenomenon that is not well understood when it occurs between humans, let alone between humans and robots. Robots that portray social cues, including voice, gestures and facial expressions, are key tools in researching human-robot trust, specifically how trust is established, lost and regained. In this talk, I will discuss various aspects of trust for HRI including language, social cues, embodiment, transparency, mental models and theory of mind. I will present a number of studies performed in the context of two large projects: the UKRI Trustworthy Autonomous Systems Programme, specifically the Node on Trust; and the EPSRC ORCA Hub for robotic and autonomous systems for remote hazardous environments. This work will be contextualised around the new National Robotarium opening soon in Edinburgh.

Video
https://drive.google.com/file/d/1bK7sH2-9gTH8aIhalslTUPZoZcuGCu0y/view?usp=sharing

Jinho D. Choi, Emory University

Short Bio: Dr. Choi has been active in the field of Natural Language Processing (NLP). He has presented many state-of-the-art NLP models that automatically derive various linguistic structures from plain text. These models are publicly available in the NLP toolkit called ELIT. He has also led the Character Mining project and introduced novel machine comprehension tasks for explicit and implicit understanding in multiparty dialogue. On the application side, Dr. Choi has developed innovative biomedical NLP models in collaboration with several medical fields, including radiology, neurology, transplant, and nursing. His latest research focuses on building the conversational AI-based chatbot called Emora, which aims to be a daily companion in everyone’s life. With Emora, Dr. Choi’s team won first place in the Alexa Prize Socialbot Grand Challenge 3, which came with a $500,000 cash award.

Title: Alexa Prize and Beyond: the Future of Chatbots
Abstract: Developing a robust dialogue system for open-domain conversations is challenging because it is difficult to collect enough data to train deep learning models that cover a variety of topics, and there is no “ground truth” way of conducting open-domain conversations that satisfies a wide range of people. Even the evaluation of dialogue management is often subjective (and thus biased), which adds another level of difficulty to enhancing open-domain dialogue systems. In this talk, I will first illustrate the limitations of state-of-the-art dialogue systems using the latest transformer models, as well as of top-ranked bots from the Alexa Prize Socialbot Grand Challenge. I will then introduce our inference-driven dialogue management framework and discuss its extension to deep learning-based dialogue models. Finally, I will present real-life applications of open-domain dialogue management that we are currently working on in education and healthcare.

Video
https://drive.google.com/file/d/1omAJp8JjGjX7wjaWEfZYKp37Bw9t_J94/view?usp=sharing