Name
Technical Session - 8
Date & Time
Tuesday, November 16, 2021, 11:15 AM - 12:45 PM
Ha Nguyen, Aansh Malik, Clayton Mosher, Manal Siddiqui
Description

 Exploring Realtime Conversational Virtual Characters

Speakers: Ha T Nguyen and Aansh Malik

  • Advancements in Artificial Intelligence (AI) such as Speech-To-Text, Language Understanding Models, Language Generation Models, and Text-To-Speech enable many applications, one of which is real-time conversational Virtual Characters. Building an end-to-end framework with the right AI technology components enables relatable, multi-dimensional Virtual Characters who can converse naturally in creatively controlled domains while consistently maintaining their state and personality within pre-determined narratives. In this work, we designed such a conversational framework with interchangeable, loosely coupled components to support granular creative detail in character performance, efficiency in the mass creation of Virtual Characters, and flexibility to embrace future improvements to each component in the field. We then evaluated the robustness and modularity of the framework by creating Melodie, a Virtual Character who is fond of music and is a fan and promoter of the Eurovision Song Contest. With Melodie, we went through the full cycle: processing a speaker's audio signals, generating a proper response with a Natural Language Generation model, synthesizing the response in the character's Voice Font, and finally synchronizing the synthesized response with corresponding body and facial movements to produce a coherent, believable character performance. Testing and analyzing the implementation of Melodie surfaced areas of improvement and ethical considerations that are, and will continue to be, essential to the design of our future applications involving Virtual Characters.
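The abstract's framework of interchangeable, loosely coupled components could be sketched as a chain of stages behind a common interface. This is a minimal, hypothetical illustration, not the authors' implementation: all class names and behaviors are stand-ins for real Speech-To-Text, Natural Language Generation, and Text-To-Speech services.

```python
# Hypothetical sketch of a loosely coupled conversational pipeline.
# Each stage is swappable as long as it exposes a run() method,
# mirroring the "interchangeable components" design in the abstract.

class SpeechToText:
    def run(self, audio: bytes) -> str:
        # A real implementation would call an STT service;
        # here the "audio" is stand-in pre-transcribed text.
        return audio.decode("utf-8")

class ResponseGenerator:
    def __init__(self, persona: str):
        self.persona = persona

    def run(self, text: str) -> str:
        # A real implementation would query a language generation model
        # constrained to the character's domain and personality.
        return f"{self.persona} says: I heard '{text}'"

class TextToSpeech:
    def run(self, text: str) -> bytes:
        # A real implementation would synthesize audio in the character's
        # Voice Font; here we simply encode the text.
        return text.encode("utf-8")

class ConversationPipeline:
    """Chains the stages; any stage can be replaced independently."""
    def __init__(self, stt, nlg, tts):
        self.stt, self.nlg, self.tts = stt, nlg, tts

    def respond(self, audio: bytes) -> bytes:
        return self.tts.run(self.nlg.run(self.stt.run(audio)))

pipeline = ConversationPipeline(
    SpeechToText(), ResponseGenerator("Melodie"), TextToSpeech()
)
reply = pipeline.respond(b"Who won Eurovision?")
print(reply.decode("utf-8"))
```

Because each stage only depends on the interface of its neighbor, an improved model for any one stage can be dropped in without touching the others, which is the flexibility the abstract describes.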

Using Biometric Signals to Improve Storytelling

Speakers: Brian Wellner and Clayton P Mosher

  • Consumer wearable, Bluetooth-enabled biometric devices are becoming more reliable at collecting physiological data and more accessible to the average consumer. Building a technology platform with the right devices, components, and algorithms can enable content creators to gather insights into how viewers engage with their content. Content can also be created with the intention of letting viewers interact with and control storylines through their physiological responses. This paper outlines the framework by which this type of data can be collected, the types of stimuli that lend themselves to insight gathering and content creation, and how the data can be processed.
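One simple form the processing step could take is flagging moments where a viewer's physiological signal rises above its recent baseline. The sketch below is purely illustrative and not from the paper: the window size, threshold, and simulated heart-rate values are all assumptions.

```python
# Hypothetical sketch: flag moments of elevated viewer response in a
# stream of heart-rate samples from a wearable. Window and threshold
# values are illustrative assumptions, not from the paper.

from statistics import mean

def engagement_spikes(samples, window=5, threshold=1.15):
    """Return indices where a sample exceeds `threshold` times the
    rolling baseline of the previous `window` samples."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if samples[i] > baseline * threshold:
            spikes.append(i)
    return spikes

# Simulated beats-per-minute readings during a scene.
bpm = [70, 71, 69, 70, 72, 71, 90, 92, 73, 71]
print(engagement_spikes(bpm))  # the two elevated readings stand out
```

In an interactive-storytelling setting, indices like these could be mapped back to timestamps in the content, either to report which scenes drove engagement or to trigger a branch in the storyline.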

Pitfalls, Progress & Opportunity – Part 1

Speakers: Manal Siddiqui, Nikki Pope and Miriam Vogel

  • What are some of the defining characteristics of the Machine Learning and Artificial Intelligence space today? How do those characteristics influence or exaggerate bias or exclusion? How do we define bias in this context? What can we do to mitigate it in the design process and remove it from existing models and algorithms? What are the best practices, and why are they important to the future of the products and services that you produce, supply, and support? Join this incredible panel of ML and AI experts as they discuss the current state of the field and offer thought-provoking possibilities for the future.