BodySpeech: A configurable facial and gesture animation system for speaking avatars
Author(s)
Fernández Baena, Adso
Antonijoan Tresens, Marc
Montaño Aparicio, Raúl
Fusté Lleixà, Anna
Amores Fernandez, Judith
Other authors
Universitat Ramon Llull. La Salle
Publication date
2013-07
Abstract
Speaking avatars are present in many Human-Computer Interaction (HCI) applications. Their importance lies in communicative goals, which entail interaction with other avatars in virtual worlds, or in marketing, where they have become useful in customer push strategies. Generating automatic and plausible animations from speech cues has become a challenge. We present BodySpeech, an automatic system to generate gesture and facial animations driven by speech. Body gestures are aligned with pitch accents and selected based on the strength relation between speech and body gestures. Concurrently, facial animation is generated for lip sync, adding emphatic hints according to intonation strength. Furthermore, we have implemented a tool for animators. This tool makes it possible to modify the detection of pitch accents and the influence of intonation strength on output animations, allowing animators to define the activation of gestural performances.
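The abstract describes a pipeline in which body gestures are aligned with detected pitch accents and selected according to intonation strength. A minimal sketch of that idea is shown below; all names, data structures, and the matching rule are illustrative assumptions, not taken from the BodySpeech implementation.

```python
# Hypothetical sketch: trigger a gesture at each sufficiently strong pitch
# accent, choosing the gesture whose annotated strength best matches the
# accent's intonation strength. Names and thresholds are illustrative only.

from dataclasses import dataclass


@dataclass
class PitchAccent:
    time: float       # accent time in seconds
    strength: float   # normalized intonation strength in [0, 1]


@dataclass
class Gesture:
    name: str
    strength: float   # annotated gesture strength in [0, 1]


def select_gestures(accents, gesture_db, threshold=0.5):
    """For each pitch accent above the activation threshold, pick the
    gesture whose annotated strength is closest to the accent strength."""
    timeline = []
    for accent in accents:
        if accent.strength < threshold:
            continue  # weak accents trigger no gesture
        best = min(gesture_db, key=lambda g: abs(g.strength - accent.strength))
        timeline.append((accent.time, best.name))
    return timeline


accents = [PitchAccent(0.4, 0.9), PitchAccent(1.1, 0.3), PitchAccent(2.0, 0.6)]
gestures = [Gesture("beat_small", 0.4), Gesture("beat_large", 0.85)]
print(select_gestures(accents, gestures))
# [(0.4, 'beat_large'), (2.0, 'beat_small')]
```

The threshold plays the role of the animator-configurable activation described in the abstract: raising it suppresses gestures on weak accents.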
Document type
Conference object
Language
English
Subjects (UDC)
62 - Engineering. Technology
Keywords
Human-robot interaction
Computer animation
Pages
7 p.
Published in
International Conference on Computer Graphics and Virtual Reality, Las Vegas, 22-25 July 2013
Rights
© The author(s). All rights reserved.