Publications

SYNCHRONIZATION OF SPEECH GENERATION FOR VIRTUAL HUMANS

This project focuses on real-time generation of synchronized visual and oral speech in a Content-To-Speech model. Classical speech synthesis takes plain text input, which is processed to obtain the most reliable phonetic transcription. By adopting the Content-To-Speech (CTS) model, which is richer than the classical Text-To-Speech model, we propose a new layer on top of the language model that directly connects the content obtained from the CTS model with the synthesiser and the face of the Virtual Human. From this new layer, we accurately generate oral and visual speech through phonetic transcription. We also propose a new lip movement description model.
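The core idea of driving the face from the phonetic transcription can be illustrated with a simple sketch: timed phonemes are mapped to lip poses (visemes) that an animation system can key-frame. The phoneme set, viseme names, and timings below are hypothetical simplifications, not the project's actual lip movement description model.

```python
# Illustrative sketch: map a timed phonetic transcription to lip poses.
# The phoneme-to-viseme table and timings are hypothetical examples.

PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",   # bilabial closure
    "f": "dental", "v": "dental",                  # lip-teeth contact
    "a": "open", "e": "mid", "o": "round", "u": "round",
}

def visemes_for(transcription):
    """Turn (phoneme, start_ms, end_ms) triples into viseme keyframes,
    falling back to a neutral pose for unmapped phonemes."""
    return [(PHONEME_TO_VISEME.get(ph, "neutral"), start, end)
            for ph, start, end in transcription]

keyframes = visemes_for([("o", 0, 120), ("l", 120, 180), ("a", 180, 320)])
```

Because each keyframe keeps the phoneme's start and end times, the same timeline can drive both the audio synthesiser and the facial animation, which is what keeps the two channels synchronized.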



Software for autonomous navigation in unstructured environments

This project develops an autonomous navigation system for terrestrial mobile robots based on SLAM, which maps the environment and guides the robot's navigation. A video of this project can be viewed in the videos section of my portfolio.
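The mapping side of a grid-based SLAM system can be sketched in a few lines: each range reading is projected from the robot's pose into world coordinates and the corresponding occupancy-grid cell is marked. The grid size, cell resolution, and sensor model below are hypothetical simplifications of what a real SLAM pipeline would use.

```python
# Minimal occupancy-grid sketch: mark the endpoint of a range reading
# as an occupied cell. Resolution and sensor model are illustrative only.
import math

def mark_hit(grid, robot_xy, bearing_rad, range_m, cell_size=0.1):
    """Project a single range reading into the grid and mark it occupied."""
    x = robot_xy[0] + range_m * math.cos(bearing_rad)
    y = robot_xy[1] + range_m * math.sin(bearing_rad)
    i, j = int(y / cell_size), int(x / cell_size)
    if 0 <= i < len(grid) and 0 <= j < len(grid[0]):
        grid[i][j] = 1
    return grid

grid = [[0] * 20 for _ in range(20)]
mark_hit(grid, (0.5, 0.5), 0.0, 1.0)  # obstacle one metre ahead
```

A full system would accumulate many such readings probabilistically and couple the map update with pose estimation, which is the "simultaneous" part of SLAM.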



Phone

(034) 93-348 7507

Address

Carrer 22, num. 18, 2n 1a.
Castelldefels, Barcelona 08860
Catalonia