In recent years, we have witnessed an overwhelming growth in both the variety and the quantity of visual content we consume: we consume visual content when we watch movies, play digital games, browse the web, and immerse ourselves in virtual or augmented reality on new devices. As part of this process, people need to create visual content, but unfortunately, few people have the talent or training required to express themselves visually. In particular, many movies and advertisements rely on synthetic characters and virtual environments. In this project, the main problem we aim to investigate is how to transfer human motion and appearance from one video to another while preserving motion features, body shape, and visual quality, thereby expanding the creative possibilities of visual content.



[WACV 2020] Thiago L. Gomes, Renato Martins, João Ferreira, Erickson R. Nascimento. Do As I Do: Transferring Human Motion and Appearance between Monocular Videos with Spatial and Temporal Constraints, IEEE Winter Conference on Applications of Computer Vision (WACV), 2020.
Visit the project page for more information and access to the paper.


The authors would like to thank CAPES, CNPq, FAPEMIG, and the ATMOSPHERE PROJECT for funding different parts of this work. We also thank NVIDIA Corporation for the donation of a Titan Xp GPU used in this research.


Thiago Luange Gomes

PhD Student

João Pedro Moreira Ferreira

MSc Student

Guilherme Alvarenga Torres

Undergraduate Student

Rafael Augusto Vieira de Azevedo

Undergraduate Student

Thiago Martin Poppe

Undergraduate Student

Renato José Martins

Post-doctoral Researcher