In recent years, we have witnessed overwhelming growth in both the variety and the quantity of visual content we consume: we watch movies, play digital games, browse the web, and immerse ourselves in virtual and augmented reality through new devices. This growth creates a corresponding demand for people to produce visual content, yet few have the talent or the training to express themselves visually. In particular, many movies and advertisements rely on synthetic characters and virtual environments. In this project, we investigate how to transfer human motion and appearance from one video to another while preserving motion features, body shape, and visual quality, thereby expanding the creative possibilities of visual content.


Dataset


Coming soon

Code


Coming soon

Acknowledgment


The authors would like to thank CAPES, CNPq, FAPEMIG, and the ATMOSPHERE Project for funding different parts of this work. We also thank NVIDIA Corporation for the donation of a Titan Xp GPU used in this research.

Team


Thiago Luange Gomes, PhD Candidate
João Pedro Moreira Ferreira, MSc Student
Rafael Augusto Vieira de Azevedo, Undergraduate Student
Thiago Malta Coutinho, Undergraduate Student
Renato José Martins, Postdoctoral Researcher