In recent years, we have witnessed overwhelming growth in both the quantity of visual content we consume and the ways we consume it. We consume visual content when we watch movies, play digital games, browse the web, and immerse ourselves in virtual or augmented reality through new devices. As part of this process, people need to create visual content, but unfortunately, only a few are talented enough to express themselves visually or have received the necessary training. In particular, many movies and advertisements rely on synthetic characters and virtual environments. In this project, the main problem we investigate is how to transfer human motion and appearance from one video to another while preserving motion features, body shape, and visual quality, thereby increasing the creative possibilities of visual content.
The authors would like to thank CAPES, CNPq, FAPEMIG, and ATMOSPHERE PROJECT for funding different parts of this work. We also thank NVIDIA Corporation for the donation of a Titan XP GPU used in this research.
Thiago Luange Gomes, PhD Candidate
João Pedro Moreira Ferreira, MSc Student
Rafael Augusto Vieira de Azevedo, Undergraduate Student
Thiago Malta Coutinho, Undergraduate Student
Renato José Martins, Post-doctoral Researcher