2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)



Thanks to advances in low-cost digital cameras and the popularity of self-recording culture, the amount of visual data on the Internet is growing in the opposite direction of users' available time and patience. As a result, most uploaded videos are doomed to remain unwatched and forgotten in a computer folder or on a website. In this work, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection, formulated as a weighted minimum reconstruction problem, which, combined with a smoothing frame-transition method, accelerates first-person videos while emphasizing the relevant segments and avoiding visual discontinuities. Experiments show that our method fast-forwards videos retaining as much relevant information and smoothness as state-of-the-art techniques, in less time. We also present a new 80-hour multimodal (RGB-D, IMU, and GPS) dataset of first-person videos with annotations for recorder profile, frame scene, activities, interaction, and attention.
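The weighted minimum reconstruction idea behind the frame selection can be illustrated with a simple greedy procedure: repeatedly pick the frame whose descriptor best explains the relevance-weighted video, then remove that direction from the residual. The sketch below is illustrative only, not the paper's actual optimization; the function name `select_keyframes`, the descriptor matrix, and the relevance weights are all assumptions for the example.

```python
import numpy as np

def select_keyframes(features, weights, num_select):
    """Greedy weighted reconstruction-based frame selection (illustrative).

    features: (n_frames, d) array of per-frame descriptors.
    weights: (n_frames,) relevance weights (higher = more relevant).
    num_select: number of frames to keep.
    Returns sorted indices of the selected frames.
    """
    selected = []
    # weight each frame's descriptor by its semantic relevance
    residual = features * weights[:, None]
    for _ in range(num_select):
        # score each candidate frame by how well it explains the residual
        scores = np.abs(residual @ features.T).sum(axis=0)
        scores[selected] = -np.inf  # never pick the same frame twice
        j = int(np.argmax(scores))
        selected.append(j)
        # project the residual away from the chosen frame's direction
        v = features[j] / (np.linalg.norm(features[j]) + 1e-12)
        residual = residual - np.outer(residual @ v, v)
    return sorted(selected)
```

A higher weight on a frame makes its descriptor contribute more to the residual, so frames from relevant segments are favored early in the greedy selection.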

Coming Soon.

Source code (NEW!)


Supplementary Material

Dataset Page

Methodology and Visual Results


@InProceedings{Silva2018,
  title     = {A Weighted Sparse Sampling and Smoothing Frame Transition Approach for Semantic Fast-Forward First-Person Videos},
  booktitle = {2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  author    = {M. M. Silva and W. L. S. Ramos and J. P. K. Ferreira and F. C. Chamone and M. F. M. Campos and E. R. Nascimento},
  year      = {2018},
  address   = {Salt Lake City, USA},
  month     = {Jun.},
  intype    = {to appear in},
  pages     = {},
  volume    = {},
  number    = {},
  doi       = {},
  isbn      = {}
}


We compare the proposed methodology against the following methods:


We conducted the experimental evaluation using the following datasets:


João Pedro Klock Ferreira

Undergraduate Student

Felipe Cadar Chamone

Undergraduate Student
