2020 IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

Abstract

Technological advances in sensors have paved the way for digital cameras to become increasingly ubiquitous, which, in turn, has led to the popularity of the self-recording culture. As a result, the amount of visual data on the Internet is growing while users' available time and patience shrink. Thus, most uploaded videos are doomed to be forgotten, stashed away unwatched in some computer folder or website. In this paper, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem. By smoothing frame transitions and filling visual gaps between segments, our approach accelerates first-person videos while emphasizing the relevant segments and avoiding visual discontinuities. Experiments conducted on controlled videos and on an unconstrained dataset of First-Person Videos (FPVs) show that, when creating fast-forward videos, our method retains as much relevant information and smoothness as the state-of-the-art techniques, but in less processing time.
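To give an intuition for the frame-selection idea described above, the toy sketch below selects a sparse subset of frames whose feature vectors reconstruct the whole video's feature content via a greedy weighted least-squares procedure. This is only an illustrative simplification, not the authors' actual formulation: the function name, the use of the mean feature as the reconstruction target, and the greedy solver are all assumptions made for this example.

```python
import numpy as np

def select_frames(F, k, weights=None):
    """Toy sparse frame selection (NOT the paper's exact method).

    Greedily picks k frames whose feature vectors best reconstruct
    the mean feature of the whole video (least-squares residual),
    optionally biased by per-frame relevance weights.

    F:       (n_frames, feat_dim) matrix of per-frame features.
    k:       number of frames to keep.
    weights: optional (n_frames,) relevance scores (e.g. semantics).
    """
    n, d = F.shape
    if weights is None:
        weights = np.ones(n)
    target = F.mean(axis=0)          # feature summary of the full video
    selected = []
    residual = target.copy()
    for _ in range(k):
        # Score each frame by weighted correlation with the residual.
        scores = weights * (F @ residual)
        scores[selected] = -np.inf   # never pick the same frame twice
        j = int(np.argmax(scores))
        selected.append(j)
        # Re-fit the target on the selected frames; update the residual.
        S = F[selected].T            # (feat_dim, |selected|)
        coef, *_ = np.linalg.lstsq(S, target, rcond=None)
        residual = target - S @ coef
    return sorted(selected)          # chronological order for playback
```

A usage sketch: compute per-frame CNN features, pass semantic relevance scores as `weights`, and play back only the returned indices to obtain an accelerated video that favors relevant segments.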

Official Publication

Source code (NEW!)

ArXiv (NEW!)

Supplementary material: Video

Methodology and Visual Results

Citation

@ARTICLE{Silva2020tpami,
author = {M. {Silva} and W. {Ramos} and M. {Campos} and E. R. {Nascimento}},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
title = {A Sparse Sampling-Based Framework for Semantic Fast-Forward of First-Person Videos},
year = {2020},
volume = {},
number = {},
pages = {},
doi = {10.1109/TPAMI.2020.2983929},
ISSN = {0162-8828},
note = {accepted for publication}
}

Baselines

We compare the proposed methodology against the following methods:

Datasets

We conducted the experimental evaluation using the following datasets:

Authors

M. Silva, W. Ramos, M. Campos, and E. R. Nascimento
