Semantic Video Moments Retrieval at Scale: A New Task and a Baseline

Published 15 Oct 2022 in cs.CV (arXiv:2210.08389v1)

Abstract: Motivated by the increasing need to save search effort by retrieving relevant video clips rather than whole videos, we propose a new task, Semantic Video Moments Retrieval at scale (SVMR), which aims to find relevant videos and re-localize the target clips within them. Rather than a simple combination of video retrieval and video re-localization, the task is more challenging in several essential respects. In the first stage, SVMR must account for two facts: 1) a positive candidate long video can contain plenty of irrelevant clips that are nonetheless semantically meaningful, and 2) a long video can be positive for two entirely different query clips if it contains clips relevant to both. The second, re-localization stage also departs from the assumptions of existing video re-localization tasks, which presume that the reference video contains segments semantically similar to the query clip. In our scenario, by contrast, a retrieved long video can be a false positive due to the inaccuracy of the first stage. To address these challenges, we propose a two-stage baseline: candidate video retrieval followed by a novel attention-based query-reference semantic alignment framework that re-localizes target clips within the candidate videos. Furthermore, we build two more appropriate benchmark datasets from the off-the-shelf ActivityNet-1.3 and HACS for a thorough evaluation of SVMR models. Extensive experiments show that our solution outperforms several reference solutions.
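
The abstract describes a two-stage pipeline: candidate video retrieval, then attention-based query-reference alignment for re-localization, with the twist that a retrieved video may contain no matching moment at all. Below is a minimal PyTorch sketch of that overall shape, not the paper's implementation: the module names (CandidateRetriever, AttentionReLocalizer), feature dimensions, mean pooling, scoring head, and rejection threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateRetriever(nn.Module):
    """Stage 1 (sketch): rank long videos by cosine similarity between
    the pooled query-clip embedding and pooled per-video embeddings."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, query_feat: torch.Tensor, video_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (D,) pooled query-clip feature
        # video_feats: (V, T, D) per-segment features of V candidate videos
        q = F.normalize(self.proj(query_feat), dim=-1)               # (D,)
        v = F.normalize(self.proj(video_feats.mean(dim=1)), dim=-1)  # (V, D)
        return v @ q                                                 # (V,) retrieval scores

class AttentionReLocalizer(nn.Module):
    """Stage 2 (sketch): cross-attention aligns each reference-video
    segment with the query clip and scores it; a low maximum score
    lets the caller reject the video as a stage-1 false positive."""
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, query_feats: torch.Tensor, ref_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (1, Tq, D); ref_feats: (1, Tr, D)
        aligned, _ = self.attn(ref_feats, query_feats, query_feats)
        return self.score(aligned).squeeze(-1)                       # (1, Tr) segment scores

# Toy end-to-end pass: retrieve top-k candidate videos, then localize,
# rejecting candidates whose best segment score stays below a threshold.
retriever, localizer = CandidateRetriever(), AttentionReLocalizer()
query_clip = torch.randn(1, 16, 512)    # 16-segment query clip
videos = torch.randn(100, 64, 512)      # 100 candidate long videos
topk = retriever(query_clip.mean(dim=1).squeeze(0), videos).topk(5).indices
for i in topk.tolist():
    seg_scores = localizer(query_clip, videos[i : i + 1])
    if seg_scores.max().item() < 0.5:   # illustrative threshold
        continue                        # treat as a stage-1 false positive
    # contiguous high-scoring segments would form the retrieved moment
```

The explicit rejection branch reflects the task's key departure from standard re-localization as stated in the abstract: unlike prior settings, a candidate video is not guaranteed to contain a segment matching the query.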

Authors (1)

  1. Na Li 
