
Egocentric Action Recognition by Video Attention and Temporal Context

Published 3 Jul 2020 in cs.CV | (2007.01883v1)

Abstract: We present the submission of Samsung AI Centre Cambridge to the CVPR2020 EPIC-Kitchens Action Recognition Challenge. In this challenge, action recognition is posed as the problem of simultaneously predicting a single 'verb' and 'noun' class label given an input trimmed video clip. That is, a 'verb' and a 'noun' together define a compositional 'action' class. The challenging aspects of this real-life action recognition task include small fast-moving objects, complex hand-object interactions, and occlusions. At the core of our submission is a recently-proposed spatio-temporal video attention model, called 'W3' ('What-Where-When') attention~\cite{perez2020knowing}. We further introduce a simple yet effective contextual learning mechanism to model 'action' class scores directly from long-term temporal behaviour based on the 'verb' and 'noun' prediction scores. Our solution achieves strong performance on the challenge metrics without using object-specific reasoning or extra training data. In particular, our best solution with multimodal ensemble achieves the 2$^{nd}$ best position for 'verb', and 3$^{rd}$ best for 'noun' and 'action' on the Seen Kitchens test set.
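The abstract frames each 'action' as a compositional (verb, noun) pair scored from separate verb and noun predictions. As a minimal illustrative sketch of that compositional scoring (an independence-based baseline, not the paper's learned temporal-context mechanism), one can combine per-class verb and noun log-probabilities into a joint action score matrix:

```python
import numpy as np

def compose_action_scores(verb_logits, noun_logits):
    """Combine verb and noun classifier logits into a compositional
    (verb, noun) action score matrix using log-probabilities.
    Illustrative baseline assuming verb/noun independence; the paper
    instead learns action scores from long-term temporal context."""
    def log_softmax(x):
        x = x - x.max()               # numerical stability
        return x - np.log(np.exp(x).sum())

    lv = log_softmax(np.asarray(verb_logits, dtype=float))
    ln = log_softmax(np.asarray(noun_logits, dtype=float))
    # action[i, j] = log p(verb=i) + log p(noun=j)
    return lv[:, None] + ln[None, :]

# Hypothetical logits for 3 verb classes and 4 noun classes
scores = compose_action_scores([2.0, 0.5, -1.0], [1.0, 1.0, 0.0, -2.0])
best_verb, best_noun = np.unravel_index(scores.argmax(), scores.shape)
```

The joint scores form a proper distribution over all verb-noun pairs, so the top-scoring pair can be read off directly with an argmax over the matrix.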

