
EgoReID Dataset: Person Re-identification in Videos Acquired by Mobile Devices with First-Person Point-of-View

Published 22 Dec 2018 in cs.CV (arXiv:1812.09570v4)

Abstract: In recent years, the performance of video-based person re-identification (ReID) methods has improved considerably. However, most work in this area has dealt with videos acquired by fixed cameras with a wide field of view. The widespread use of wearable cameras and recording devices such as cellphones has recently opened the door to interesting research on first-person point-of-view (POV), i.e. egocentric, videos. Analysis of such videos is nonetheless challenging due to factors such as poor video quality caused by ego-motion, blurriness, severe changes in lighting conditions, and perspective distortions. To facilitate research toward conquering these challenges, this paper contributes a new dataset called EgoReID. The dataset is captured using three mobile cellphones with non-overlapping fields of view. It contains 900 IDs and around 10,200 tracks, with a total of 176,000 detections. The dataset also provides 12-sensor metadata (e.g., camera orientation, pitch, and rotation) for each video. In addition, we propose a new framework that takes advantage of both visual and sensor metadata to perform person ReID. We extend an image-based re-ID method that employs human body parsing, trained on ten datasets, to video-based re-ID: frame-level local features are first extracted for each semantic region, and 3D convolutions are then applied to encode the temporal information in each sequence of semantic regions. Additionally, we employ the sensor metadata to predict a target's next camera and estimated time of arrival, which considerably improves ReID performance by significantly reducing the search space.
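To make the temporal encoding step of the abstract concrete, below is a minimal PyTorch sketch: per-frame local features for each semantic body region are stacked over time, and a shared 3D convolution aggregates each region's sequence into a track-level descriptor. The module name `PartTemporalEncoder`, the tensor shapes, and all hyperparameters are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of the video re-ID temporal encoder described in the abstract.
# Assumption: an upstream body-parsing backbone has already produced
# per-frame feature maps for each semantic region (part) of the body.
import torch
import torch.nn as nn

class PartTemporalEncoder(nn.Module):
    def __init__(self, num_parts=5, in_channels=256, embed_dim=128):
        super().__init__()
        self.num_parts = num_parts
        # One shared 3D conv stack encodes the (time, height, width)
        # volume of each semantic region's feature-map sequence.
        self.temporal = nn.Sequential(
            nn.Conv3d(in_channels, embed_dim, kernel_size=(3, 3, 3), padding=1),
            nn.BatchNorm3d(embed_dim),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # pool over time and space
        )

    def forward(self, part_feats):
        # part_feats: (batch, parts, time, channels, H, W) --
        # frame-level local features for each semantic region.
        b, p, t, c, h, w = part_feats.shape
        x = part_feats.view(b * p, t, c, h, w)
        x = x.permute(0, 2, 1, 3, 4)      # (B*P, C, T, H, W) for Conv3d
        x = self.temporal(x).flatten(1)   # (B*P, embed_dim) per part
        x = x.view(b, p, -1)              # per-part track embeddings
        return x.flatten(1)               # concatenated track descriptor

# Usage: query and gallery track descriptors would be compared with a
# cosine or Euclidean distance; the sensor metadata (next-camera and
# time-of-arrival prediction) would then prune which gallery cameras
# and time windows are searched at all.
feats = torch.randn(2, 5, 8, 256, 16, 8)  # 2 tracks, 5 parts, 8 frames
emb = PartTemporalEncoder()(feats)
print(emb.shape)  # torch.Size([2, 640])
```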
