Personalized breath based biometric authentication with wearable multimodality

Published 29 Oct 2021 in cs.LG, cs.SD, and eess.AS | arXiv:2110.15941v1

Abstract: Breathing, characterized by nose sound features, has been shown to be a potential biometric for personal identification and verification. In this paper, we show that information from additional modalities, captured by motion sensors on the chest, can further improve performance beyond audio features alone. Our work makes three main contributions: hardware creation, dataset publication, and proposed multimodal models. Specifically, we design new hardware consisting of an acoustic sensor that collects audio features from the nose, together with an accelerometer and gyroscope that capture chest movement caused by an individual's breathing. Using this hardware, we publish a dataset collected over multiple sessions from different volunteers; each session includes three common gestures: normal, deep, and strong breathing. Finally, we experiment with two multimodal models based on Convolutional Long Short-Term Memory (CNN-LSTM) and Temporal Convolutional Network (TCN) architectures. The results demonstrate the suitability of our new hardware for both verification and identification tasks.
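To make the multimodal setup concrete, below is a minimal numpy sketch of the core TCN building block (a causal dilated 1D convolution) applied to an early fusion of the two modalities. All shapes are assumptions for illustration (2 audio channels, 6 IMU channels for the 3-axis accelerometer and gyroscope, 5 enrolled subjects); this is not the authors' implementation, and the weights here are random rather than trained.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    # x: (c_in, T) signal; w: (c_out, c_in, k) kernel; returns (c_out, T).
    # Causal: the output at time t depends only on inputs at times <= t.
    c_out, c_in, k = w.shape
    T = x.shape[1]
    pad = (k - 1) * dilation
    xp = np.pad(x, ((0, 0), (pad, 0)))        # left-pad so no future samples leak in
    y = np.zeros((c_out, T))
    for t in range(T):
        for j in range(k):                     # tap j looks j * dilation steps back
            y[:, t] += w[:, :, j] @ xp[:, t + pad - j * dilation]
    return y

# Hypothetical inputs: 2 audio channels + 6 IMU channels (accel + gyro),
# early-fused by stacking along the channel axis.
rng = np.random.default_rng(0)
audio = rng.standard_normal((2, 100))
imu = rng.standard_normal((6, 100))
x = np.concatenate([audio, imu], axis=0)       # (8, 100)

# Two dilated layers with ReLU; the second dilation widens the receptive field.
w1 = 0.1 * rng.standard_normal((16, 8, 3))
w2 = 0.1 * rng.standard_normal((16, 16, 3))
h = np.maximum(causal_dilated_conv1d(x, w1, dilation=1), 0.0)
h = np.maximum(causal_dilated_conv1d(h, w2, dilation=2), 0.0)

# Identification head: global average pool over time, softmax over 5 subjects.
w_out = 0.1 * rng.standard_normal((5, 16))
logits = w_out @ h.mean(axis=1)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # per-subject probabilities, shape (5,)
```

For verification rather than identification, the same pooled embedding could instead be compared against a single enrolled template (e.g. by cosine similarity and a threshold); the paper evaluates both task formulations.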

Citations (7)
