
Robotic Detection and Estimation of Single Scuba Diver Respiration Rate from Underwater Video

Published 24 Nov 2023 in cs.RO (arXiv:2311.14848v1)

Abstract: Human respiration rate (HRR) is an important physiological metric for diagnosing a variety of health conditions from stress levels to heart conditions. Estimation of HRR is well-studied in controlled terrestrial environments, yet robotic estimation of HRR underwater as an indicator of diver stress for underwater human-robot interaction (UHRI) scenarios is, to our knowledge, unexplored. We introduce a novel system for robotic estimation of HRR from underwater visual data by utilizing bubbles from exhalation cycles in scuba diving to time respiration rate. We introduce a fuzzy labeling system that utilizes audio information to label a diverse dataset of diver breathing data, on which we compare four different methods for characterizing the presence of bubbles in images. Ultimately we show that our method is effective at estimating HRR by comparing the respiration-rate output with that of human analysts.


Summary

  • The paper introduces a robotic system that estimates a scuba diver’s respiration rate by detecting exhalation bubbles from underwater video data.
  • It employs methods like SVM, CNN, and CNN-LSTM, with CNN achieving around 97% detection accuracy in various conditions.
  • Results indicate that CNN-LSTM performs best in complex visual environments, paving the way for reliable, non-contact health monitoring in underwater operations.


The paper, authored by Demetrious T. Kutzke and Junaed Sattar, presents a novel approach for robotic estimation of a scuba diver’s human respiration rate (HRR) using underwater visual data. The focus of the study is to utilize exhalation bubbles captured in video footage to determine the respiration cycle, given the lack of existing research on underwater HRR estimation for underwater human-robot interaction (UHRI) scenarios.

Context and Importance

HRR is a critical physiological metric linked to conditions ranging from stress and exposure to extreme environments to lung and heart issues. Traditional HRR measurement methods rely heavily on contact-based technologies that are impractical underwater, due to signal attenuation and a requirement for direct skin contact that diving gear obstructs. Consequently, there is a significant gap in real-time, automated health monitoring for divers.

Proposed Solution

The authors’ solution leverages visual data captured by an autonomous underwater vehicle (AUV) to identify and count exhalation bubbles, thereby estimating the respiration rate. This estimation method could potentially assist in mitigating risks associated with underwater operations, such as excessive gas consumption and catastrophic health failures.

Methodology

The system integrates two primary components:

  1. Bubble Detection: Identification of bubbles in video frames using binary classification methods.
  2. Respiration Rate Tracking: Calculation of time intervals between detected exhalation bubbles to estimate HRR.

To facilitate bubble detection, the authors introduce an audio signal classification system for initial labeling, along with three image-based methods: a Support Vector Machine (SVM), a Convolutional Neural Network (CNN), and a Convolutional Long Short-Term Memory network (CNN-LSTM).
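The paper does not spell out the exact rate-tracking algorithm here, but the second component above can be sketched minimally. The sketch assumes a per-frame boolean bubble signal at a known frame rate, treats each rising edge (no-bubble frame followed by a bubble frame) as an exhalation onset, and converts the median inter-onset interval to breaths per minute; the function name and defaults are illustrative, not the authors'.

```python
def respiration_rate(bubble_frames, fps=30.0):
    """Estimate breaths per minute from a per-frame bubble detection signal.

    bubble_frames: sequence of 0/1 detections, one per video frame (assumed input).
    fps: video frame rate in Hz.
    Returns 0.0 if fewer than two exhalation onsets are found.
    """
    # An exhalation onset is a rising edge: a no-bubble frame followed by a bubble frame.
    onsets = [i for i in range(1, len(bubble_frames))
              if bubble_frames[i] and not bubble_frames[i - 1]]
    if len(onsets) < 2:
        return 0.0
    # The median inter-onset gap (in frames) is robust to a single missed or spurious breath.
    gaps = sorted(b - a for a, b in zip(onsets, onsets[1:]))
    median_gap = gaps[len(gaps) // 2]
    return 60.0 * fps / median_gap
```

For example, at 30 fps, onsets spaced 120 frames apart correspond to 15 breaths per minute.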

Data Collection

Diver respiration data was collected across varied underwater environments—closed-water pools, freshwater lakes, and seawater—ensuring diverse conditions in terms of light, visibility, and diver gear. Approximately 40,000 images and several million audio samples constitute the dataset, providing a robust basis for training and evaluating the detection algorithms.
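The paper's fuzzy labeling system uses the audio track to label image frames, though its exact design is not detailed in this summary. One plausible scheme, sketched here purely as an assumption, exploits the fact that regulator exhaust is loud: compute the short-time mean-square energy of the audio aligned to each video frame and map it through a piecewise-linear fuzzy membership between two (hypothetical) thresholds.

```python
def fuzzy_bubble_labels(audio, sample_rate, fps, lo=0.01, hi=0.05):
    """Assign each video frame a soft bubble label in [0, 1] from audio energy.

    audio: mono samples in [-1, 1]; sample_rate in Hz; fps: video frame rate.
    lo/hi: energy thresholds (hypothetical values) mapped to labels 0 and 1.
    """
    samples_per_frame = int(sample_rate / fps)
    labels = []
    for start in range(0, len(audio) - samples_per_frame + 1, samples_per_frame):
        window = audio[start:start + samples_per_frame]
        energy = sum(x * x for x in window) / len(window)  # mean-square energy
        # Piecewise-linear fuzzy membership: 0 below lo, 1 above hi, linear between.
        labels.append(min(1.0, max(0.0, (energy - lo) / (hi - lo))))
    return labels
```

Soft labels like these can then be thresholded (or used directly) to supervise the image-based classifiers.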

Results

The study evaluates the detection accuracy of the SVM, CNN, and CNN-LSTM methods. Overall, the CNN achieved the highest classification accuracy (~97%), slightly outperforming the CNN-LSTM. Despite the CNN's superior detection accuracy, the CNN-LSTM demonstrated better HRR estimation in more complex visual conditions, such as seawater environments.

The HRR estimation results compared against human analysts showed mixed performance. The CNN-LSTM model exhibited the lowest error in seawater conditions, highlighting its potential for capturing temporal breathing patterns. Nevertheless, errors were significant in certain freshwater videos, indicating limitations in generalization across varied underwater scenarios.
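The comparison against human analysts implies a per-video error metric; the authors' exact metric is not given here, but a natural (assumed) choice is the mean absolute error in breaths per minute across analyzed videos:

```python
def hrr_mae(estimates, analyst_rates):
    """Mean absolute error (breaths/min) between model and analyst HRR readings.

    estimates, analyst_rates: paired per-video rates (hypothetical inputs).
    """
    if len(estimates) != len(analyst_rates):
        raise ValueError("one estimate per analyzed video is required")
    return sum(abs(e - a) for e, a in zip(estimates, analyst_rates)) / len(estimates)
```

A low MAE in seawater but a high MAE on some freshwater videos would produce exactly the mixed picture the paper reports.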

Implications

The practical implications of this research are substantial: it provides a foundation for developing AUV systems capable of enhancing diver safety through real-time monitoring, and potentially improving communication efficiency by timing signal exchanges to diver inhalations. On a theoretical level, this work paves the way for further investigation of non-contact physiological monitoring in aquatic environments.

Future Directions

There are several promising directions for future research:

  1. Advanced Training Datasets: Expanding training datasets to include more realistic and diverse diving scenarios to improve algorithmic robustness.
  2. Refined Detection Algorithms: Integrating more sophisticated models and techniques to reduce false positives and enhance detection accuracy under challenging conditions.
  3. Hybrid Models: Combining visual and acoustic data more effectively to exploit their complementary strengths in reducing ambiguities.
  4. Extended Analysis: Exploring the applicability of this approach in different UHRI contexts beyond diving, such as submarine crew monitoring or autonomous underwater exploration.

In conclusion, the presented work represents a significant stride toward addressing a previously unexplored problem in underwater robotics, offering innovative solutions with broad implications for diver safety and underwater communication.
