
Post-Fall Floor-Occupancy Detection

Updated 1 February 2026
  • Post-fall floor-occupancy detection is a technology that leverages radar, vision, and sensor fusion to accurately identify individuals lying on the floor after a fall event.
  • Core sensing modalities include FMCW radar and RGB/RGB-D cameras, each using advanced signal processing, beamforming, and machine learning to overcome challenges like static clutter and occlusions.
  • Multimodal and privacy-preserving architectures integrate sensor data with temporal fusion techniques, enabling real-time alerts and targeted interventions in long-term care facilities.

Post-fall floor-occupancy detection refers to the automated identification of individuals lying on the floor after a fall event, using sensing modalities such as radar, RGB/RGB-D cameras, or multimodal sensor fusion. This problem is characterized by minimal subject motion—making it distinct from general activity or fall event detection—and presents significant challenges in environments with static clutter, occlusions, or privacy constraints typical of long-term care (LTC) facilities. Reliable post-fall occupancy detection is essential for minimizing missed alarms and reducing false positives, thereby enabling prompt and targeted interventions for vulnerable populations.

1. Core Sensing Modalities and System Architectures

Post-fall floor-occupancy detection is addressed through several sensor technologies and architectural paradigms, each with unique operational constraints and tradeoffs.

FMCW Radar-Based Systems

Low-cost FMCW radars, such as the Infineon XENSIV™ BGT60TR13C (1Tx–3Rx, 60 GHz) and multi-radar TI IWR1843 arrays, are widely employed for quasi-static floor-occupancy detection because of their non-invasive, privacy-preserving operation. Their signal processing pipelines (spatial beamforming, Doppler enhancement, and CFAR-style hypothesis testing) are detailed in Section 2.

Vision-Based Approaches

Vision-based systems leverage RGB/RGB-D cameras combined with pose estimation frameworks (e.g., MediaPipe) and classical or deep learning classifiers to recognize lying postures.

Multimodal and Privacy-Preserving Architectures

Recent frameworks employ multi-stage decision pipelines, combining wearable IMU sensor thresholds, wireless localization, robotic navigation, and on-board vision confirmation. All raw vision data processing is performed locally to preserve privacy, and federated learning is used for wearable-device classifiers (Azghadi et al., 14 Jul 2025).

2. Signal Processing and Detection Algorithms

The detection pipeline for radar-based post-fall occupancy follows a sequence of spatial and temporal filtering, beamforming, and hypothesis testing.

Spatial Processing: Beamforming

  • Vendor Digital Beamforming (DBF): Utilizes fixed phase weights for azimuth/elevation steering:

$$P_\textrm{DBF}(r, \theta, \phi; k) = \sum_{m=1}^{3} z_m(r, k) \cdot w_m(\theta, \phi)$$

Output is collapsed across elevation to yield a range–azimuth (RA) map (Trinh et al., 25 Jan 2026).
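As an illustrative sketch of the fixed-weight beamformer (azimuth only, with the elevation dimension already collapsed), the following NumPy code assumes a uniform linear array with half-wavelength spacing; the function name and interface are ours, not the vendor's:

```python
import numpy as np

def dbf_ra_map(z, angles, d=0.5):
    """Fixed-weight (delay-and-sum) beamforming over a 3-Rx array.

    z      : complex (3, n_range) range-FFT outputs for one chirp k
    angles : azimuth steering angles theta, in radians
    d      : element spacing in wavelengths (half-wavelength assumed)
    Returns a range-azimuth power map of shape (n_range, n_angles).
    """
    n_rx, _ = z.shape
    m = np.arange(n_rx)[:, None]                      # element index m
    # fixed phase weights w_m(theta) for a uniform linear array
    w = np.exp(-2j * np.pi * d * m * np.sin(angles)[None, :])
    beams = z.T @ w.conj()                            # sum_m z_m * w_m
    return np.abs(beams) ** 2
```

A target at broadside produces in-phase returns across the three channels, so the beam steered to 0 rad accumulates coherently while other steering angles partially cancel.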

  • Adaptive Capon/MVDR Beamforming: Covariance-aware suppression of multipath and static clutter:

$$P_\textrm{Capon}(r, \theta) = \frac{1}{a(\theta)^\mathsf{H} R_r^{-1} a(\theta)}$$

with $R_r$ the spatial covariance estimated from near-zero-Doppler bins (Trinh et al., 25 Jan 2026, Trinh et al., 25 Jan 2026).

Doppler Amplification

  • RASSO: Nonlinear, invertible Doppler-domain remapping accentuates micro-Doppler associated with respiration or subtle posture adjustments:

$$D = \operatorname{sgn}(f) \cdot \frac{f_e}{\ln 2} \ln\left(1 + \frac{|f|}{f_e}\right)$$

Applied prior to spatial processing, this boosts SNR and localizes static/lying targets (Trinh et al., 25 Jan 2026).
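The remapping itself is a one-liner; below is a sketch together with its inverse, confirming the invertibility claimed in the text (function names are ours; f_e is the knee frequency of the warp):

```python
import numpy as np

def rasso_remap(f, f_e):
    """RASSO Doppler-axis remapping: invertible log compression that
    expands the near-zero-Doppler region where respiration and small
    posture adjustments live."""
    return np.sign(f) * f_e / np.log(2) * np.log1p(np.abs(f) / f_e)

def rasso_inverse(D, f_e):
    """Inverse mapping, recovering the original Doppler frequency."""
    return np.sign(D) * f_e * (np.exp(np.abs(D) * np.log(2) / f_e) - 1)
```

Note the fixed point at |f| = f_e: frequencies below the knee are stretched, those above are compressed, and the mapping stays strictly monotone, so no Doppler information is lost.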

Detection: CA-CFAR and Data-Driven Models

  • CA-CFAR: A 2D sliding window computes the mean noise level over training cells, declaring a detection wherever the signal exceeds $k$ times this mean. CA-CFAR parameters (guard band, training band) are tuned for a desired frame-level FPR $\leq 0.1$ (Trinh et al., 25 Jan 2026, Trinh et al., 25 Jan 2026).
  • Morphological Filtering: Binary detection masks are postprocessed to remove speckle and enforce a minimum-area constraint (e.g., $\geq 12$ pixels) for valid floor occupancy (Trinh et al., 25 Jan 2026).
  • CNN and CNN-LSTM Classifiers: RA maps (single-frame or sequence) are classified using shallow 2D CNNs or sequence models, achieving macro-F1 up to 0.99 on nursing-home datasets (Trinh et al., 25 Jan 2026).
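A straightforward (unoptimized) 2D CA-CFAR sketch; the guard/training sizes and threshold k below are illustrative defaults, not the tuned values from the papers:

```python
import numpy as np

def ca_cfar_2d(power, guard=2, train=4, k=3.0):
    """2D cell-averaging CFAR on a range-azimuth power map.

    For each cell under test (CUT), noise is estimated as the mean of a
    surrounding training band, excluding a guard band and the CUT itself;
    a detection is declared where power > k * noise.  Border cells that
    lack a full window are left undetected for simplicity.
    """
    n_r, n_a = power.shape
    half = guard + train
    det = np.zeros_like(power, dtype=bool)
    for i in range(half, n_r - half):
        for j in range(half, n_a - half):
            window = power[i - half:i + half + 1, j - half:j + half + 1]
            inner = power[i - guard:i + guard + 1, j - guard:j + guard + 1]
            noise = (window.sum() - inner.sum()) / (window.size - inner.size)
            det[i, j] = power[i, j] > k * noise
    return det
```

Because the threshold scales with the local noise estimate, the false-alarm rate stays roughly constant across range bins with different clutter levels, which is the property the frame-level FPR tuning relies on.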

Vision-Based Temporal Fusion

  • Instantaneous prone pose (“Pose6”) is defined via:

$$\min\{\, |\theta(t)|,\; |\theta(t) - \pi| \,\} < \delta_\theta, \qquad R(t) > \tau_R$$

with $\theta(t)$ the torso-vector orientation and $R(t)$ the skeleton aspect ratio (Riahi et al., 17 May 2025).

  • Temporal occupancy is confirmed by requiring:
    • $\geq 3$ s of continuous prone pose
    • $\geq 2$ s of motion drop
    • Overlapping time windows
  • Cooldown logic ensures at most one alert per 5-minute interval (Riahi et al., 17 May 2025).
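The three temporal rules plus the cooldown can be sketched as a small per-frame state machine. The class name, frame rate, and boolean per-frame inputs are our own framing, while the 3 s / 2 s / 5 min thresholds follow the text; requiring both run-length counters to be satisfied at the same frame enforces the overlapping-window condition:

```python
class FallConfirmer:
    """Per-frame temporal confirmation of a post-fall floor occupancy."""

    def __init__(self, fps=10.0, prone_s=3.0, still_s=2.0, cooldown_s=300.0):
        self.fps, self.prone_s, self.still_s = fps, prone_s, still_s
        self.cooldown_s = cooldown_s
        self._prone = self._still = 0          # consecutive-frame counters
        self._last_alert = float('-inf')

    def update(self, t, is_prone, is_still):
        """Feed one frame at time t (seconds); return True to raise an alert."""
        # run-length counters reset on any frame that breaks the condition
        self._prone = self._prone + 1 if is_prone else 0
        self._still = self._still + 1 if is_still else 0
        confirmed = (self._prone >= self.prone_s * self.fps
                     and self._still >= self.still_s * self.fps)
        if confirmed and t - self._last_alert >= self.cooldown_s:
            self._last_alert = t               # start 5-minute cooldown
            return True
        return False
```

At 10 FPS, a subject who stays prone and motionless triggers exactly one alert once the longer (3 s) counter is satisfied, and further confirmed frames are absorbed by the cooldown.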

3. Datasets, Environments, and Evaluation Protocols

Realistic and Diverse Scenarios

  • Radar-based experiments are conducted in fully furnished LTC room reconstructions, with randomized subject/furniture placement to simulate multipath diversity. Subjects rotate among several floor postures and positions, yielding tens of thousands of frames per study (Trinh et al., 25 Jan 2026, Trinh et al., 25 Jan 2026).
  • Vision and RGB-D datasets include the public FPDS set (6,982 images across eight rooms) and the IASLAB-RGBD Fallen Person Dataset, with both staged and freely cluttered living spaces (Azghadi et al., 14 Jul 2025, Antonello et al., 2017).

Metrics and Scoring

  • Frame-level True-Positive Rate (TPR_frame): Fraction of truly occupied frames correctly detected.
  • Frame-False-Positive Rate (FPR_frame): Fraction of empty frames with any false detection.
  • Macro-F1, accuracy, precision, recall: Standard metrics, applied framewise or to clusters.
  • Area under the curve (AUC) and SNR improvement ($\Delta$SNR): Used in radar evaluation to quantify detection robustness and beam sharpness; e.g., RASSO-RA increases SNR from 6.88 dB to 9.55 dB (Trinh et al., 25 Jan 2026).
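The two frame-level rates reduce to a few lines of NumPy (the function name is ours):

```python
import numpy as np

def frame_metrics(y_true, y_pred):
    """Frame-level rates as defined above: TPR_frame is the fraction of
    truly occupied frames detected; FPR_frame is the fraction of empty
    frames carrying any (false) detection."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tpr = (y_true & y_pred).sum() / max(y_true.sum(), 1)
    fpr = (~y_true & y_pred).sum() / max((~y_true).sum(), 1)
    return tpr, fpr
```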

4. Comparative Performance and System Tradeoffs

Quantitative Comparison Table

| Method / Modality | Mean F1 / Accuracy | Notable Design Choices |
|---|---|---|
| Capon + CA-CFAR radar (Trinh et al., 25 Jan 2026) | TPR_frame = 0.916 at FPR_frame ≤ 0.1 | MVDR replaces vendor DBF; 2D CA-CFAR |
| RASSO + Capon radar (Trinh et al., 25 Jan 2026) | F1 = 0.98–0.99 (sequence), AUC = 0.981 | Doppler warp; CNN(-LSTM) on RA maps |
| Multi-radar tracking (Shen et al., 2024) | Acc = 96.3%, F1 = 0.967 (fall detection) | Three synchronized FMCW radars; SNR-aware DBSCAN |
| ElderFallGuard, vision (Riahi et al., 17 May 2025) | F1 = 1.00 (custom test set) | Prone pose + motion drop + RF classifier |
| YOLO-based robot vision (Azghadi et al., 14 Jul 2025) | Acc = 96.3% (RF) / 84.2% mAP50 (end-to-end) | YOLO + feature postprocessing/classifier |
| Patch–SVM, RGB-D (Antonello et al., 2017) | Cluster F1 = 0.88–0.91 | Supervoxels; two-stage SVM; map/multiview |

Radar-based systems show high reliability in well-instrumented environments, with Capon/MVDR approaches and RASSO-based enhancement achieving statistically significant gains over vendor DBF or naive framewise detection. Vision-based pipelines achieve perfect classification on constrained datasets but are susceptible to occlusion and privacy limitations in real deployment.

5. Practical Limitations and Deployment Considerations

Radar-Specific Constraints

  • Static and Multipath Clutter: Reflections from furniture dominate when subject motion is minimal; Capon beamforming and RASSO help, but performance can degrade in extreme clutter (Trinh et al., 25 Jan 2026).
  • Angular Resolution: Three-element arrays are limited in separating targets at fine azimuths; more elements or multi-radar fusion offers improvement (Shen et al., 2024).
  • Motionlessness: A completely still subject produces no micro-Doppler, so its static return is indistinguishable from clutter; sequence-based classifiers mitigate this but cannot overcome the physical absence of signal (Trinh et al., 25 Jan 2026).

Vision-Based and Multimodal Systems

  • Privacy: All on-device inference/prediction; no raw video streams leave the local network (Azghadi et al., 14 Jul 2025).
  • Lighting and Occlusion: Pure RGB-D or depth-only fusion approaches (e.g., two-stage SVM on supervoxels) remain robust to ambient illumination but can be challenged by heavy occlusion or reflective surfaces (Antonello et al., 2017).
  • Real-Time Operation: Measured throughput (e.g., 7–10 FPS for the SVM pipeline on a standard laptop; <0.03 s per inference for YOLO variants on a Jetson Orin) permits responsive alerting in active care environments.

6. Emerging Directions and Prospective Enhancements

  • Adaptive Regularization: Diagonal loading in Capon/MVDR for improved stability under low-sample statistics (Trinh et al., 25 Jan 2026).
  • Multidimensional (Range–Azimuth–Doppler) CFAR: Joint hypothesis testing across the entire RD space may suppress more false alarms due to multipath (Trinh et al., 25 Jan 2026).
  • Temporal Integration: Majority-vote or rate-based fusion across multiple seconds (e.g., $>80\%$ confirmed hits over 20 s) reduces frame-level misses in quasi-static situations (Trinh et al., 25 Jan 2026).
  • Sensor Fusion: Combining radar micro-Doppler occupancy with vital sign detection (respiration, heartbeat) is expected to further improve detection specificity and reduce alarm fatigue (Trinh et al., 25 Jan 2026, Trinh et al., 25 Jan 2026).
  • Federated and Semi-supervised Learning: Local training of LSTM autoencoders on IMU or radar data, with aggregated global model weights, maintains privacy and enhances generalization to novel fall patterns (Azghadi et al., 14 Jul 2025).
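To make the rate-based fusion idea above concrete, a minimal sliding-window sketch follows; the function name, frame rate, and interface are assumed, while the >80% of hits over 20 s rule follows the text:

```python
import numpy as np

def rate_fusion(frame_hits, fps=10.0, window_s=20.0, rate=0.8):
    """Declare occupancy wherever more than `rate` of the frame-level
    detections inside a sliding `window_s`-second window are hits.

    frame_hits : 0/1 per-frame detections
    Returns a boolean array of length len(frame_hits) - window + 1.
    """
    frame_hits = np.asarray(frame_hits, dtype=float)
    w = int(window_s * fps)
    kernel = np.ones(w) / w                    # moving-average window
    frac = np.convolve(frame_hits, kernel, mode='valid')
    return frac > rate
```

Averaging over hundreds of frames trades a fixed confirmation latency (here 20 s) for a large reduction in the frame-level miss rate when the subject is quasi-static.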

7. Conclusion

Post-fall floor-occupancy detection is a multidisciplinary challenge critical to LTC, eldercare, and smart homes. Advances in adaptive radar signal processing (MVDR/Capon, RASSO), robust vision-based geometric and learning pipelines, and privacy-preserving multimodal fusion have yielded high-accuracy, real-time monitoring systems that address both practical and ethical deployment obstacles. Continued progress depends on integrated approaches that combine SNR-boosted radar, semantic visual interpretation, and context-aware temporal fusion, evaluated in realistic, cluttered settings with open benchmarks (Trinh et al., 25 Jan 2026, Trinh et al., 25 Jan 2026, Shen et al., 2024, Riahi et al., 17 May 2025, Azghadi et al., 14 Jul 2025, Antonello et al., 2017).
