
SSMRadNet : A Sample-wise State-Space Framework for Efficient and Ultra-Light Radar Segmentation and Object Detection

Published 11 Nov 2025 in eess.SP | (2511.08769v2)

Abstract: We introduce SSMRadNet, the first multi-scale State Space Model (SSM) based detector for Frequency Modulated Continuous Wave (FMCW) radar that sequentially processes raw ADC samples through two SSMs. One SSM learns a chirp-wise feature by sequentially processing samples from all receiver channels within one chirp, and a second SSM learns a representation of a frame by sequentially processing chirp-wise features. The latent representations of a radar frame are decoded to perform segmentation and detection tasks. Comprehensive evaluations on the RADIal dataset show SSMRadNet has 10-33x fewer parameters and 60-88x less computation (GFLOPs) while being 3.7x faster than state-of-the-art transformer and convolution-based radar detectors at competitive performance for segmentation tasks.

Summary

  • The paper introduces a novel multi-scale state-space framework that significantly reduces computational demands for radar segmentation and object detection.
  • It processes ADC samples sequentially through sample-wise and chirp-wise SSMs, capturing both intra- and inter-chirp dynamics efficiently.
  • Experimental results on RADIal and RaDICaL datasets show competitive accuracy, with over 60x computational savings and impressive segmentation scores.

Overview of SSMRadNet: A Sample-wise State-Space Framework for Radar Processing

The paper "SSMRadNet: A Sample-wise State-Space Framework for Efficient and Ultra-Light Radar Segmentation and Object Detection" (2511.08769) introduces a novel approach for processing Frequency Modulated Continuous Wave (FMCW) radar data using a multi-scale State Space Model (SSM) framework. This architecture, designed specifically for radar segmentation and object detection, significantly reduces computational demands while maintaining competitive performance metrics. The authors present SSMRadNet as a highly efficient framework that sequentially processes raw Analog-to-Digital Converter (ADC) samples through two SSMs to generate meaningful representations for segmentation and detection tasks.

Technical Approach

Motivation and Background

Traditional radar perception pipelines first build 3D Range-Azimuth-Doppler (RAD) tensors from raw samples via multiple stages of Fast Fourier Transforms (FFTs), then process the resulting dense tensors or point clouds. This method, while effective, introduces significant computational overhead. In contrast, recent advances have explored learning-based approaches that operate directly on ADC cubes, including convolutional, recurrent, and attention-based networks. However, these methods often suffer from increased complexity and computational cost.
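To ground the conventional baseline the paper departs from, the classic two-stage FFT processing of an FMCW frame can be sketched as below. This is the standard textbook pipeline, not code from the paper; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chirps, n_samples = 64, 128  # illustrative frame geometry

# Synthetic complex ADC cube for one receiver channel: (chirps, samples per chirp)
adc = rng.standard_normal((n_chirps, n_samples)) \
    + 1j * rng.standard_normal((n_chirps, n_samples))

# Stage 1: FFT along fast time (samples within a chirp) -> range bins
range_fft = np.fft.fft(adc, axis=1)

# Stage 2: FFT along slow time (across chirps) -> Doppler bins, centered at zero velocity
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
print(rd_map.shape)  # (64, 128) range-Doppler map
```

A third FFT across receiver channels would add the azimuth axis, producing the full RAD tensor that downstream detectors typically consume.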

SSMRadNet addresses these challenges by leveraging the state-space modeling paradigm, which allows for efficient processing of radar data. The proposed framework treats radar data as sequences of tokens, enabling the architecture to capture long-range dependencies through SSMs with cost that grows only linearly in sequence length.
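The core primitive is a discrete linear state-space recurrence: one state update per input token, so the cost is linear in sequence length. A minimal sketch (plain NumPy, with generic matrices standing in for the paper's learned, selectively parameterized SSM layers):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Minimal discrete linear SSM.

    x[t] = A @ x[t-1] + B @ u[t]
    y[t] = C @ x[t]

    u: (T, d_in) input token sequence; A: (d_state, d_state);
    B: (d_state, d_in); C: (d_out, d_state).
    One matrix-vector update per token -> O(T) in sequence length.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B @ u_t   # fold the new token into the hidden state
        ys.append(C @ x)      # read out an output for this step
    return np.stack(ys)
```

Modern SSM layers (e.g. Mamba-style blocks) make A, B, and C input-dependent and compute the scan in parallel, but the per-token recurrence above is the underlying model.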

Architecture Design

The architecture of SSMRadNet consists of several key components:

  1. Sample-wise SSM: This component captures intra-chirp correlations by sequentially processing samples from radar receiver channels. Each chirp is processed to extract a feature vector representing range information.
  2. Chirp-wise SSM: Following intra-chirp processing, chirp-wise features are sequentially processed to capture inter-chirp dynamics such as motion and velocity, generating a comprehensive representation of the radar frame.
  3. Decoder: The latent representations obtained from the SSMs are decoded to produce bird's-eye-view (BEV) occupancy maps for segmentation and detection tasks. The decoder incorporates spatial projection layers and convolutional blocks to refine the output maps (Figure 1).

    Figure 1: SSMRadNet Architecture: Raw complex ADC samples from the N_RX receiver channels feed into sample-wise SSM blocks.
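The two-stage hierarchy described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the dimensions, the fixed diagonal state matrices, and the choice of using the final SSM state as the per-chirp feature are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssm_last_state(seq, A, B):
    """Run a linear SSM over seq of shape (T, d_in); return the final state."""
    x = np.zeros(A.shape[0])
    for u_t in seq:
        x = A @ x + B @ u_t
    return x

# Hypothetical dimensions, much smaller than a real radar frame.
n_chirps, n_samples, n_rx = 4, 16, 8   # chirps/frame, samples/chirp, RX channels
d1, d2 = 32, 64                        # sample-SSM / chirp-SSM state sizes

A1, B1 = 0.9 * np.eye(d1), 0.1 * rng.standard_normal((d1, n_rx))
A2, B2 = 0.9 * np.eye(d2), 0.1 * rng.standard_normal((d2, d1))

# Raw ADC cube (real part only, for simplicity): (chirps, samples, RX channels)
frame = rng.standard_normal((n_chirps, n_samples, n_rx))

# Stage 1: sample-wise SSM runs over the samples of each chirp,
# consuming all RX channels per step -> one feature vector per chirp.
chirp_feats = np.stack([ssm_last_state(chirp, A1, B1) for chirp in frame])

# Stage 2: chirp-wise SSM runs over the chirp features,
# capturing inter-chirp (Doppler/motion) structure -> frame representation.
frame_repr = ssm_last_state(chirp_feats, A2, B2)
print(chirp_feats.shape, frame_repr.shape)  # (4, 32) (64,)
```

In the full model, the frame representation would then be projected and upsampled by the convolutional decoder into BEV segmentation and detection maps.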

Experimental Results

SSMRadNet's efficacy is demonstrated on two radar datasets: RADIal and RaDICaL. The results highlight the framework's ability to achieve significant reductions in parameter count and computational demands while maintaining competitive accuracy.

  • Performance on RADIal: The model reduces computation by more than 60× relative to state-of-the-art (SOTA) transformer- and convolution-based detectors while remaining competitive in accuracy, attaining 0.79 mean Intersection over Union (mIoU) for segmentation.
  • Performance on RaDICaL: The model achieves a Dice score of 0.996, matching the top-performing models while using significantly fewer resources (Figure 2).

    Figure 2: Ground truth vs. predicted drivable-space maps, showing high-IoU examples and challenging cases.

Implications and Future Work

The introduction of SSMs for radar processing opens a new avenue for efficient and scalable multi-task radar perception. The linear computation scaling with sequence length makes this approach particularly suitable for advanced radar systems with increased resolution and complexity.
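The scaling argument can be made concrete with a back-of-the-envelope operation count. The numbers below are illustrative, not the paper's measured GFLOPs: self-attention over a length-L sequence costs on the order of L² · d multiply-adds, while an SSM scan costs on the order of L · d_state².

```python
# Rough per-layer operation counts (illustrative dimensions).
d, d_state = 64, 16  # attention feature width vs. SSM state size

for L in (256, 1024, 4096):
    attn_ops = L * L * d            # pairwise attention scores: O(L^2 * d)
    ssm_ops = L * d_state * d_state # one state update per token: O(L * d_state^2)
    print(L, attn_ops / ssm_ops)    # advantage grows linearly with L
```

As radar resolution grows (more samples per chirp, more chirps per frame), the sequence length L grows with it, so the gap between the two costs widens, which is the basis of the scalability claim.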

Future research could involve integrating SSMRadNet with multi-modal data sources, such as cameras and LiDAR, to enhance perception capabilities. Additionally, exploring robustness improvements under adverse weather conditions and incorporating motion-aware tracking could further extend the framework's applicability in autonomous systems.

Conclusion

SSMRadNet is a compelling advancement in radar processing, offering an efficient alternative to conventional radar perception models. By utilizing a sample-wise state-space approach, the framework achieves substantial computational savings while maintaining high accuracy in segmentation and detection tasks across multiple datasets. This work sets a benchmark for developing lightweight, radar-specific neural architectures for real-time applications in autonomous driving and beyond.
