
FireNet U-Net: Efficient Fire Segmentation

Updated 24 January 2026
  • The paper introduces FireNet U-Net, a compact neural architecture that combines Self-ONN layers with a U-Net topology to enhance active fire segmentation.
  • It achieves over 8× reduction in FLOPs and parameter count, enabling real-time and onboard processing of Landsat-8 multispectral images.
  • The model demonstrates superior performance with a 90.2% F1-score, proving its effectiveness in detecting active fires even under challenging conditions.

FireNet U-Net is a neural network architecture for early detection and segmentation of active fires in satellite remote sensing imagery. It integrates Self-Organized Operational Neural Network (Self-ONN) layers into a compact U-Net encoder–decoder topology, achieving state-of-the-art segmentation accuracy on active-fire masks with significantly reduced parameter count and computational complexity compared to standard convolutional U-Net models (Devecioglu et al., 2023). FireNet U-Net is tailored for rapid, real-time, and potentially on-board satellite processing of multispectral imagery, particularly from sources such as Landsat-8.

1. Architectural Overview

FireNet U-Net adheres to the classical U-Net paradigm comprising a symmetric encoder and decoder with lateral skip connections for high-fidelity image segmentation. The primary innovation lies in the replacement of conventional convolutional layers with Self-ONN layers for both the encoder and decoder pathways.

  • Encoder: Five sequential encoding blocks, each performing spatial downsampling via Self-ONN layers (kernel size 5×5, stride 2, tanh activation), with doubling of the feature dimensionality at each stage. A nominal configuration, compatible with the reported 4.3M total parameters, progresses as follows: 3 input channels to 32, 64, 128, 256, and 512 channels, with an optional additional ONN layer at the bottleneck.
  • Decoder: Five decoding blocks mirror the encoder in reverse, each utilizing transposed Self-ONN layers (kernel size 5, stride 2, except for the final layer which uses kernel 6), concatenating the corresponding encoder feature map via skip connections, and applying tanh activation. The channel dimensions contract as 512 → 256 → 128 → 64 → 32 → 1.

Lateral skip connections facilitate the reintroduction of spatial detail lost during downsampling, critical for precise mask generation in segmentation tasks.
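The shape flow implied by this configuration can be traced with a short sketch. The function name and the exact stage widths are illustrative stand-ins consistent with the nominal configuration above, not values confirmed by the paper:

```python
def trace_shapes(h=256, w=256):
    """Trace (stage, channels, height, width) through the nominal FireNet U-Net.

    Assumes a 3-channel 256x256 input and the channel progression described
    in Section 1; these widths are illustrative, not taken from the paper.
    """
    enc_widths = (32, 64, 128, 256, 512)   # channel doubling per encoder stage
    dec_widths = (256, 128, 64, 32, 1)     # mirrored contraction to a 1-channel mask
    shapes = [("input", 3, h, w)]
    for c in enc_widths:
        h, w = h // 2, w // 2              # stride-2 Self-ONN block halves H and W
        shapes.append(("enc", c, h, w))
    for c in dec_widths:
        h, w = h * 2, w * 2                # stride-2 transposed block doubles H and W
        shapes.append(("dec", c, h, w))
    return shapes
```

Tracing a 256×256 input shows the encoder bottoming out at an 8×8×512 bottleneck and the decoder recovering a full-resolution single-channel fire mask, which is why the skip connections are needed to restore fine spatial detail.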

2. Self-Organized Operational Neural Network (Self-ONN) Layers

The Self-ONN layer generalizes the linear convolutional operator by employing a truncated Maclaurin (Taylor) expansion to model the nodal response. For a local patch y, the transformation is

\psi(y) = \sum_{n=0}^{Q} w_n \cdot y^n

where w_0, \ldots, w_Q are learnable “Maclaurin coefficients” and y^n is the elementwise power. In all reported experiments, Q = 3. This allows Self-ONN neurons to approximate a broader class of nonlinear functions than standard convolutions (which correspond to Q = 1 and fixed w_1). The additional expressivity is exploited with joint optimization of the coefficients and bias under backpropagation, while tanh activation contains the dynamic range of intermediate feature maps.
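A minimal NumPy sketch of this nodal transformation follows; the function name is illustrative, and a full Self-ONN layer would additionally sum such responses over each kernel window and across input channels:

```python
import numpy as np

def self_onn_response(y, w):
    """Nodal response psi(y) = sum_{n=0}^{Q} w[n] * y**n, applied elementwise.

    y : local input patch (scalar or array)
    w : learnable Maclaurin coefficients w_0 .. w_Q, so Q = len(w) - 1
    """
    return sum(w_n * y**n for n, w_n in enumerate(w))

# Q = 1 with w_0 = 0 and a fixed w_1 = 1 recovers the ordinary linear response
y = np.array([0.5, -0.2, 0.1])
assert np.allclose(self_onn_response(y, [0.0, 1.0]), y)
```

With Q = 3 each kernel element carries four coefficients instead of one weight, which is where the extra per-neuron expressivity (and the modest parameter overhead) comes from.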

3. Computational Efficiency

FireNet U-Net is designed for computational tractability, especially in resource-constrained environments such as on-board satellites. Direct comparison with standard U-Net variants demonstrates a substantial reduction in both parameter count and floating-point operation requirements:

| Method | Input Channels | Parameters (M) | FLOPs (G, per 256×256 image) |
|---|---|---|---|
| U-Net [9] | 10 | 34.5 | ≈ 75 |
| U-Net [9] | 3 | 34.5 | ≈ 75 |
| Operational U-Net (FireNet U-Net) | 3 | 4.3 | ≈ 9 |

The FLOPs metric is computed as

\text{FLOPs} = \sum_l 2 \cdot (K_l^2 \cdot C^{\text{in}}_l \cdot C^{\text{out}}_l) \cdot H_l \cdot W_l

where K_l is the kernel size, C^{\text{in}}_l and C^{\text{out}}_l are the input/output channel counts, and H_l, W_l are the spatial dimensions at layer l. This architecture achieves a more than 8× FLOPs reduction compared to the classical U-Net.
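The formula can be applied directly to the nominal encoder configuration from Section 1. Note that the reported ≈9 GFLOPs covers the full network including the decoder, so this sketch (with illustrative stage widths) accounts for only the encoder's share:

```python
def layer_flops(k, c_in, c_out, h, w):
    """FLOPs for one conv-style layer: 2 * K^2 * C_in * C_out * H * W."""
    return 2 * k**2 * c_in * c_out * h * w

def encoder_flops(h=256, w=256, k=5, c_in=3, widths=(32, 64, 128, 256, 512)):
    """Sum layer FLOPs over the five stride-2 encoder blocks (widths assumed)."""
    total = 0
    for c_out in widths:
        h, w = h // 2, w // 2          # H, W at the layer's output after stride 2
        total += layer_flops(k, c_in, c_out, h, w)
        c_in = c_out
    return total
```

Under these assumptions the encoder comes to roughly 1.8 GFLOPs per 256×256 image, comfortably within the ≈9 GFLOPs total reported for the whole network.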

4. Experimental Protocol

Dataset and Preprocessing

The operational context targets the Landsat-8 active fire dataset [22]. Each multispectral scene offers 10 bands; FireNet U-Net utilizes bands 7 (Short-Wave IR 2), 6 (Short-Wave IR 1), and 2 (Blue) as a 3-channel analog to RGB input. Each 256×256 patch is center-cropped or padded. Intensity normalization is defined as

X_N(i, j) = 2 \cdot \frac{X(i, j) - X_{\min}}{X_{\max} - X_{\min}} - 1

Splits are performed at 40%/10%/50% for training/validation/testing by image count, without supplementary data augmentation.
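The normalization above maps raw band intensities into [-1, 1], matching the tanh dynamic range of the network. A minimal sketch (assuming per-patch min/max statistics, which the source does not specify):

```python
import numpy as np

def normalize_band(x):
    """Map intensities into [-1, 1]: X_N = 2 * (X - min) / (max - min) - 1."""
    x = np.asarray(x, dtype=np.float64)
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
```

Whether min/max are taken per patch, per band, or over the whole dataset changes the result; the per-patch choice here is only one plausible reading.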

Training Details

Training is undertaken with batch size 8 and up to 1000 epochs, with early stopping based on validation performance. The Adam optimizer [26] is employed with an initial learning rate of 10^{-5}. While the explicit loss is not specified, a weighted binary cross-entropy L_{BCE} combined with a soft-IoU term L_{IoU} is standard for this task:

L = \alpha \cdot L_{BCE}(Y, \hat{Y}) + \beta \cdot (1 - \text{IoU}(Y, \hat{Y}))

with \alpha = 1 and \beta = 1 as typical weights.
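Assuming this standard formulation (which, as noted, the source does not confirm as the exact loss used), a minimal NumPy sketch is:

```python
import numpy as np

def combined_loss(y, y_hat, alpha=1.0, beta=1.0, eps=1e-7):
    """alpha * BCE + beta * (1 - soft IoU) over a predicted probability mask.

    y     : ground-truth binary mask
    y_hat : predicted fire probabilities in (0, 1)
    """
    y = np.asarray(y, dtype=np.float64)
    y_hat = np.clip(np.asarray(y_hat, dtype=np.float64), eps, 1 - eps)
    bce = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    inter = np.sum(y * y_hat)                       # soft intersection
    union = np.sum(y) + np.sum(y_hat) - inter       # soft union
    return alpha * bce + beta * (1 - inter / (union + eps))
```

The soft-IoU term directly counteracts the extreme class imbalance of active-fire masks, where fire pixels are a tiny fraction of each patch, while the BCE term keeps per-pixel gradients well behaved.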

5. Quantitative and Qualitative Results

FireNet U-Net delivers competitive performance on the Landsat-8 active fire test set, surpassing both standard U-Net variants and a range of transfer learning (TL) convolutional architectures. Key metrics are summarized as follows:

| Method | Channels | Precision | Recall | IoU | F1-score | Params (M) |
|---|---|---|---|---|---|---|
| U-Net [9] | 10 | 84.6% | 94.1% | 80.3% | 89.1% | 34.5 |
| U-Net [9] | 3 | 84.2% | 90.6% | 77.4% | 87.3% | 34.5 |
| U-Net-Light [9] | 3 | 76.8% | 93.2% | 72.7% | 84.2% | 2.2 |
| DenseNet121 (TL) | 3 | 86.6% | 68.1% | 61.6% | 76.2% | 12.0 |
| ResNet50 (TL) | 3 | 84.2% | 69.2% | 61.3% | 76.0% | 32.5 |
| Inception-v3 (TL) | 3 | 84.2% | 68.7% | 60.9% | 75.7% | 29.8 |
| MobileNet-v2 (TL) | 3 | 82.0% | 73.6% | 63.4% | 77.6% | 8.0 |
| Op-UNet (FireNet U-Net) | 3 | 98.7% | 83.1% | 82.1% | 90.2% | 4.3 |

No formal statistical significance testing is reported, but the observed F1-score gap (more than 12 points over the transfer-learning baselines) is large relative to the run-to-run variability typically reported in comparable segmentation studies.
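The reported metrics can be cross-checked for internal consistency: F1 is fully determined by precision and recall, and for pixelwise counts IoU is as well (IoU = PR / (P + R − PR)). Applying this to the Op-UNet row:

```python
def f1_from_pr(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def iou_from_pr(precision, recall):
    """For pixelwise TP/FP/FN counts, IoU = PR / (P + R - PR)."""
    return (precision * recall) / (precision + recall - precision * recall)

# Reported Op-UNet numbers: P = 98.7%, R = 83.1%, F1 = 90.2%, IoU = 82.1%
print(f1_from_pr(0.987, 0.831))   # close to the reported 0.902
print(iou_from_pr(0.987, 0.831))  # close to the reported 0.821
```

Both derived values match the table to within rounding, which supports the internal consistency of the reported results.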

Qualitative analysis demonstrates high localization fidelity, with robust detection in complex contexts involving partial cloud cover and small-scale flames. Performance degrades primarily in scenarios of heavy smoke occlusion or late-stage burn scars with background-like spectral signatures.

6. Application Context and Prospective Directions

FireNet U-Net is engineered for operational deployment in early wildfire detection workflows leveraging multispectral satellite platforms. Its parameter and FLOPs efficiency position it well for onboard, low-power computation and rapid imagery turn-around. The integration of Self-ONN layers addresses the expressivity limitations of purely linear convolutional encoders while enabling significant model compression—crucial for applications where bandwidth or energy consumption is constrained.

Research directions proposed include the integration of subspace support vector machines (SVMs) for one-class classification in data-scarce regimes of active fire detection. This suggests a future emphasis on downstream robustness and adaptability to atypical environmental or atmospheric conditions.

7. Summary

FireNet U-Net constitutes a compact, operationally efficient neural architecture for active fire segmentation in remote sensing images. By employing Self-ONN layers parameterized via truncated Taylor expansion, it achieves enhanced nonlinearity and learns richer features than standard convolutions, while sharply reducing model size (4.3M parameters) and computational burden (>8× fewer FLOPs than traditional U-Nets). Extensive validation on Landsat-8 datasets establishes its superiority in both F1-score (90.2%) and suitability for real-time or on-board use, marking a substantive step in the direction of practical, scalable wildfire monitoring (Devecioglu et al., 2023).
