
HarDNet-MSEG: Efficient Biomedical Segmentation

Updated 28 December 2025
  • The paper presents a high-speed segmentation CNN, HarDNet-MSEG, achieving over 0.9 mean Dice on Kvasir-SEG at more than 80 FPS on modern GPUs.
  • Its encoder employs a low-memory HarDNet68 with selective, sparsified connectivity, while its cascaded decoder with RFB modules enhances segmentation accuracy efficiently.
  • HarDNet-MSEG has been adapted for diabetic foot ulcer segmentation using a Lawin Transformer decoder, establishing a robust performance/cost trade-off in biomedical imaging.

HarDNet-MSEG is a convolutional neural network architecture developed for high-accuracy, real-time biomedical image segmentation, targeting tasks such as polyp detection in colonoscopy images. Its design is characterized by an encoder-decoder layout, employing a HarDNet68 backbone to minimize memory traffic and maximize speed, and a lightweight, cascade-style decoder optimized for both efficiency and segmentation accuracy. HarDNet-MSEG achieves state-of-the-art performance across multiple medical image benchmarks, notably exceeding 0.9 mean Dice on Kvasir-SEG at over 80 FPS on contemporary GPUs, and has also been adapted for related challenges such as diabetic foot ulcer segmentation (Huang et al., 2021, Kendrick et al., 2023).

1. Encoder Architecture: HarDNet68 Backbone

The encoder in HarDNet-MSEG is based on HarDNet68, a "low-memory-traffic" convolutional network derived from DenseNet's densely connected design. HarDNet introduces HarD blocks, a sparsified connectivity pattern in which each convolutional layer connects only to a subset of previous layers. Specifically, for a block with $L$ layers, layer $k$ receives input from the block input $X_0$ and from layers whose indices differ from $k$ by powers of two (i.e., $X_{k-1}$, $X_{k-2}$, $X_{k-4}$, $X_{k-8}$, etc.):

$$X_0 \oplus \{X_{k-1}, X_{k-2}, X_{k-4}, \dots\}$$

Each such concatenated input undergoes a $1 \times 1$ convolution to a fixed growth width $g$, followed by a $3 \times 3$ convolution (stride 1, padding 1). This structure reduces memory traffic relative to DenseNet's full dense connectivity, while selectively widening key layers to maintain accuracy (Huang et al., 2021).
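As an illustrative sketch of the connectivity rule described above (not the reference implementation), the set of inputs for a given layer can be enumerated in plain Python; `hard_block_inputs` is a hypothetical helper name:

```python
def hard_block_inputs(k):
    """Return sorted input indices for layer k of a HarD block.

    Follows the sparsified pattern described above: layer k reads the
    block input X_0 plus every layer X_{k - 2^i} whose index stays
    positive, i.e. indices differing from k by a power of two.
    """
    inputs = {0}  # the block input X_0 is always part of the concatenation
    step = 1
    while k - step > 0:
        inputs.add(k - step)
        step *= 2
    return sorted(inputs)

# Layer 8 concatenates X_0 (since k - 8 = 0) with X_7, X_6, and X_4:
# hard_block_inputs(8) -> [0, 4, 6, 7]
```

Compared with DenseNet, where layer $k$ would read all of $X_0, \dots, X_{k-1}$, this keeps the number of concatenated inputs logarithmic in $k$, which is the source of the reduced memory traffic.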

The encoder follows a four-stage structure:

  • Conv-stem: $3 \times 3$ convolutions with stride 2 and BN/ReLU activations, producing 32–64 channels at 1/4 input resolution.
  • Stages 1–4: Each stage consists of a HarD block and a transition down ($1 \times 1$ convolution + stride 2), reducing spatial resolution successively to 1/32.
  • Typical HarDNet68 parameters on ImageNet are $\{4, 8, 16, 16\}$ layers per stage, with channel counts increasing from approximately 64 to 1024 across stages.

2. Decoder Architecture: Cascaded Partial Decoder with RFB Modules

The decoder leverages a cascaded partial design, inspired by CPD [Wu et al., CVPR 2019], processing only the deepest three encoder feature maps (from stages 2–4). Shallow, high-resolution features are omitted to reduce computation.

The deep encoder outputs are upsampled and fused stage-wise:

  • The deepest feature map ($F_4$ at 1/32 resolution) is upsampled and element-wise multiplied with the RFB-processed $F_3$ (1/16 resolution) to yield $D_3$.
  • $D_3$ is similarly upsampled and fused with the RFB-processed $F_2$ (1/8 resolution), yielding $D_2$.
  • A final upsampling and $1 \times 1$ convolution produce the full-resolution segmentation mask.

Receptive Field Blocks (RFBs) enlarge the effective receptive field by applying parallel $3 \times 3$ convolutions with multiple dilation rates (1, 3, 5) alongside a $1 \times 1$ branch, concatenating the outputs, and compressing them with a $1 \times 1$ convolution.
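The growth in receptive field from the dilated branches follows from simple arithmetic: a $k \times k$ convolution with dilation $d$ spans $d(k-1)+1$ pixels per side. A quick check (illustrative arithmetic, not from the paper's code):

```python
def dilated_kernel_span(kernel=3, dilation=1):
    """Side length of the window covered by a dilated convolution kernel."""
    return dilation * (kernel - 1) + 1

# The three RFB branches with dilation rates 1, 3, 5 cover
# 3x3, 7x7, and 11x11 windows respectively.
spans = [dilated_kernel_span(3, d) for d in (1, 3, 5)]
```

Concatenating branches with these spans lets a single block mix context at several scales for roughly the cost of three ordinary $3 \times 3$ convolutions.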

Skip connections in this design exclude the shallowest features and introduce only those from mid-to-deep encoder stages via the RFB and element-wise product (Huang et al., 2021).
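The stage-wise fusion above can be sketched as resolution bookkeeping (spatial sizes only; the real decoder applies RFBs and element-wise products on feature tensors, and `decoder_resolutions` is an illustrative name):

```python
def decoder_resolutions(input_size=512):
    """Track spatial sizes through the cascaded partial decoder.

    Encoder features from stages 2-4 arrive at 1/8, 1/16, and 1/32 of
    input resolution; each fusion step upsamples by 2x to match the
    next-shallower feature map.
    """
    f2, f3, f4 = input_size // 8, input_size // 16, input_size // 32
    d3 = f4 * 2          # F_4 upsampled to the resolution of F_3
    assert d3 == f3
    d2 = d3 * 2          # D_3 upsampled to the resolution of F_2
    assert d2 == f2
    return {"F4": f4, "D3": d3, "D2": d2}
```

For a $512 \times 512$ input, this yields feature maps of side 16, 32, and 64, after which the final upsampling restores full resolution.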

3. Training, Losses, and Evaluation Metrics

HarDNet-MSEG evaluation uses standard segmentation metrics:

  • Mean Dice (mDice):

$$\mathrm{mDice} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$$

  • Mean IoU (mIoU):

$$\mathrm{mIoU} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$$

  • Precision: $\mathrm{TP}/(\mathrm{TP}+\mathrm{FP})$
  • Recall: $\mathrm{TP}/(\mathrm{TP}+\mathrm{FN})$
  • $F_2$ score:

$$F_2 = \frac{5\,\mathrm{Precision} \times \mathrm{Recall}}{4\,\mathrm{Precision} + \mathrm{Recall}}$$

  • Accuracy: $(\mathrm{TP}+\mathrm{TN})/(\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN})$
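These definitions translate directly into code; a minimal, framework-free sketch computing all six metrics from pixel-level confusion counts:

```python
def seg_metrics(tp, fp, fn, tn):
    """Segmentation metrics from pixel-level confusion-matrix counts."""
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F2 weights recall higher than precision (beta = 2)
    f2 = 5 * precision * recall / (4 * precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"mDice": dice, "mIoU": iou, "precision": precision,
            "recall": recall, "F2": f2, "accuracy": accuracy}
```

Note that Dice always dominates IoU for the same prediction (for tp > 0 with any errors, $\mathrm{Dice} > \mathrm{IoU}$), which is why reported mDice figures are uniformly higher than mIoU in the benchmark tables.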

The paper does not specify an explicit loss function; weighted sums of pixel-wise cross-entropy and Dice loss are typical in the literature, but this is not confirmed for this network.

Two main training regimes are described:

  • Kvasir-SEG "Jha split":
    • Input: $512 \times 512$, SGD, learning rate $10^{-2}$, 100 epochs, random rotation/flip augmentations.
  • PraNet split:
    • Input: $312 \times 312$, Adam, learning rate $10^{-4}$, 100 epochs, no augmentations.

Batch size is not reported (Huang et al., 2021).
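The two reported regimes can be summarized as configuration dictionaries (values taken from the settings quoted above; batch size is deliberately omitted because it is unreported):

```python
# Hypothetical config layout; key names are illustrative, values are
# the hyperparameters reported for the two training regimes.
TRAIN_CONFIGS = {
    "kvasir_jha_split": {
        "input_size": (512, 512),
        "optimizer": "SGD",
        "lr": 1e-2,
        "epochs": 100,
        "augmentations": ["random_rotation", "random_flip"],
    },
    "pranet_split": {
        "input_size": (312, 312),
        "optimizer": "Adam",
        "lr": 1e-4,
        "epochs": 100,
        "augmentations": [],
    },
}
```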

4. Performance Benchmarks and Comparative Results

HarDNet-MSEG delivers state-of-the-art results across five polyp segmentation datasets, maintaining high throughput:

| Dataset | mDice | mIoU | Inference speed (FPS) |
|---|---|---|---|
| Kvasir-SEG (512×512) | 0.904 | 0.848 | 86.7 |
| Kvasir-SEG (312×312) | 0.912 | 0.857 | 88 |
| CVC-ClinicDB | 0.932 | 0.882 | 88 |
| CVC-ColonDB | 0.731 | 0.660 | 88 |
| ETIS-Larib Polyp DB | 0.677 | 0.613 | 88 |
| EndoScene (CVC-T) | 0.887 | 0.821 | 88 |

Compared to U-Net[ResNet34] on Kvasir-SEG, HarDNet-MSEG is both more accurate (mDice 0.904 vs. 0.876) and faster (86.7 FPS vs. 35 FPS). Across benchmarks, it outperforms prior networks such as PraNet in both accuracy and speed (Huang et al., 2021).

5. Adaptations and Extensions: DFUC 2022 and Lawin Transformer Decoder

In the Diabetic Foot Ulcer Grand Challenge 2022, a top-performing submission employed a modified HarDNet-MSEG as its backbone. Key modifications included:

  • Decoder Replacement: The original CPD+RFB decoder was substituted with a Lawin Transformer block. This module aggregates multi-scale context using large-window cross-attention.
  • Skip-connection Rerouting: Instead of fixed skip connections at all encoder stages, a subset of mid-level encoder features was selected for decoder fusion, aiming to better capture ulcer scale.
  • Input/Output Channel Balancing: Channel widths were rebalanced at encoder input/output to yield more symmetric feature maps.

The result was a network that achieved a Dice score of 0.7287 and Jaccard of 0.6252 on the DFUC 2022 test set. Morphological hole-filling and removal of small connected components were applied as post-processing. Low-level training details, loss function, and augmentations were not disclosed in the summary (Kendrick et al., 2023).
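Removal of small connected components, one of the post-processing steps mentioned, can be sketched in pure Python with a BFS labelling pass (illustrative only; the challenge entry's actual implementation and size threshold are not disclosed):

```python
from collections import deque

def remove_small_components(mask, min_size):
    """Zero out 4-connected foreground components smaller than min_size.

    mask: list of lists of 0/1 values; returns a new mask, leaving the
    input untouched.
    """
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS to collect one connected component of foreground pixels
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_size:
                    for cy, cx in comp:
                        out[cy][cx] = 0
    return out
```

In practice such filtering suppresses isolated false-positive specks while leaving the main ulcer region intact; hole-filling would be a complementary pass on the background.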

6. Impact, Adoption, and Limitations

HarDNet-MSEG established a performance/cost trade-off benchmark in polyp segmentation, providing both high mDice and low-latency inference. Its encoder design offers significant memory traffic reduction compared to DenseNet, and its partial decoder further reduces computational overhead.

The architecture’s adaptability was demonstrated in DFU segmentation, where a Lawin Transformer decoder was successfully substituted while preserving HarDNet’s encoder advantages.

Some limitations remain, notably the lack of explicit segmentation loss specification and batch size in the original publication. Detailed architectural parameters (channel/growth rates, etc.) for some adaptation scenarios have not been disclosed, including those for challenge-winning variants. A plausible implication is that proprietary enhancements to decoder or training strategy can further extend the basic framework’s reach, but transparent ablation studies are needed to quantify each modification’s effect (Huang et al., 2021, Kendrick et al., 2023).
