
DenseNet121-ViT Model

Updated 28 December 2025
  • The paper introduces a hybrid 3D DenseNet121-ViT architecture that integrates fine-grained CNN features with global self-attention for automated PAS detection.
  • It employs a DenseNet121 backbone for detailed texture analysis alongside a Vision Transformer branch to capture long-range contextual information from MRI volumes.
  • Comparative evaluation demonstrates improved accuracy and AUC over baseline models, highlighting its clinical potential and adaptability to other volumetric imaging challenges.

The DenseNet121-ViT model is a hybrid 3D deep learning architecture integrating a 3D DenseNet121 convolutional neural network (CNN) backbone with a 3D Vision Transformer (ViT) for automated detection of Placenta Accreta Spectrum (PAS) from volumetric MRI data. This model is designed to capture both fine-grained local features and global contextual information within high-dimensional medical images. The methodology, training regimen, and comparative evaluation of this model appear in "Placenta Accreta Spectrum Detection Using an MRI-based Hybrid CNN-Transformer Model" (Ali et al., 21 Dec 2025).

1. Hybrid Network Architecture

The DenseNet121-ViT model exploits architectural complementarity through parallel pipelines:

A. 3D DenseNet121 Backbone

  • Input: Single-channel T2-weighted MRI volumes, standardized to $128\times128\times64$ voxels.
  • Initial layers: $7\times7\times7$ convolution (64 filters, stride 2) followed by $3\times3\times3$ max pooling (stride 2), reducing spatial dimensions stepwise.
  • Dense blocks: Four blocks with layer counts $[6, 12, 24, 16]$ and growth rate $k=32$. Each layer integrates feature reuse via

$$x_\ell = H_\ell([x_0, x_1, \dots, x_{\ell-1}]), \qquad \ell = 1, \dots, L,$$

where $H_\ell(\cdot)$ denotes the composite BN→ReLU→Conv($1\times1\times1$)→BN→ReLU→Conv($3\times3\times3$).

  • Transition layers: $1\times1\times1$ convolution with compression factor $\theta=0.5$, followed by $2\times2\times2$ average pooling.
  • Spatial/channel progression:
    • Block 1: $256$ channels, $32\times32\times16$
    • Block 2: $640$ channels, $16\times16\times8$
    • Block 3: $1408$ channels, $8\times8\times4$
    • Block 4: $1920$ channels, $8\times8\times4$
  • Global average pooling, then a fully connected layer projects to a 128-dimensional feature embedding.
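The listed channel counts follow from the dense-connectivity rule: each layer appends $k=32$ feature maps, so a block with $n$ layers adds $n \cdot 32$ channels. Note that the figures above accumulate across blocks without the transition compression applied to the running count, which appears to be how the paper reports them. A minimal bookkeeping check:

```python
# Reproduce the per-block channel counts listed above.
# Each dense layer adds k (growth rate) channels, so a block with
# n layers adds n * k channels to its running total.
GROWTH_RATE = 32
BLOCK_LAYERS = [6, 12, 24, 16]  # DenseNet121 layer counts

def dense_channel_progression(stem_channels=64, k=GROWTH_RATE):
    channels = stem_channels  # channels after the initial conv stem
    out = []
    for n in BLOCK_LAYERS:
        channels += n * k     # dense concatenation grows the channel count
        out.append(channels)
    return out

print(dense_channel_progression())  # [256, 640, 1408, 1920]
```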

B. 3D Vision Transformer Branch

  • Patch extraction: Non-overlapping $16\times16\times16$ voxel cubes ($N=256$ patches).
  • Patch flattening: Each patch ($P^3 = 4096$ elements) is linearly embedded to $d_\text{model}=768$.
  • Special token: A trainable $[\mathrm{CLS}]$ token is prepended, and a learnable positional embedding $E_\text{pos}\in\mathbb{R}^{257\times 768}$ is added.
  • Transformer encoder: 12 layers, each with 12 attention heads ($d_k=d_v=64$), an MLP with $3072$ hidden units, and a post-layernorm configuration.
  • Final output: The $[\mathrm{CLS}]$ token is extracted as a $768$-D embedding.
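The quoted patch count follows from tiling the $128\times128\times64$ volume with $16^3$ cubes; a quick arithmetic check:

```python
# Patch-tokenization arithmetic for the 3D ViT branch.
VOLUME = (128, 128, 64)  # input voxels (H, W, D)
PATCH = 16               # cubic patch edge length
D_MODEL = 768            # embedding dimension per token

n_patches = 1
for dim in VOLUME:
    assert dim % PATCH == 0, "volume must tile exactly into patches"
    n_patches *= dim // PATCH        # 8 * 8 * 4 = 256 patches

patch_elems = PATCH ** 3             # 4096 voxels flattened per patch
seq_len = n_patches + 1              # +1 for the [CLS] token -> 257

print(n_patches, patch_elems, seq_len)  # 256 4096 257
```

The sequence length of 257 matches the shape of the positional embedding $E_\text{pos}\in\mathbb{R}^{257\times 768}$ above.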

Self-attention is computed as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V,$$

and

$$Z_0 = [x_{\text{CLS}}, x_1, \dots, x_N] + E_{\text{pos}}.$$
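The attention formula can be sanity-checked with a minimal single-head NumPy implementation (illustrative only; the model itself uses 12-head attention in PyTorch):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((5, 64)) for _ in range(3))  # d_k = 64, as above
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 64)
```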

C. Fusion and Classification Head

  • Concatenation: The DenseNet121 (128-D) and ViT (768-D) embeddings are concatenated into an 896-D vector.
  • MLP head: FC($896 \rightarrow 256$) → ReLU → Dropout(0.5) → FC($256 \rightarrow 2$) → Softmax, yielding binary classification probabilities.
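The fused dimension and the head's parameter count follow directly from the layer shapes just listed; a quick bookkeeping sketch (pure Python, helper name illustrative):

```python
# Dimension check for the fusion head:
# concat(128-D CNN, 768-D ViT) -> FC(896->256) -> ReLU -> Dropout -> FC(256->2).
CNN_DIM, VIT_DIM = 128, 768
FUSED = CNN_DIM + VIT_DIM  # 896-D fused vector

def fc_params(n_in, n_out):
    """Weight matrix plus bias vector of a fully connected layer."""
    return n_in * n_out + n_out

head_params = fc_params(FUSED, 256) + fc_params(256, 2)
print(FUSED, head_params)  # 896 230146
```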

The model is implemented in PyTorch and modular code is provided for reproducibility (Ali et al., 21 Dec 2025).

2. Data Preparation and Training Protocol

  • Preprocessing: DICOM images are converted to NIfTI, reoriented to (H, W, D), resized to $128\times128\times64$ via cubic interpolation and zero-padding, and intensities are min-max normalized to $[0,1]$ per scan.
  • Dataset Split: Stratified, patient-disjoint splits yield 793 training, 113 validation, and 227 test cases.
  • Class Balance: The PAS minority class (196 cases) is oversampled to 597 by data augmentation: random flips, $90^\circ$/$180^\circ$/$270^\circ$ rotations, and zoom ($1.1$–$1.3\times$).
  • Optimization: Adam optimizer, learning rate $10^{-4}$, cross-entropy loss, ReduceLROnPlateau scheduler, batch size 8, up to 100 epochs. Dropout is 0.5 for this model (other models tuned over $0.1$–$0.5$).
  • Frameworks: PyTorch, MONAI.
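The per-scan min-max normalization step can be sketched as follows (a minimal NumPy version; the function name and the constant-volume guard are illustrative, not from the paper):

```python
import numpy as np

def minmax_normalize(volume: np.ndarray) -> np.ndarray:
    """Per-scan min-max normalization of intensities to [0, 1]."""
    vmin, vmax = volume.min(), volume.max()
    if vmax == vmin:  # guard against a constant volume (division by zero)
        return np.zeros_like(volume, dtype=np.float32)
    return ((volume - vmin) / (vmax - vmin)).astype(np.float32)

# Example on a synthetic scan at the standardized 128x128x64 resolution.
scan = np.random.default_rng(1).integers(0, 4096, size=(128, 128, 64))
norm = minmax_normalize(scan)
print(norm.min(), norm.max())  # 0.0 1.0
```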

3. Comparative Evaluation and Ablation

Performance metrics (five-run averages):

  • Test accuracy: $84.3\% \pm 1.3$; best: $85.0\%$
  • AUC: $0.842 \pm 0.012$; best: $0.862$
  • Precision: $0.790 \pm 0.013$
  • Recall (sensitivity): $0.842 \pm 0.013$
  • F1-score: $0.808 \pm 0.014$
  • Peak training accuracy: $98.6\%$; validation: $91.2\%$

Test confusion matrix (best run):

  • Normal: 144/171 classified correctly
  • PAS: 49/56 classified correctly
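The best-run accuracy of 85.0% can be recovered directly from this confusion matrix, along with per-class rates (a small illustrative computation):

```python
# Metrics from the best-run test confusion matrix above
# (Normal: 144/171 correct, PAS: 49/56 correct; PAS is the positive class).
tn, fp = 144, 171 - 144  # Normal cases: true negatives / false positives
tp, fn = 49, 56 - 49     # PAS cases: true positives / false negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)  # 193 / 227
sensitivity = tp / (tp + fn)                # recall on PAS
specificity = tn / (tn + fp)                # correct-rejection rate on Normal

print(round(accuracy, 3), round(sensitivity, 3), round(specificity, 3))
# 0.85 0.875 0.842
```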

Baseline accuracy comparisons:

Architecture           Accuracy (%)
DenseNet121-ViT        84.3 ± 1.3
3D DenseNet121         79.5 ± 2.0
3D ResNet18            79.3 ± 1.3
3D ResNet18–Swin       70.0 ± 1.9
3D Swin-Transformer    69.0 ± 2.8
3D EfficientNet-B0     62.8 ± 1.7

Statistical significance testing (ANOVA with post-hoc comparisons under FDR control) confirmed DenseNet121-ViT's superiority ($p<0.05$) over all baselines.

4. Architectural Significance and Ablation Insights

DenseNet121-ViT leverages the strengths of both dense convolutional feature reuse and global self-attention. DenseNet121’s dense connectivity facilitates fine-grained texture analysis, capturing features such as T2-dark intraplacental bands, while ViT models long-range dependencies, identifying features like myometrial border continuity. This dual approach parallels expert radiologist reasoning: focusing on both local image cues and global anatomical context.

Ablation outcomes underline that both sub-networks are essential. Omitting the ViT branch (using only DenseNet121) results in an absolute accuracy drop of approximately 5%. Conversely, eliminating the DenseNet backbone (ViT or Swin only) decreases accuracy by about 15%, emphasizing that in volumetric medical imaging, local convolutional features are indispensable. Empirically, a naive ResNet18–Swin pairing underperforms ResNet18 alone, demonstrating that fusion strategy and architectural capacity alignment are critical.

5. Applications and Clinical Implications

This hybrid 3D CNN–ViT design is optimized for volumetric imaging tasks necessitating simultaneous extraction of lesion-local and anatomical-global patterns. A plausible implication is that this paradigm transfers to domains such as brain tumor grading, Alzheimer’s classification, and lung nodule detection, where similar dual-scale representations are crucial. Importantly, the end-to-end volumetric nature dispenses with manual segmentation, potentially streamlining radiological workflows. Consistent cross-run performance (low standard deviation) supports integration as a decision-support tool within PACS/RIS environments.

For clinical adoption, ongoing research should address generalizability across institutions and datasets and advance interpretability (e.g., attention map overlays for clinician validation).

6. Future Directions

The fusion module may be extended towards volumetric segmentation by replacing the current MLP with a transformer-based decoder, or adapted for multi-modal imaging (e.g., DWI, T1WI in addition to T2). Further, explainability enhancements are necessary to ensure clinician trust and regulatory compliance. Multi-center validation remains pivotal for robust translation.

7. Model Reproducibility

Detailed PyTorch-style pseudo-code for all major modules—including DenseNet3D121, ViT3D, and the fusion classifier—enables implementation and adaptation. All workflow stages, preprocessing steps, hyperparameter schedules, and data augmentation strategies are exhaustively specified, facilitating fair reproduction and informed modification for other 3D medical classification challenges (Ali et al., 21 Dec 2025).
