
Dual-Domain Reconstruction Networks

Updated 28 January 2026
  • The paper introduces dual-domain reconstruction networks that iteratively update both k-space and image domains to enforce physical data consistency.
  • These architectures utilize cascaded and parallel designs with differentiable domain transforms to effectively suppress artifacts and preserve details.
  • Empirical results demonstrate superior PSNR and SSIM performance, with theoretical guarantees ensuring stability and robustness against perturbations.

Dual-domain reconstruction networks are a class of deep learning architectures expressly designed for solving ill-posed inverse problems in medical imaging and computed tomography. These networks perform iterative or parallelized processing in both the measurement domain (k-space for MRI; sinogram/projection for CT and SPECT) and the image domain, enforcing cross-domain consistency and leveraging distinct, domain-specific priors to enhance reconstruction fidelity. The dual-domain paradigm systematically addresses the limitations of pure image-domain or measurement-domain restorers—such as incapacity to precisely enforce physical data-consistency or to effectively model nonlocal image structure—by alternately or jointly updating both representations with learned mappings, iterative data-consistency projections, or unrolled optimization steps. Comprehensive studies have demonstrated that dual-domain strategies set the benchmark for artifact suppression, feature preservation, and robustness across MRI, CT, and SPECT, often outperforming single-domain or sequential dual-domain schemes.

1. Core Dual-Domain Architectures

Dual-domain networks predominantly take one of two architectural forms: alternating/cascaded and parallel (joint).

Cascaded/Alternating Dual-Domain Designs

Cascaded approaches, exemplified by the W-net and WW-net frameworks, construct a chain of domain-specific U-nets or other backbones, each operating either in k-space or image space. The sequence (e.g., I→K→I→K) is optimized for the target acquisition (e.g., single-coil vs multi-coil MRI). Each stage restores one domain’s representation, then projects or transforms to the other domain for further refinement; at each step, strict or soft data-consistency is imposed by reinserting measured samples into the current k-space estimate (Souza et al., 2019).
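The alternating I→K cascade with interleaved data consistency can be sketched as below. This is a minimal NumPy illustration, not the W-net/WW-net implementation: the `image_net`/`kspace_net` callables stand in for learned restorers, and the `lam` blending weight is an illustrative choice.

```python
import numpy as np

def fft2c(x):
    """Centered 2-D FFT: image -> k-space."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    """Centered 2-D inverse FFT: k-space -> image."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def data_consistency(k_pred, y, mask, lam=1.0):
    """Soft DC: blend predicted and measured k-space at sampled indices."""
    out = k_pred.copy()
    out[mask] = (lam * k_pred[mask] + y[mask]) / (lam + 1.0)
    return out

def cascade(y, mask, image_net, kspace_net, n_stages=2):
    """Alternating I->K cascade; DC is re-imposed after every k-space estimate."""
    x = ifft2c(y * mask)                          # zero-filled initial image
    for _ in range(n_stages):
        x = image_net(x)                          # image-domain restorer
        k = data_consistency(fft2c(x), y, mask)   # reinsert measured samples
        k = kspace_net(k)                         # k-space restorer
        k = data_consistency(k, y, mask)
        x = ifft2c(k)
    return x
```

With fully sampled data and identity restorers, the cascade is a fixed point that returns the ground-truth image, which is a quick sanity check on the transform/DC plumbing.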

Parallel/Joint Dual-Domain Designs

Parallel architectures, such as MD-Recon-Net and certain fusion approaches, comprise two branches (one acting on k-space, one on image data) processing in synchrony and performing inter-branch feature exchange after each domain-specific block. This allows simultaneous exploitation of domain-specific features with cross-fusion modules or learnable convex combinations of domain predictions. The parallel formulation is effective when domain transitions are computationally expensive or when subtle anatomical correlations exist, as in multi-contrast or multi-coil MRI (Ran et al., 2019).
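A schematic of the parallel form, assuming NumPy: two branches update in lockstep, and each block ends with a convex combination of the two domain views. The fixed `alpha` stands in for the learnable fusion weight, and the identity restorers are placeholders for trained networks.

```python
import numpy as np

def fft2c(x):
    """Centered 2-D FFT: image -> k-space."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    """Centered 2-D inverse FFT: k-space -> image."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def parallel_recon(y, image_net, kspace_net, alpha=0.5, n_blocks=2):
    """Two synchronized branches with cross-domain fusion after each block."""
    k = y.copy()
    x = ifft2c(y)
    for _ in range(n_blocks):
        x_b = image_net(x)                            # image-branch update
        k_b = kspace_net(k)                           # k-space-branch update
        # inter-branch exchange: convex combination of the two domain views
        x = alpha * x_b + (1.0 - alpha) * ifft2c(k_b)
        k = alpha * fft2c(x_b) + (1.0 - alpha) * k_b
    return x, k
```

Note that both branches stay mutually consistent by construction: the fused image and k-space estimates remain Fourier transforms of one another when the branch updates agree.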

Both designs can be further extended with deep unfolding schemes (e.g., DUN-CP-PPA, LAMA-Net), which explicitly unroll partial or full alternating minimization procedures for variational dual-domain objectives, endowing the reconstruction network with theoretical convergence guarantees (Zhang et al., 7 Jan 2025, Ding et al., 2023, Ding et al., 30 Jul 2025).

2. Data-Consistency Enforcement and Domain Transforms

An essential principle in dual-domain reconstruction is rigorous enforcement of measurement data-consistency at every iterative update. This is operationalized by replacing estimated measurements at acquired indices with the true observations after each k-space block:

K_{DC}(u) = \begin{cases} \dfrac{\lambda K_{pre}(u) + y(u)}{\lambda + 1}, & u \in \Omega \\ K_{pre}(u), & u \notin \Omega \end{cases}

where K_pre is the predicted k-space, y are the acquired observations, and Ω is the set of sampled indices (Chen et al., 2022).
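This rule translates directly into code; note that as λ → 0 the soft blend reduces to hard replacement of the sampled entries. A NumPy sketch (array and function names are illustrative):

```python
import numpy as np

def soft_dc(k_pred, y, mask, lam):
    """K_DC(u) = (lam*K_pre(u) + y(u)) / (lam + 1) on sampled u, else K_pre(u)."""
    out = k_pred.copy()
    out[mask] = (lam * k_pred[mask] + y[mask]) / (lam + 1.0)
    return out

def hard_dc(k_pred, y, mask):
    """lam -> 0 limit of the formula above: measurements overwrite predictions."""
    out = k_pred.copy()
    out[mask] = y[mask]
    return out
```

Unsampled locations always keep the network prediction, while λ controls how strictly the measured samples override it.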

Domain transforms—Fourier or inverse Fourier for MRI, Radon or filtered back projection for CT—are embedded as differentiable layers, enabling end-to-end backpropagation through the domain conversions. This is critical for unrolling and parallel designs, ensuring that feature gradients propagate unimpeded through the entire cascade.

Learned data-consistency modules and trainable reconstruction filters (e.g., in WNet, where the FBP filter is optimized along with the rest of the pipeline) have demonstrated significant improvements in image quality by adapting the physics layer to the particularities of the available measurements (Cheslerean-Boghiu et al., 2022).

3. Domain-specific Priors and Feature Extraction Modules

Dual-domain methods leverage domain-specific network backbones and priors to maximize the use of available information.

K-Space/Sinogram Domain

Effective k-space or sinogram domain models typically incorporate nonlocal operations (e.g., channel-wise self-attention as in K-GLIM (Gao et al., 2023)), large receptive-field CNNs, or residual dense architectures, as local convolution alone is inadequate for reconstructing missing spectral information at high undersampling rates. For example, cross-domain pooling/upsampling increases k-space neighborhood receptive field without spatial distortion (Liu et al., 2022).

Image Domain

Image-side models utilize U-Net variants, ViT/CNN hybrids, or dense residual blocks, often with parallel local detail enhancement modules (I-PLDE) that use depthwise convolutions to robustly restore texture and anatomical boundaries (Gao et al., 2023). Channel and spatial recalibration through Squeeze-and-Excitation units (SENet, Dual SENet) further sharpens feature relevance (Chen et al., 2022, Chen et al., 2023).
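The channel recalibration idea can be shown with a bare-bones squeeze-and-excitation sketch in NumPy; the weight shapes and reduction ratio here are illustrative, not taken from the cited architectures.

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """feat: (C, H, W). Squeeze to a channel descriptor, excite to per-channel gates."""
    z = feat.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)              # bottleneck FC + ReLU -> (C // r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # expansion FC + sigmoid -> (C,), in (0, 1)
    return feat * g[:, None, None]           # channel-wise recalibration
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps relative to the rest.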

Auxiliary Information

In multi-contrast MRI, structural priors from short-protocol images (e.g., T1-weighted reference) are integrated as inputs to both domain branches—either concatenated, fused with attention/gating mechanisms, or used to fill missing k-space lines and thus jump-start the image domain with more anatomically plausible initializations (Zhou et al., 2020, Yang et al., 2023, Gao et al., 2024).

4. Training Paradigms and Loss Functions

Dual-domain networks typically optimize compound loss functions combining per-iteration or per-stage ℓ2/ℓ1 fidelity, SSIM, and domain-specific consistency penalties. For supervised learning, target images, k-space, and/or sinograms provide ground truth references; for self-supervised or unsupervised regimes (DSFormer, DDSS, DMSM, re-visible loss), the network is trained via partitioned measurement splits and cross-appearance consistency, obviating the need for fully sampled data (Zhou et al., 2022, Zhou et al., 2023, Zhang et al., 24 Mar 2025, Zhang et al., 7 Jan 2025).

\mathcal{L}_{tot} = \lambda_1 \mathcal{L}_{img} + \lambda_2 \mathcal{L}_{grad} + \lambda_3 \mathcal{L}_{PDC}

where L_img is the appearance-consistency term (image domain), L_grad enforces gradient agreement, and L_PDC enforces partition data consistency in the measurement domain.
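The compound objective can be computed as below. This NumPy sketch uses ℓ1 penalties and simple finite-difference gradients as illustrative choices; the weight values and helper names are not from any specific cited paper.

```python
import numpy as np

def grads(img):
    """Finite-difference spatial gradients along each axis."""
    return np.diff(img, axis=0), np.diff(img, axis=1)

def total_loss(x_pred, x_ref, k_pred, y, mask, l1=1.0, l2=0.1, l3=1.0):
    """L_tot = l1*L_img + l2*L_grad + l3*L_PDC."""
    L_img = np.mean(np.abs(x_pred - x_ref))             # appearance consistency
    gxp, gyp = grads(x_pred)
    gxr, gyr = grads(x_ref)
    L_grad = np.mean(np.abs(gxp - gxr)) + np.mean(np.abs(gyp - gyr))
    L_pdc = np.mean(np.abs(k_pred[mask] - y[mask]))     # partition data consistency
    return l1 * L_img + l2 * L_grad + l3 * L_pdc
```

In the self-supervised setting, `mask` would select the held-out measurement partition rather than the full sampling set, so the data-consistency term supervises the network without fully sampled references.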

In unrolled optimization networks (LAMA/LAMA-Net, LEARN++), residual-style proximal steps with learnable CNN regularizers in both domains are coupled with sufficient-descent and gradient-norm criteria, ensuring algorithmic convergence and stability (Ding et al., 30 Jul 2025, Ding et al., 2023, Zhang et al., 2020).

5. Empirical Performance and Benchmarks

Dual-domain methods consistently outperform their single-domain or sequential dual-domain counterparts across a range of inverse problems:

| Network | Test Data | Metrics | Values | Reference |
|---|---|---|---|---|
| WW-net (IKIK) | MRI (multi-coil) | NRMSE / PSNR | 0.0215 / 33.5 dB | (Souza et al., 2019) |
| DuDoRNet (w/ T1) | MRI (T2) | PSNR / SSIM | 32.51 / 0.957 | (Zhou et al., 2020) |
| DD-CSENet | fastMRI | Image NMSE / SSIM | 2.28% / 0.998 | (Chen et al., 2022) |
| DSFormer (self-supervised) | IXI | PSNR / SSIM | 40.31 / 0.985 | (Zhou et al., 2022) |
| LAMA-Net | CT (64-view) | PSNR / SSIM | 44.58 / 0.986 | (Ding et al., 2023) |
| CAGAN | CT (45-view) | PSNR / SSIM | 41.98 / 0.963 | (Sun et al., 2022) |
| DMSM (self-supervised) | fastMRI | PSNR / SSIM | 39.15 / 0.976 | (Zhang et al., 24 Mar 2025) |

These networks not only lead quantitatively but also yield reconstructions with lower aliasing, sharper details, and greater clinical reliability, as confirmed in reader studies and signal fidelity evaluations.

6. Theoretical Guarantees and Algorithmic Stability

Recent work has introduced dual-domain networks with explicit convergence guarantees by unrolling alternating minimization algorithms with learnable residual blocks (LAMA-Net, iLAMA-Net) and rigorously smoothing nonsmooth nonconvex regularizers (Ding et al., 2023, Ding et al., 30 Jul 2025). These models empirically demonstrate strong robustness to structured and random perturbations and are provably convergent in the sense that all subsequential accumulation points are Clarke-stationary solutions:

\boxed{\, 0 \in \partial^c \Phi(x^*, z^*) \,}

where ∂^c denotes the Clarke subdifferential of the objective Φ.

This theoretical underpinning distinguishes such models from empirical black-boxes, enabling trustworthy deployment in clinical scenarios where diagnostic confidence and reliability are paramount.

7. Limitations, Extensions, and Future Directions

While dual-domain reconstruction networks dominate current performance tables, several open challenges and research directions remain:

  • Multi-coil and non-Cartesian extension: Most advanced dual-domain networks assume single-coil or Cartesian sampling. Robust integration of parallel imaging constraints and efficient NUFFT layers is required for broader adoption (Zhou et al., 2023, Gao et al., 2024).
  • Adaptive/learned data consistency and coil sensitivity: Embedding learned data projection modules or sensitivity maps can further improve performance but introduces extra complexity and potential overfitting.
  • Unified Transformer and Diffusion Models: Integration of spatial-frequency global attention and uncertainty quantification via diffusion processes or transformers is an active field, with evidence that dual-domain, physics-informed architectures substantially enhance both reconstruction quality and robustness (Zhang et al., 24 Mar 2025, Gao et al., 2023).
  • Reference-robust multi-contrast networks: The challenge of reconstructing when reference inputs are low-quality or missing is tackled by hybrid dual-domain models with dynamic reference fusion modules (AdaC2F, PaSS) (Gao et al., 2024).
  • Self-supervised and zero-shot learning: Fully self-supervised dual-domain models that eliminate dependence on paired full-data unlock practical deployment for rare or costly modalities (Zhou et al., 2022, Zhang et al., 7 Jan 2025).

A plausible implication is that further progress in dual-domain design will result from principled integration of domain-informed attention, data-adaptive consistency, and theoretically grounded unfoldings, culminating in interpretable, robust, and efficient architectures for clinical imaging and beyond.

