
$Σ$-net: Systematic Evaluation of Iterative Deep Neural Networks for Fast Parallel MR Image Reconstruction

Published 18 Dec 2019 in eess.IV, cs.CV, and cs.LG | (1912.09278v1)

Abstract: Purpose: To systematically investigate the influence of various data consistency layers, (semi-)supervised learning and ensembling strategies, defined in a $Σ$-net, for accelerated parallel MR image reconstruction using deep learning. Theory and Methods: MR image reconstruction is formulated as a learned unrolled optimization scheme with a Down-Up network as regularization and varying data consistency layers. The different architectures are split into sensitivity networks, which rely on explicit coil sensitivity maps, and parallel coil networks, which learn the combination of coils implicitly. Different content and adversarial losses, a semi-supervised fine-tuning scheme and model ensembling are investigated. Results: Evaluated on the fastMRI multicoil validation set, architectures involving raw k-space data outperform image enhancement methods significantly. Semi-supervised fine-tuning adapts to new k-space data and provides, together with reconstructions based on adversarial training, the visually most appealing results although quantitative quality metrics are reduced. The $Σ$-net ensembles the benefits from different models and achieves similar scores compared to the single state-of-the-art approaches. Conclusion: This work provides an open-source framework to perform a systematic wide-range comparison of state-of-the-art reconstruction approaches for parallel MR image reconstruction on the fastMRI knee dataset and explores the importance of data consistency. A suitable trade-off between perceptual image quality and quantitative scores is achieved with the ensembled $Σ$-net.

Citations (24)

Summary

  • The paper's main contribution is the introduction of the Σ-net architecture that systematically evaluates iterative deep neural networks for MRI reconstruction.
  • It employs unrolled optimization schemes and compares various data consistency layers (GD, PG, VS) to balance quantitative and perceptual image quality.
  • The model ensembling strategy mitigates individual network errors, achieving superior texture fidelity and robust performance at high acceleration factors.

Overview of "Σ-net: Systematic Evaluation of Iterative Deep Neural Networks for Fast Parallel MR Image Reconstruction"

Introduction

Magnetic Resonance Imaging (MRI), pivotal in medical diagnostics due to its non-invasive nature and detailed imaging capabilities, suffers from lengthy acquisition times that limit its practicality in various clinical settings. Conventional acceleration techniques, such as Parallel Imaging (PI) combined with Compressed Sensing (CS), have improved acquisition speed but pose challenges related to effective regularization and hyper-parameter tuning. The advent of deep learning has proposed promising alternatives, where MRI reconstruction is framed as an inverse problem solved by learned unrolled optimization methods.

Hammernik et al. present a comprehensive evaluation of iterative deep neural networks (DNNs) designed to optimize MRI reconstruction with the introduction of the Σ-net architecture. This paper emphasizes examining various data consistency layers and ensembling strategies, leveraging the publicly available fastMRI dataset, to ensure robust comparisons and reliable conclusions across diverse model configurations in MRI reconstruction.

Methodology

The core of the proposed methodology is the application of unrolled optimization schemes using Down-Up Networks (DUNs) as regularization tools within MR image reconstruction. The DUN structure, characterized by its memory-efficient down-up convolutional operations, facilitates handling large volumes of data while allowing detailed feature extraction from varying scales.
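In an unrolled scheme of this kind, each iteration alternates a learned regularization step with a data-consistency gradient step. A minimal single-coil sketch is shown below; the function name, step size, iteration count, and the stand-in regularizer are illustrative assumptions (the paper uses a Down-Up network and multicoil operators):

```python
import numpy as np

def unrolled_gd_recon(y, mask, regularizer, n_iter=8, lam=0.5):
    """Unrolled gradient-descent reconstruction (single-coil sketch).

    y: undersampled k-space; mask: sampling mask (1 = acquired);
    regularizer: stand-in for a learned network mapping images to
    a residual correction (the paper's Down-Up network plays this role).
    """
    x = np.fft.ifft2(y)  # zero-filled initial reconstruction
    for _ in range(n_iter):
        # data-consistency gradient: A^H (A x - y) with A = mask * FFT
        grad = np.fft.ifft2(mask * np.fft.fft2(x) - y)
        x = x - lam * grad - regularizer(x)
    return x
```

With a fully sampled mask and a zero regularizer, the zero-filled image already satisfies the data term, so the loop is a fixed point; with undersampling, the learned regularizer supplies the missing information.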

The comparison involves two principal network architectures: Sensitivity Networks (SNs), which utilize explicit coil sensitivity maps, and Parallel Coil Networks (PCNs), which implicitly learn coil combinations. These architectures undergo systematic evaluation across varying data consistency (DC) layers modeled via Gradient Descent (GD), Proximal Gradient (PG), and Variable Splitting (VS) schemes.
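Concretely, a sensitivity network applies a SENSE-style forward operator coil by coil, while PG- and VS-style layers perform a proximal data-consistency step that blends the network's k-space with the measured samples. The sketch below makes simplifying assumptions: orthonormal FFTs, Cartesian sampling, and a single-coil closed-form proximal step rather than the paper's multicoil implementation; all function names are illustrative:

```python
import numpy as np

def sense_forward(x, smaps, mask):
    """A x: weight the image by each coil sensitivity, FFT, undersample."""
    return mask * np.fft.fft2(smaps * x, axes=(-2, -1), norm="ortho")

def sense_adjoint(y, smaps, mask):
    """A^H y: inverse FFT per coil, combine with conjugate sensitivities."""
    imgs = np.fft.ifft2(mask * y, axes=(-2, -1), norm="ortho")
    return np.sum(np.conj(smaps) * imgs, axis=0)

def prox_dc(x, y, mask, lam):
    """Proximal data consistency (single-coil Cartesian): at sampled
    locations, average the network k-space with the measurements."""
    k = np.fft.fft2(x, norm="ortho")
    k_dc = np.where(mask > 0, (k + lam * y) / (1 + lam), k)
    return np.fft.ifft2(k_dc, norm="ortho")
```

As lam grows, `prox_dc` enforces hard consistency with the acquired samples; small lam trusts the network output more, which is the knob the PG and VS layers expose.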

Parallel investigations into distinct learning paradigms incorporate both supervised learning (using content and adversarial losses) and semi-supervised fine-tuning, targeting adaptation to unseen k-space data. Ultimately, model ensembling aggregates the advantages of multiple configurations into a unified framework denoted as Σ-net.
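The ensembling step can be as simple as averaging the magnitude reconstructions of the individual models. A hedged sketch follows; uniform weights are an assumption of this illustration, not necessarily the paper's exact weighting:

```python
import numpy as np

def sigma_net_ensemble(recons, weights=None):
    """Combine reconstructions from several models by a (weighted)
    average of magnitude images, a simple stand-in for ensembling."""
    mags = np.stack([np.abs(r) for r in recons])  # (n_models, H, W)
    if weights is None:
        weights = np.full(len(recons), 1.0 / len(recons))
    return np.tensordot(weights, mags, axes=1)    # contract model axis
```

Averaging tends to cancel model-specific artifacts while retaining structures that all members agree on, which matches the reported balance between texture detail and quantitative scores.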

Results

The study articulates several key findings:

  • Quantitative Performance: Reconstruction networks employing any of the DC layers consistently outperform image enhancement networks. The PG layer in particular yields superior results in shared-parameter settings compared to GD and VS.
  • Visual Quality: Adversarial training significantly enhances texture and anatomical fidelity across the images, albeit at the expense of decreased quantitative metrics—a trade-off acknowledged in previous GAN-related studies.
  • Model Ensembling: Σ-net achieves a balanced performance by mitigating individual model inaccuracies through collective averaging, demonstrating both superior texture detail and robust quantitative scores.

The statistical significance of these results is confirmed through rigorous tests, underscoring the broad applicability of supervised deep learning models in MR image reconstruction, even with high acceleration factors.
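The quantitative scores referenced throughout are the standard fastMRI evaluation metrics. Two of them, NMSE and PSNR, can be computed as in the sketch below (a plain restatement of the usual definitions; the official fastMRI evaluation code is the authoritative reference):

```python
import numpy as np

def nmse(gt, pred):
    """Normalized mean squared error: ||gt - pred||^2 / ||gt||^2."""
    return np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2

def psnr(gt, pred):
    """Peak signal-to-noise ratio in dB, peak taken from ground truth."""
    mse = np.mean((gt - pred) ** 2)
    return 10 * np.log10(gt.max() ** 2 / mse)
```

The observed trade-off means adversarially trained models can look sharper while scoring worse on exactly these pixel-wise metrics.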

Discussion

This paper advances the understanding of deep learning's impact on MR image reconstruction by systematically enumerating the influence of model configurations and training strategies. While reconstruction networks demonstrate superior anatomical preservation compared to enhancement models, the loss of detail at high acceleration factors indicates potential limits in static MRI applications, necessitating further exploration into dynamic scenarios.

Furthermore, the paper advocates for standardized benchmarks, such as those provided by the fastMRI dataset, to enable reproducible and comparable research outcomes. It also highlights future directions, emphasizing the need for improved evaluation metrics sensitive to localized anatomical deviations.

Conclusion

Hammernik et al.'s Σ-net represents a significant step forward in the utilization of deep learning for MRI reconstruction, offering not only a unifying framework for understanding the interaction of network design and DC strategies but also establishing a benchmark path for future AI developments in medical imaging. The implications further suggest enhancements in training protocols and evaluation criteria, striving for an optimal balance between perceptual quality and quantitative metrics in MR image applications.
