WKGM: Weight-K-space Generative Model for Parallel Imaging Reconstruction
Abstract: Deep learning based parallel imaging (PI) has made great progress in recent years in accelerating magnetic resonance imaging (MRI). Nevertheless, existing methods still have notable limitations in robustness and flexibility. In this work, we propose a method that explores k-space domain learning via robust generative modeling for flexible calibration-less PI reconstruction, coined the weight-k-space generative model (WKGM). Specifically, WKGM is a generalized k-space domain model in which k-space weighting technology and a high-dimensional space augmentation design are efficiently incorporated into score-based generative model training, resulting in good and robust reconstructions. In addition, WKGM is flexible and can therefore be synergistically combined with various traditional k-space PI models, making full use of the correlation between multi-coil data and realizing calibration-less PI. Even though our model was trained on only 500 images, experimental results with varying sampling patterns and acceleration factors demonstrate that WKGM can attain state-of-the-art reconstruction results with the well-learned k-space generative prior.
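The abstract refers to a k-space weighting technology for generative model training. The sketch below is a hypothetical illustration of the general idea only, not the paper's actual weighting function: k-space magnitudes are heavily concentrated near the center, so a radial weight can flatten the dynamic range before a network sees the data, and the weighting is invertible so the original k-space can be recovered afterward. The function `kspace_weight` and its exponent `p` are assumptions introduced for this example.

```python
import numpy as np

def kspace_weight(shape, p=1.0, eps=1e-6):
    """Hypothetical radial weight (not the paper's formula): small near the
    k-space center, larger at high frequencies, flattening dynamic range."""
    ky = np.fft.fftfreq(shape[0])
    kx = np.fft.fftfreq(shape[1])
    r = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    return (r + eps) ** p

# Example: weight an image's k-space, then invert the weighting exactly.
img = np.random.rand(64, 64)
k = np.fft.fft2(img)                 # raw k-space: dominated by low frequencies
w = kspace_weight(k.shape)
k_weighted = k * w                   # a model would be trained on this
k_recovered = k_weighted / w         # weighting is invertible by construction
assert np.allclose(k_recovered, k)
```

Because the weight is elementwise and nonzero, any reconstruction produced in the weighted domain can be mapped back to ordinary k-space by a single division before the final inverse FFT.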