
Radial k‑Space Patching Strategy

Updated 2 February 2026
  • Radial k‑space patching is a strategy for segmenting MRI frequency data into concentric rings or angular bins that respect the nonuniform, directional properties of the signal.
  • It leverages the inherent radial sampling pattern to isolate low‐frequency contrast from high‐frequency details, thereby improving artifact correction and reconstruction quality.
  • Integration with transformer-based deep learning models enhances computational efficiency, reduces VRAM usage, and increases robustness under high acceleration factors.

Radial k-space patching strategies refer to methodologies for segmenting and representing raw k-space data from radial MRI acquisitions in ways that optimally respect the physical, statistical, and geometric properties of the frequency domain. Unlike Cartesian k-space sampling, radial strategies must account for the global, non-uniform, and directionally structured distribution of signal energy and phase, characteristics intrinsic to the acquisition and reconstruction of MRI data. These strategies form the basis for both classical artifact correction and state-of-the-art deep learning models that process data directly in k-space, without requiring traditional image reconstruction.

1. Motivation: Non-Cartesian Structure of k-Space

In MRI, raw measurement data are acquired in the Fourier domain, known as k-space. Every k-space sample $(k_x, k_y)$ encodes global information that is distributed across the entire spatial image. This non-locality is accentuated in radial sampling, where the sampling trajectory passes through the center (low-frequency, high-SNR region) multiple times with "spokes" oriented at varying angles. The natural decay of signal energy away from the center, with low frequencies encoding image contrast and global structure and high frequencies encoding fine detail, renders naive patching strategies—such as dividing the k-space into Cartesian grid squares—physically and statistically suboptimal. Cartesian grid patches arbitrarily mix low- and high-frequency content, ignore the radial decay of energy, and fail to model the inherent angular structure of radial data (Rempe et al., 26 Jan 2026).

2. Formal Definitions of Radial Patching Schemes

Radial k-space patching strategies can be categorized into two main classes:

  1. Radial ring partitioning: Used for 2D or stack-of-2D Cartesianized k-space arrays, this scheme divides k-space into concentric rings (or annuli) of constant or approximately constant area.
  • For a complex-valued $H \times W$ k-space grid, each coordinate $(x, y) \in \{-\lfloor H/2\rfloor, \dotsc, \lfloor H/2\rfloor\} \times \{-\lfloor W/2\rfloor, \dotsc, \lfloor W/2\rfloor\}$ is assigned a radial coordinate $r(x, y) = \sqrt{x^2 + y^2}$.
  • The full set of radii over the $H \cdot W$ samples is sorted to form a sequence $R$.
  • For a chosen patch size $P$, the number of rings is $N = (H \cdot W) / P$.
  • The $i$th ring is defined as $P_i = \{(x, y) \mid r_i \le r(x, y) < r_{i+1}\}$, ensuring each ring contains $P$ samples (Rempe et al., 26 Jan 2026).
  2. Angular binning (radial spoke segmentation): Used in raw trajectory-based settings, especially in reconstruction or artifact correction tasks.
  • The set $\{s_1, \dots, s_M\}$ of $M$ radial spokes, each at angle $\theta_m$, is partitioned into $N$ contiguous angular bins $\mathcal{S}_i = \{ m \mid \theta_m \in [\theta_{i-1}, \theta_i) \}$, with $0 = \theta_0 < \theta_1 < \dots < \theta_N = \pi$ (or $2\pi$) (Mani et al., 2018).
  • Each angular bin defines a segment for local phase correction or low-rank modeling.

Additionally, the projection-based approach treats each acquired radial spoke as an individual 1D patch $p_k \in \mathbb{C}^L$, where $L$ is the number of frequency-encoding samples per spoke (Gao et al., 2022).
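The ring-partition rule above can be sketched in a few lines of NumPy; the function name and the centered-grid convention are illustrative, not taken from the cited implementations.

```python
import numpy as np

def radial_ring_partition(H, W, P):
    """Partition an H x W k-space grid into concentric rings of P samples each.

    Returns an (H, W) integer map: ring 0 holds the P samples closest to the
    k-space center, ring 1 the next P, and so on. Assumes P divides H * W.
    """
    assert (H * W) % P == 0, "patch size must divide the number of samples"
    # Centered coordinates and radial distance r(x, y) = sqrt(x^2 + y^2).
    x = np.arange(H) - H // 2
    y = np.arange(W) - W // 2
    r = np.sqrt(x[:, None] ** 2 + y[None, :] ** 2).ravel()
    # Sort all samples by radius; consecutive blocks of P samples form rings.
    order = np.argsort(r, kind="stable")
    ring = np.empty(H * W, dtype=np.int64)
    ring[order] = np.arange(H * W) // P
    return ring.reshape(H, W)

rings = radial_ring_partition(64, 64, 256)  # N = (64 * 64) / 256 = 16 rings
```

Every ring then contains exactly $P$ samples of comparable radius, matching the definition of $P_i$ above.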

3. Algorithms and Embedding into Deep Architectures

The translation from raw k-space to a token sequence suitable for deep learning involves several steps, especially for Transformer-based models on radial data. Three representative pipelines appear in the cited literature.

Radial ring embedding (kViT; Rempe et al., 26 Jan 2026):
  1. Compute the radial distance $r(x, y)$ for every $(x, y)$.
  2. Flatten and sort all samples by $r$.
  3. Partition the sorted list into contiguous segments (rings) of size $P$.
  4. (Optionally) apply learnable complex-valued weights to each ring.
  5. Flatten each ring to a vector and pass through a complex linear embedding.
  6. Add a complex positional encoding (e.g., Rotary Position Embedding or learnable complex embeddings).
  7. Stack all embedded ring tokens as input to a complex-valued Vision Transformer; all layers operate in $\mathbb{C}$.
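Steps 4-6 of the ring pipeline above can be sketched with plain NumPy arrays; the dimensions, the learnable complex positional encoding (used here in place of RoPE), and all variable names are illustrative assumptions, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, d = 16, 256, 64  # ring tokens, samples per ring, embedding dim (illustrative)

# Hypothetical learnable parameters: a complex linear embedding matrix and
# learnable complex positional encodings, one per ring token.
W_embed = (rng.standard_normal((P, d)) + 1j * rng.standard_normal((P, d))) / np.sqrt(P)
pos = rng.standard_normal((N, d)) + 1j * rng.standard_normal((N, d))

def embed_rings(ring_samples):
    """Map N rings of P complex k-space samples to N complex tokens in C^d."""
    tokens = ring_samples @ W_embed  # complex linear embedding, shape (N, d)
    return tokens + pos              # add complex positional encoding

kspace_rings = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
tokens = embed_rings(kspace_rings)   # complex tokens ready for a complex-valued ViT
```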
Projection-based spoke tokens (PKT; Gao et al., 2022):
  1. Each spoke $p_k$ is inverse Fourier transformed along $k_x$ to obtain a complex “projection”.
  2. Real and imaginary parts of the projection are concatenated (dimensionality $2L$).
  3. Sequences of $N_{\mathrm{in}}$ acquired spokes are embedded as Transformer tokens.
  4. Learnable sinusoidal positional codes are added.
  5. An encoder-decoder Transformer predicts missing spokes, which are then reassembled to reconstruct full k-space.
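A minimal sketch of the projection-token construction (steps 1-3), assuming NumPy FFT conventions; the centering/shift convention is an assumption not specified in the source.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N_in = 512, 100  # samples per spoke and spokes per window, as reported

spokes = rng.standard_normal((N_in, L)) + 1j * rng.standard_normal((N_in, L))

# Step 1: 1D inverse FFT along the frequency-encoding axis -> complex projection.
# (ifftshift assumes each spoke is stored with k = 0 at its center.)
projections = np.fft.ifft(np.fft.ifftshift(spokes, axes=-1), axis=-1)

# Step 2: concatenate real and imaginary parts -> real tokens of dimension 2L.
tokens = np.concatenate([projections.real, projections.imag], axis=-1)

# Step 3: each row is now one Transformer token; the window holds N_in of them.
```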
Angular-bin artifact correction (Mani et al., 2018):
  1. Spokes are grouped by angular bin.
  2. For each group, an initial k-space estimate is reconstructed (typically via adjoint NUFFT).
  3. Small sliding windows in k-space are extracted from each group and assembled into block Hankel matrices.
  4. The multi-block Hankel matrix is regularized via nuclear norm minimization (low-rank prior), alternating with data-consistency updates tied to the original measured spokes.
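The binning and Hankel-construction steps can be sketched as follows; the bin-edge and window conventions are illustrative, and the nuclear-norm and data-consistency iterations are omitted.

```python
import numpy as np

def angular_bins(theta, N, full=np.pi):
    """Assign each spoke angle in [0, full) to one of N contiguous angular bins."""
    edges = np.linspace(0.0, full, N + 1)
    return np.clip(np.searchsorted(edges, theta, side="right") - 1, 0, N - 1)

def block_hankel(K, w):
    """Stack every w x w sliding window of a 2D k-space estimate as a matrix row."""
    n1, n2 = K.shape
    rows = [K[i:i + w, j:j + w].ravel()
            for i in range(n1 - w + 1) for j in range(n2 - w + 1)]
    return np.array(rows)  # shape ((n1 - w + 1) * (n2 - w + 1), w * w)

theta = np.linspace(0.0, np.pi, 64, endpoint=False)  # 64 uniformly spaced spokes
bins = angular_bins(theta, N=4)                      # 16 spokes per angular bin
Hk = block_hankel(np.arange(36.0).reshape(6, 6), w=3)
```

In the full method, each per-bin Hankel block would then enter the alternating low-rank / data-consistency optimization described above.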

4. Theoretical Justification, Empirical Evidence, and Ablation Results

Radial patching is driven by the spectral and statistical nonuniformity of k-space:

  • Energy alignment: Rings group points of similar $|r|$, and thus similar expected signal and noise characteristics. Central rings prioritize low-frequency, high-SNR samples, while outer rings capture edge-detail frequencies (Rempe et al., 26 Jan 2026).
  • Global coherence: Radial rings have nonlocal, wraparound support, leveraging self-attention’s ability to model long-range dependencies in a way that grid patches cannot emulate.
  • Empirical ablation: On fastMRI prostate data, performance (AUROC) drops substantially when deviating from the optimized ring count (e.g., from 0.782 at 16 rings to 0.750 with 8 rings and 0.762 with 32 rings; further decreases are observed with grid patching) (Rempe et al., 26 Jan 2026).
  • Artifact suppression: In projection-based Transformer reconstruction, organizing input as spoke-sequence tokens enables the network to capitalize on the temporal and angular redundancy of the acquisition, yielding markedly improved PSNR and SSIM compared to conventional U-Net methods (Gao et al., 2022).

5. Impact on Computational Efficiency and Robustness

Radial patching offers substantial benefits in both deep learning classification and MRI reconstruction:

  • VRAM savings: Radial ring-based patching reduces the number of tokens processed by the transformer backbone (e.g., $N=16$ rings vs. 64 or 256 grid patches), enabling memory usage reductions of up to $68\times$ (kViT at 0.52 GB vs. EfficientNet-B0 at 35.4 GB in certain scenarios) (Rempe et al., 26 Jan 2026).
  • Undersampling robustness: In high-acceleration scenarios (e.g., $16\times$ acceleration), kViT with radial rings preserves a higher AUROC ($0.770 \pm 0.030$) than leading image-based baselines (ResNet50 falls to $0.703 \pm 0.043$; a grid-based ViT falls further to $0.563 \pm 0.062$).
  • Artifact correction: In low-rank multi-block Hankel reconstructions, angular segment patching enables successful removal of phase-driven artifacts without explicit trajectory calibration, yielding artifact-free images under realistic gradient errors (Mani et al., 2018).
  • Transformer acceleration: Operating directly on non-Cartesian tokens minimizes repeated regridding and expensive NUFFT operations, or defers them to inference only, substantially accelerating training and inference (Gao et al., 2022).
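The token-count savings compound quadratically in self-attention; a back-of-envelope check using the 16-ring vs. 256-grid-patch configurations cited above (the quadratic attention scaling is a standard transformer property, not specific to the cited work):

```python
n_grid, n_rings = 256, 16  # grid-patch vs. ring token counts from the text

# Attention maps have one entry per token pair, so cost scales with N^2.
attention_ratio = (n_grid ** 2) / (n_rings ** 2)
print(attention_ratio)  # -> 256.0
```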

6. Implementation Parameters and Data Augmentation

Studies provide key implementation details and training strategies:

  • Radial rings (kViT) (Rempe et al., 26 Jan 2026): Typical configuration $N=16$ rings, $P \approx 3800$ samples per ring, embedding dimension $d=256$, 6 layers, 16 attention heads, dropout 0.1.
  • Projection-based Transformers (Gao et al., 2022): Spoke tokens of dimension 1024 (from 512 complex samples), sequences of 100 input tokens per window, 6 encoder/decoder blocks, 16 heads.
  • Artifact correction (Mani et al., 2018): Angular bins $N = 4, 6, 8$, window sizes $w \times w$ for Hankel construction, low-rank penalty ($\lambda$) and data-consistency tradeoff ($\mu$) set via empirical tuning.
  • Data augmentation: For generalizability, standard image domain augmentations are performed with appropriate FFT/IFFT conversions; in the projection-based paradigm, multiple anatomical regions, slices, and coil images are combined, with windowed spoke sequences producing up to 193,000 training samples from only a handful of subjects (Gao et al., 2022).
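The reported hyperparameters can be collected into a configuration sketch; the dictionary keys are invented for illustration, while the values are the ones listed above.

```python
# Hypothetical config dicts; keys are invented, values as reported in the papers.
KVIT = dict(n_rings=16, samples_per_ring=3800,  # P is approximate in the source
            embed_dim=256, n_layers=6, n_heads=16, dropout=0.1)
PKT = dict(token_dim=1024, complex_samples=512, window_tokens=100,
           n_blocks=6, n_heads=16)
HANKEL = dict(angular_bins=(4, 6, 8))           # lambda, mu tuned empirically
```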

7. Limitations, Generalizability, and Future Directions

Radial k-space patching provides a physics-informed foundation for both reconstruction and direct k-space learning, yet several limitations and open avenues remain:

  • Absence of explicit k-space consistency modules: Certain deep learning frameworks (e.g., PKT (Gao et al., 2022)) rely on purely data-driven inference and lack explicit data-consistency projections, which could be beneficial in low-SNR or highly undersampled scenarios.
  • Ring versus spoke granularity: The ideal number of rings or angular bins remains a function of sampling density, artifact distribution, and downstream task; empirical tuning is required (Rempe et al., 26 Jan 2026, Mani et al., 2018).
  • Extension to other non-Cartesian trajectories: While the radial paradigm is dominant, spiral and variable-density sampling schemes can, in principle, be addressed by analogous partitioning—though the geometric rationale must be re-examined (Gao et al., 2022).
  • Loss functions and supervision: Most studies to date rely on pixel-wise MSE/$\ell_2$ losses; perceptual or adversarial objectives may offer quality improvements.

Radial k-space patching remains a cornerstone of both algorithmic and deep learning innovation in non-Cartesian MRI, providing structural alignment with MRI physics, enabling robust inference under practical constraints, and setting the stage for future integration of physics-based and data-driven paradigms.
