MIU2Net: Deep Learning for Mass Inversion

Updated 27 January 2026
  • MIU2Net is a deep learning framework that performs high-fidelity weak lensing mass inversion using a nested U-Net architecture.
  • It leverages multi-scale feature extraction to accurately recover both pixel-level convergence maps and statistical power spectra under realistic observational conditions.
  • The method employs a composite loss function combining MSE and RAPS to ensure spatial precision and robust frequency-domain reconstruction.

MIU2Net is a deep learning framework developed for high-fidelity inversion of weak gravitational lensing shear fields into mass (convergence, $\kappa$) maps, specifically addressing challenges anticipated in large-scale surveys such as CSST and Euclid. MIU2Net leverages the nested U-structure of U²-Net to enable precise recovery of both the pixel-level convergence and the statistical power spectrum, outperforming prior methods in both root-mean-square error (RMSE) and frequency-domain accuracy under realistic survey conditions including shape noise, reduced-shear nonlinearity, and complex masking (G. et al., 20 Jan 2026).

1. Architecture: Nested U²-Net Design

MIU2Net is based on the two-level U-Net topology introduced as U²-Net (Qin et al., 2020), which consists of a deep outer U-Net where each encoder and decoder stage contains an inner U-Net "ResU-block". This design enables feature extraction at multiple scales and dramatically increases the network's effective receptive field while maintaining moderate parameter count.

  • Outer U: Six encoder and six decoder stages, connected via conventional long-range skip-connections.
  • ResU-blocks (Inner Us): Each comprises two down-sampling and two up-sampling layers with internal skip-connections, as well as a residual connection linking the block input directly to its output.
  • Side outputs: After each decoder stage, a $1 \times 1$ convolution produces an intermediate prediction, $\mathcal S_{\rm side}^{(m)}$, $m = 1,\dots,6$; these are fused via another $1 \times 1$ convolution to yield the final $\kappa_{\rm fuse}$ map.
  • Input/Output: MIU2Net takes two input channels (the noisy shear or reduced-shear components) and produces one output channel ($\hat\kappa$).
  • Skip connections: Extensive lateral skip paths both in the main U-structure (encoder-to-decoder) and inside each nested ResU-block.

This multi-scale nested scheme yields sensitivity to both large-scale filamentary lensing signals and small-scale cluster peaks.
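The nested design can be made concrete with a minimal PyTorch sketch of a single ResU-block with two down-sampling and two up-sampling stages, internal skips, and a residual input-to-output link. Channel widths, normalization, and layer choices here are illustrative assumptions, not the published implementation:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU, the basic unit inside each ResU-block."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class ResUBlock(nn.Module):
    """Inner U: two down- and two up-sampling stages with internal
    skip connections, plus a residual link from input to output."""
    def __init__(self, c_in, c_mid, c_out):
        super().__init__()
        self.proj = ConvBlock(c_in, c_out)       # residual branch
        self.enc1 = ConvBlock(c_out, c_mid)
        self.enc2 = ConvBlock(c_mid, c_mid)
        self.pool = nn.MaxPool2d(2)
        self.bottom = ConvBlock(c_mid, c_mid)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec2 = ConvBlock(2 * c_mid, c_mid)  # concat with enc2 skip
        self.dec1 = ConvBlock(2 * c_mid, c_out)  # concat with enc1 skip
    def forward(self, x):
        r = self.proj(x)
        e1 = self.enc1(r)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return d1 + r                            # residual connection

# Two shear channels in, same spatial size out:
x = torch.randn(1, 2, 64, 64)
y = ResUBlock(2, 8, 16)(x)
print(tuple(y.shape))  # (1, 16, 64, 64)
```

In the full network, six such blocks form the outer encoder and six the outer decoder, connected by the long-range skips described above.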

2. Mathematical Formulation of Weak-Lensing Mass Inversion

The method maps the observed shear, $\gamma$, to the surface mass density (convergence), $\kappa$, which describes the projected dark matter distribution.

  • Lensing equations:
    • $\kappa(\bm\theta) = \tfrac12 \nabla^2 \psi(\bm\theta)$
    • $\gamma_1 = \tfrac12(\partial_1^2 - \partial_2^2)\psi$, $\gamma_2 = \partial_1\partial_2\psi$, with $\gamma = \gamma_1 + i\gamma_2$
  • Forward convolution (Kaiser–Squires):

$$\gamma(\bm\theta) = \frac1\pi \int d^2\theta'\,\mathcal D(\bm\theta-\bm\theta')\,\kappa(\bm\theta')$$

with $\mathcal D(\bm\theta) = -1/(\theta_1 - i\theta_2)^2$.

  • Fourier inversion:

$$\tilde\kappa(\bm k) = \pi^{-1}\,\tilde\gamma(\bm k)\,\tilde{\mathcal D}^*(\bm k), \qquad \bm k \neq 0$$

  • Noise model:

$$\sigma_n^2 = \frac{\sigma_\epsilon^2}{2\,\theta_s^2\,n_g}$$

where $\sigma_\epsilon \approx 0.4$ is the rms galaxy ellipticity, $\theta_s$ is the pixel size in arcmin, and $n_g$ is the source galaxy density.

  • Reduced shear, $g = \gamma/(1-\kappa)$, is used in observations where $\kappa$ is not negligible.

This framework captures the ill-posedness introduced by shape noise, masking, and reduced shear nonlinearity.

3. Loss Function and Training Strategy

MIU2Net employs a composite loss to jointly optimize for both pixel-level accuracy and correct two-point (power spectrum) statistics:

  • Total loss:

$$\mathcal L = w_{\rm fuse}\,l_{\rm fuse} + \sum_{m=1}^{6} w_{\rm side}^{(m)}\,l_{\rm side}^{(m)}$$

with $w_{\rm fuse} = 1$ and $w_{\rm side}^{(m)} = 1$.

  • Each loss component:

$$l_{\{\rm fuse,\ side\}} = \alpha\,l_{\rm MSE} + \beta\,l_{\rm RAPS}$$

  • $l_{\rm MSE}$: pixelwise mean-square error, enforced over valid (unmasked) pixels.
  • $l_{\rm RAPS}$: mean-absolute error between the azimuthally averaged true and predicted convergence power spectra, up to radii $r < r_{\rm max}$, emphasizing accurate two-point statistics to multipoles $\ell \simeq 500$.
  • Typical settings: $\alpha = 1$, $\beta = 3$, $r_{\rm max} = 16$ pixels.
  • Side-output supervision: all intermediate decoder outputs are supervised through side losses.

This dual-objective training avoids the mode-collapse or spectral bias observed in MSE-only or MAP-only optimization, enabling the resulting $\kappa$ maps to simultaneously preserve peak structure and power-spectrum statistics.
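A minimal NumPy sketch of this composite objective, assuming a simple integer-radius binning for the RAPS term (the paper's exact binning and weighting may differ):

```python
import numpy as np

def raps(field, r_max=16):
    """Radially averaged power spectrum (RAPS) up to radius r_max pixels."""
    n = field.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y).astype(int)
    # azimuthal average: mean power in each integer-radius bin
    return np.array([power[r == i].mean() for i in range(1, r_max + 1)])

def composite_loss(pred, true, mask, alpha=1.0, beta=3.0, r_max=16):
    """alpha * masked MSE + beta * mean-absolute RAPS error."""
    valid = mask > 0
    l_mse = np.mean((pred[valid] - true[valid])**2)
    l_raps = np.mean(np.abs(raps(pred, r_max) - raps(true, r_max)))
    return alpha * l_mse + beta * l_raps

rng = np.random.default_rng(1)
true = rng.normal(size=(64, 64))
mask = np.ones((64, 64))
print(composite_loss(true, true, mask))  # 0.0 for a perfect prediction
```

The RAPS term penalizes frequency-domain errors that a pixelwise MSE alone would under-weight, which is what keeps the recovered power spectrum unbiased at small scales.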

4. Simulation Protocol, Data Preprocessing, and Observational Realism

Training and evaluation leverage cosmological $N$-body simulations and ray-tracing, augmented to match realistic survey systematics:

  • Simulation details: Four independent boxes (each $320\,h^{-1}$ Mpc on a side, $640^3$ particles), multiple redshift slices, yielding 6000 $(\gamma, \kappa)$ map pairs.
  • Data splits: 5000 for training, 1000 for validation.
  • Preprocessing:
    • On-the-fly addition of shape noise at $n_g = 20\,\mathrm{arcmin}^{-2}$.
    • Generation of complex masks (0–25% area coverage), produced as unions of disks.
    • Input maps cropped and downsampled to $256 \times 256$ pixels.
    • Data augmentations: rotations (multiples of 90°), horizontal/vertical flips.
  • Training regime:
    • AdamW optimizer (initial learning rate $10^{-4}$, cosine annealing to $10^{-10}$ over 2000 epochs, batch size 128).
    • The first epoch uses a Huber loss (threshold 50) for initial stability.
    • Training on a single NVIDIA A100 requires $\sim$1.5 min/epoch; the loss plateaus within 256 epochs, with 2000 epochs used for the final model.

This results in a network robust to the systematics and incompleteness inherent to actual wide-area survey data.
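The disk-union masking described above can be mimicked with a short NumPy routine; the number of disks and their radii are illustrative assumptions:

```python
import numpy as np

def random_disk_mask(n=256, max_frac=0.25, max_disks=30, seed=0):
    """Build a binary mask (1 = valid pixel) as a union of random disks,
    stopping before the masked area exceeds max_frac of the map."""
    rng = np.random.default_rng(seed)
    mask = np.ones((n, n), dtype=float)
    y, x = np.indices((n, n))
    for _ in range(max_disks):
        cy, cx = rng.integers(0, n, size=2)
        radius = rng.integers(3, n // 10)
        trial = mask.copy()
        trial[(y - cy)**2 + (x - cx)**2 <= radius**2] = 0.0
        if 1.0 - trial.mean() > max_frac:  # would exceed the area budget
            break
        mask = trial
    return mask

mask = random_disk_mask()
masked_frac = 1.0 - mask.mean()
print(0.0 <= masked_frac <= 0.25)  # True: coverage stays within budget
```

Applying such masks (and shape noise) on the fly during training exposes the network to a fresh realization of the systematics at every epoch, rather than a fixed corrupted dataset.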

5. Quantitative Performance Evaluation

MIU2Net is evaluated against several mass inversion methodologies:

  • Root-mean-square error (RMSE):

$$\mathrm{RMSE}(\sigma) = \sqrt{\frac{\sum_{i\in\mathrm{valid}} \left[Z_\sigma(i) - X_\sigma(i)\right]^2}{\sum_{i\in\mathrm{valid}} Z_\sigma(i)^2}}$$

where $Z_\sigma$ and $X_\sigma$ are the true and predicted $\kappa$ maps smoothed with a Gaussian of FWHM $\sigma$, evaluated within valid (unmasked) pixels.
    • At $\sigma = 0$ (no smoothing): MIU2Net achieves 83% lower RMSE than Kaiser–Squires (KS), 5% lower than U-Net, and results comparable to Wiener filtering (WF) and MCALens.
    • At $\sigma = 1'$: a 34% improvement over KS and 38% over U-Net.

  • Recovered convergence power spectrum $P(\ell)$:
    • MIU2Net reconstructs the power spectrum with 4% error up to $\ell \approx 500$, far exceeding KS/WF ($\gtrsim 20\%$ error) and U-Net ($\sim 37\%$).
  • Additional map statistics:
    • Dynamic range: accurate recovery of minima and maxima (no over-smoothing).
    • Peak location/amplitude: centroid errors of $\sim$1 pixel, amplitude bias $< 5\%$.
    • Convergence PDF: correct log-normality, matching the true PDF including the high-$\kappa$ tails.
  • These accuracies persist under additional shape noise and moderate ($\leq 20\%$) masking, and MIU2Net generalizes to a different cosmology without retraining (power-spectrum error $< 11\%$ at $\ell \approx 500$).
Method        | RMSE ($\sigma = 1'$) | $P_\kappa(\ell \simeq 500)$ error | Peak accuracy
MIU2Net       | lowest               | 4%                                | $\sim$1 px, $< 5\%$ bias
U-Net         | $\sim$38% higher     | $\sim$37%                         | inferior
KS/WF/MCALens | much higher          | $\gtrsim 20\%$                    | inferior
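The normalized RMSE metric can be sketched in NumPy, here with an FFT-based Gaussian kernel standing in for the FWHM-$\sigma$ smoothing (a sketch, not the paper's exact evaluation code):

```python
import numpy as np

def gaussian_smooth(field, fwhm):
    """Gaussian smoothing via FFT; fwhm in pixels (fwhm=0 -> no smoothing)."""
    if fwhm == 0:
        return field
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    n = field.shape[0]
    k = np.fft.fftfreq(n)
    k_sq = k[:, None]**2 + k[None, :]**2
    kernel = np.exp(-2.0 * (np.pi * sigma)**2 * k_sq)
    return np.fft.ifft2(np.fft.fft2(field) * kernel).real

def normalized_rmse(true, pred, mask, fwhm=0):
    """RMSE(sigma) = sqrt( sum (Z - X)^2 / sum Z^2 ) over valid pixels."""
    z = gaussian_smooth(true, fwhm)
    x = gaussian_smooth(pred, fwhm)
    valid = mask > 0
    return np.sqrt(np.sum((z[valid] - x[valid])**2) / np.sum(z[valid]**2))

rng = np.random.default_rng(2)
true = rng.normal(size=(64, 64))
mask = np.ones((64, 64))
print(normalized_rmse(true, true, mask))                 # 0.0: perfect map
print(normalized_rmse(true, np.zeros_like(true), mask))  # 1.0: all-zero map
```

The normalization by $\sum Z_\sigma^2$ makes the metric scale-free: a trivial all-zero prediction scores exactly 1, so values well below 1 indicate genuine reconstruction.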

6. Advantages and Methodological Significance

MIU2Net introduces several methodological advantages for cosmic shear mass mapping:

  • Receptive Field: The nested U-structure combines broad spatial context (for reconstructing diffuse filaments) with localized sensitivity (cluster and peak resolution) at a moderate parameter cost.
  • Loss Engineering: The joint MSE + RAPS loss surmounts the conventional bias–variance/spectral trade-off, enabling the model to deliver physically meaningful $\kappa$ maps that are simultaneously spatially accurate and statistically consistent.
  • Observational Robustness: By simulating noise, masking, and nonlinearities within the input pipeline, MIU2Net performs one-step denoising and inpainting that avoids the need for iterative or hand-tuned post-processing steps.

A plausible implication is that this end-to-end strategy positions MIU2Net as an enabling technology for cosmological parameter extraction directly from reconstructed convergence fields in next-generation surveys (G. et al., 20 Jan 2026).

7. Context, Limitations, and Prospects

MIU2Net demonstrates significant progress over traditional mass inversion (Kaiser–Squires, Wiener Filtering), inpainted solutions (MCALens), and even prior learning-based reconstructors (U-Net, DeepMass) by achieving both high spatial and frequency-domain fidelity under survey-realistic conditions. It is particularly suited for next-generation datasets with non-uniform coverage, complex masking, and high noise, where ad-hoc smoothing and inpainting often compromise scientific signal.

Current limitations include the absence of explicit per-pixel uncertainty estimation and the dependence on simulation-based training; further validation against real survey data and extension to full posterior inference remain desirable future directions.

MIU2Net represents a substantial advance in weak gravitational lensing mass inversion, providing a foundation for extracting dark matter maps and cosmological information from massive survey datasets with unprecedented reliability and statistical consistency (G. et al., 20 Jan 2026).
