
Equivariant Image Dehazing (EID)

Updated 27 January 2026
  • Equivariant Image Dehazing (EID) is an unsupervised method combining physics-based haze modeling with self-supervised equivariant constraints to recover clear images.
  • It leverages an adversarial pseudo-hazing module along with dual regularization losses to maintain image consistency and preserve fine details without paired data.
  • Experimental results show EID outperforms state-of-the-art methods in both natural and scientific imaging, demonstrating its versatility and robustness.

Equivariant Image Dehazing (EID) is an unsupervised learning framework for restoring clear images from hazy observations by leveraging image symmetry and self-supervised equivariant constraints. EID uniquely combines a physics-based consistency term with a self-supervised equivariance prior and an adversarially trained pseudo-haze generator. This synthesis enables robust dehazing in both natural and scientific imaging domains without reliance on paired ground-truth data, outperforming existing state-of-the-art methods across multiple benchmarks (Wen et al., 20 Jan 2026).

1. Haze Formation and Physical Model

EID builds on the classical atmospheric scattering model, succinctly formulated as

I(x) = J(x)\, t(x) + A\,(1 - t(x)),

where I(x) is the hazy observation at pixel x, J(x) is the latent scene radiance, t(x) = e^{-\beta d(x)} denotes the transmission with attenuation coefficient \beta and scene depth d(x), and A \in \mathbb{R}^3 is the global atmospheric light. In practical natural datasets, \beta and A can often be estimated via priors such as the Dark Channel Prior (DCP). However, in scientific contexts (e.g., medical endoscopy, microscopy), the haze-generating operator \mathcal{H}(\cdot) is typically nontrivial and may deviate from the canonical model. EID addresses this by learning a pseudo-hazing operator G_h \approx \mathcal{H} via adversarial training, obviating the need to assume knowledge of A or \beta (Wen et al., 20 Jan 2026).
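As a concrete illustration, the scattering model can be simulated directly. The NumPy sketch below synthesizes a hazy image from a clean one; the depth map, \beta, and A here are arbitrary toy values chosen for illustration, not values from the paper:

```python
import numpy as np

def apply_haze(J, depth, beta=1.0, A=(0.9, 0.9, 0.9)):
    """Atmospheric scattering model: I = J * t + A * (1 - t), t = exp(-beta * d).
    J: clean image, H x W x 3 in [0, 1]; depth: per-pixel scene depth, H x W."""
    t = np.exp(-beta * depth)[..., None]    # transmission map, H x W x 1
    A = np.asarray(A).reshape(1, 1, 3)      # global atmospheric light
    return J * t + A * (1.0 - t)

# Toy scene: uniform grey radiance with depth increasing left to right.
J = np.full((4, 8, 3), 0.2)
depth = np.tile(np.linspace(0.0, 3.0, 8), (4, 1))
I = apply_haze(J, depth)
# Near pixels (d = 0, so t = 1) keep J; far pixels wash out toward A.
```

Note how haze density follows depth: at zero depth the observation equals the radiance, and at large depth it approaches the atmospheric light, which is exactly the ambiguity the dehazing network must invert.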

2. Equivariance and Consistency Regularization

Recognizing that certain group transformations G (e.g., rotations by multiples of 90°) leave the distribution of clean images invariant, EID formalizes the equivariance constraint. For a dehazing network f_\theta and transformation T_g,

f_\theta(T_g[I]) \approx T_g[f_\theta(I)],

ideally holds for all g \in G. Lacking access to clean images, EID applies f_\theta to hazy inputs y, generates the current clean estimate, re-applies the haze model, and enforces equivariance via the loss

\mathcal{L}_{ec} = \mathbb{E}_{y,\,g}\left\| f_\theta\left(\mathcal{H}(T_g[f_\theta(y)])\right) - T_g[f_\theta(y)] \right\|_2^2.

Parallel to this, haze-consistency regularization ensures that re-hazing a network output should faithfully reconstruct the original hazy image:

\mathcal{L}_{hc} = \mathbb{E}_{y}\left\| \mathcal{H}(f_\theta(y)) - y \right\|_2^2.

This dual regularization enforces inversion of \mathcal{H} and encourages the recovery of “null-space” (detail-lost) components (Wen et al., 20 Jan 2026).
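A minimal sketch of the two losses follows. The functions below are toy stand-ins chosen only to make the computation concrete: in EID, f_\theta is a U-Net and \mathcal{H} is a learned, frozen GAN generator, not the closed-form operators used here:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_theta(img):
    """Stand-in for the dehazing network f_theta (identity here; EID uses a U-Net)."""
    return img

def haze_op(img):
    """Stand-in for the haze operator H (EID learns it as a frozen GAN generator).
    Toy choice: constant transmission t = 0.6 and atmospheric light A = 0.9."""
    t = 0.6
    return img * t + 0.9 * (1.0 - t)

def T_g(img, g):
    """Group action: rotation by g * 90 degrees in the image plane."""
    return np.rot90(img, k=g, axes=(0, 1))

y = rng.random((8, 8, 3))        # hazy observation
x_hat = f_theta(y)               # current clean estimate

# Haze-consistency: re-hazing the estimate should reproduce the hazy input.
L_hc = np.mean((haze_op(x_hat) - y) ** 2)

# Equivariance: dehazing the re-hazed, rotated estimate should return
# the rotated estimate itself.
g = 1
L_ec = np.mean((f_theta(haze_op(T_g(x_hat, g))) - T_g(x_hat, g)) ** 2)
```

Both losses are nonzero for the identity stand-in, which is the point: minimizing them pushes f_\theta toward a genuine inverse of \mathcal{H} that commutes with the group action.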

3. Adversarial Pseudo-Hazing Module

To address the absence of closed-form haze operators in complex domains, EID introduces a pseudo-hazing generator G_h and a discriminator D_h in an unpaired, CycleGAN-style adversarial setup. The adversarial loss governing the realism of synthetic hazy images is:

\ell_{GAN} = \mathbb{E}_{y}[\log D_h(y)] + \mathbb{E}_{x}[\log(1 - D_h(G_h(x)))],

where x is sampled from clear images and y from hazy images, and G_c denotes the companion hazy-to-clean generator. A cycle-consistency loss ensures structure preservation:

\ell_{cyc} = \mathbb{E}_{y}\left[ \|G_h(G_c(y)) - y\| \right] + \mathbb{E}_{x}\left[ \|G_c(G_h(x)) - x\| \right].

Once adequately trained, G_h is frozen to serve as \mathcal{H} in subsequent EID training stages (Wen et al., 20 Jan 2026).
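The two objectives can be sketched as follows. The generators and discriminator here are toy closed-form stand-ins rather than the paper's U-Net/PatchGAN networks, and the cycle term is assumed to use an L1 penalty as in CycleGAN:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy closed-form stand-ins for the networks.
def G_h(x):                        # clean -> pseudo-hazy
    return np.clip(0.6 * x + 0.36, 0.0, 1.0)

def G_c(y):                        # hazy -> clean (reverse direction)
    return np.clip((y - 0.36) / 0.6, 0.0, 1.0)

def D_h(img):                      # scalar "realism" score in (0, 1)
    return sigmoid(4.0 * img.mean() - 2.0)

x = rng.random((8, 8, 3))          # sample from the clear-image set
y = rng.random((8, 8, 3))          # sample from the hazy-image set

# Adversarial loss: D_h should score real hazy images high, synthetic G_h(x) low.
l_gan = np.log(D_h(y)) + np.log(1.0 - D_h(G_h(x)))

# Cycle consistency in both unpaired directions (L1 norm).
l_cyc = np.mean(np.abs(G_h(G_c(y)) - y)) + np.mean(np.abs(G_c(G_h(x)) - x))
```

In training, D_h ascends \ell_{GAN} while G_h descends it, with \ell_{cyc} anchoring both generators to preserve scene structure across the clean/hazy domains.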

4. Training Objective and Architecture

The total EID loss function is a linear combination of consistency and equivariance losses:

\mathcal{L}_{total} = \mathcal{L}_{hc} + \lambda\,\mathcal{L}_{ec},

with \lambda = 0.1 used in the main experiments. The architecture consists of:

  • A 5-layer U-Net (f_\theta) for dehazing (input/output: H \times W \times 3).
  • Pseudo-hazing generator G_h: U-Net style, paired with a PatchGAN discriminator D_h.
  • G: 2D rotations by multiples of 90°; empirical results emphasize the importance of these transformations.
  • Optimizer: Adam; learning rate 1 \times 10^{-4}, halved every 20 epochs for 50 epochs total.
  • No use of ground truth pairs; random rotation as data augmentation.

Computation is performed on a single NVIDIA RTX 3090 (Wen et al., 20 Jan 2026).
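Under the hyperparameters listed above, the total objective and learning-rate schedule reduce to a few lines; this is a sketch with illustrative function names, not code from the paper:

```python
lam = 0.1  # weight lambda on the equivariance term (value from the main experiments)

def total_loss(l_hc, l_ec):
    """L_total = L_hc + lambda * L_ec."""
    return l_hc + lam * l_ec

def lr_at(epoch, base_lr=1e-4, step=20, gamma=0.5):
    """Adam learning rate: 1e-4 at the start, halved every 20 epochs (50 total)."""
    return base_lr * gamma ** (epoch // step)

schedule = [lr_at(e) for e in (0, 19, 20, 40, 49)]
# schedule -> [1e-4, 1e-4, 5e-5, 2.5e-5, 2.5e-5]
```

The small \lambda keeps haze-consistency as the dominant data-fitting term while the equivariance term acts as a regularizer on the otherwise unconstrained null-space components.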

5. Experimental Validation and Performance

EID’s efficacy spans both scientific and natural image dehazing tasks:

| Dataset | Training Split | Metrics | Comparative Performance |
|---|---|---|---|
| Cholec80-Haze (Endoscopy) | 1,100 hazy, 2,726 clear (unpaired) | NIQE↓, BRISQUE↓, FID↓ | EID achieves the best scores |
| Cell97 (Microscopy) | 49 clear, 48 hazy (training); 97 hazy (test) | NIQE↓, BRISQUE↓, FID↓ | EID achieves the best scores |
| RESIDE-OTS/HSTS (Natural) | 4,500 clear vs. 4,200 hazy (unpaired) | PSNR↑, SSIM↑ | EID outperforms nine state-of-the-art unsupervised methods |

Qualitative analysis (Figures 5–10 in the source) confirms that EID yields sharper edges, truer colors, and finer detail than prior-based methods (DCP, NLP) or other unpaired GAN approaches (CycleGAN, Cycle-Dehaze, D4+, UME-Net, YOLY, etc.) (Wen et al., 20 Jan 2026).

6. Methodological Implications, Constraints, and Extensions

EID’s equivariant regularization functions as “null-space” supervision: it encourages the network to infer fine structure not directly recoverable from the hazy input. The method does not require paired clear-hazy images or direct ground truth, increasing its applicability where labeled datasets are limited. Notable considerations include:

  • Real-world scenes may violate exact group symmetry assumptions, for example due to non-rigid motion or transformations beyond pure rotation.
  • The quality of the pseudo-hazing GAN influences overall EID performance; insufficiently trained GAN modules may propagate artifacts.
  • Training overhead is higher than that for direct feed-forward systems.

Potential extensions mentioned include the integration of domain-specific priors, the adoption of lightweight architectures for real-time applications, fusion of multimodal inputs (e.g., IR + RGB), and the application of the equivariant paradigm to other inverse imaging problems such as underwater enhancement and pansharpening (Wen et al., 20 Jan 2026).

7. Summary and Context within Dehazing Research

Equivariant Image Dehazing constitutes a principled unsupervised approach that unifies physics-grounded constraints and equivariant learning. By adversarially modeling the haze process and leveraging self-supervision via group symmetries, EID achieves state-of-the-art results in challenging scientific and natural domains without reliance on paired training data. The framework’s generality and empirical strength position it as a versatile tool and conceptual advance in the broader context of inverse imaging problems (Wen et al., 20 Jan 2026).
