Equivariant Image Dehazing (EID)
- Equivariant Image Dehazing (EID) is an unsupervised method combining physics-based haze modeling with self-supervised equivariant constraints to recover clear images.
- It leverages an adversarial pseudo-hazing module along with dual regularization losses to maintain image consistency and preserve fine details without paired data.
- Experimental results show EID outperforms state-of-the-art methods in both natural and scientific imaging, demonstrating its versatility and robustness.
Equivariant Image Dehazing (EID) is an unsupervised learning framework for restoring clear images from hazy observations by leveraging image symmetry and self-supervised equivariant constraints. EID uniquely combines a physics-based consistency term with a self-supervised equivariance prior and an adversarially trained pseudo-haze generator. This synthesis enables robust dehazing in both natural and scientific imaging domains without reliance on paired ground-truth data, outperforming existing state-of-the-art methods across multiple benchmarks (Wen et al., 20 Jan 2026).
1. Haze Formation and Physical Model
EID builds on the classical atmospheric scattering model, succinctly formulated as

$$I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where $I(x)$ is the hazy observation at pixel $x$, $J(x)$ is the latent scene radiance, $t(x)$ denotes the transmission with attenuation coefficient $\beta$ and scene depth $d(x)$, and $A$ is the global atmospheric light. In practical natural datasets, $t$ and $A$ can often be estimated via priors such as the Dark-Channel Prior (DCP). However, in scientific contexts (e.g., medical endoscopy, microscopy), the haze-generating operator is typically nontrivial and may deviate from the canonical model. EID addresses this by learning a pseudo-hazing operator $\hat{H}$ via adversarial training, obviating the need to assume knowledge of $t$ or $A$ (Wen et al., 20 Jan 2026).
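As a concrete illustration of the scattering model (my own sketch, not code from the paper), a single pixel can be evaluated directly; the function name `hazy_pixel` and the sample values are illustrative choices:

```python
import math

def hazy_pixel(J, d, A, beta=1.0):
    """Classical atmospheric scattering model for one pixel:
    I = J * t + A * (1 - t), with transmission t = exp(-beta * d)."""
    t = math.exp(-beta * d)  # transmission falls off with scene depth d
    return J * t + A * (1 - t)

# Zero depth: no haze, the pixel keeps its radiance J.
print(hazy_pixel(J=0.2, d=0.0, A=0.9))   # 0.2
# Large depth: the pixel converges to the atmospheric light A.
print(hazy_pixel(J=0.2, d=10.0, A=0.9))  # ~0.9
```

The two limiting cases show why distant scene content is hardest to recover: as $d$ grows, the observation carries almost no information about $J(x)$.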
2. Equivariance and Consistency Regularization
Recognizing that certain group transformations (e.g., rotations by multiples of 90°) leave the distribution of clean images invariant, EID formalizes the equivariance constraint. For a dehazing network $f_\theta$ and transformation $T_g$ with $g$ drawn from a group $\mathcal{G}$,

$$f_\theta\bigl(T_g(y)\bigr) = T_g\bigl(f_\theta(y)\bigr)$$

ideally holds for all $g \in \mathcal{G}$. Lacking access to clean images, EID applies $f_\theta$ to hazy inputs $y$ to generate the current clean estimate $\hat{x} = f_\theta(y)$, transforms it by $T_g$, re-applies the haze model $\hat{H}$, and enforces equivariance via the loss

$$\mathcal{L}_{\mathrm{EQ}} = \mathbb{E}_{g,\,y}\,\bigl\| f_\theta\bigl(\hat{H}(T_g \hat{x})\bigr) - T_g \hat{x} \bigr\|.$$
Parallel to this, haze-consistency regularization ensures that re-hazing a network output faithfully reconstructs the original hazy image:

$$\mathcal{L}_{\mathrm{HC}} = \mathbb{E}_{y}\,\bigl\| \hat{H}\bigl(f_\theta(y)\bigr) - y \bigr\|.$$

This dual regularization enforces inversion of $\hat{H}$ and encourages the recovery of "null-space" (detail-lost) components (Wen et al., 20 Jan 2026).
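To make the two regularizers concrete, here is a minimal numerical sketch (an illustration under simplifying assumptions, not the paper's implementation): images are small nested lists, the haze operator uses a uniform transmission, the group action is a 90° rotation, and an L1 distance stands in for the paper's loss:

```python
def rot90(img):
    """Rotate an H x W image (list of rows) by 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def haze(img, t=0.5, A=0.9):
    """Toy pseudo-hazing operator H with uniform transmission t and airlight A."""
    return [[p * t + A * (1 - t) for p in row] for row in img]

def l1(a, b):
    """Elementwise L1 distance between two images."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def loss_hc(f, y):
    """Haze-consistency: re-hazing the dehazed output should reproduce y."""
    return l1(haze(f(y)), y)

def loss_eq(f, y):
    """Equivariance: rotate the clean estimate, re-haze it, dehaze again;
    the result should match the rotated estimate."""
    x_hat = f(y)                      # current clean estimate
    x_rot = rot90(x_hat)              # transformed clean estimate
    return l1(f(haze(x_rot)), x_rot)

# An exact inverse of the toy haze operator drives both losses to ~0:
f_exact = lambda img: [[(p - 0.9 * 0.5) / 0.5 for p in row] for row in img]
y = haze([[0.1, 0.2], [0.3, 0.4]])
print(loss_hc(f_exact, y), loss_eq(f_exact, y))  # both ≈ 0
```

Note that in this invertible toy both losses vanish for the same inverse; in the real setting, where $\hat{H}$ discards detail, the equivariance term is what constrains the components that haze-consistency alone cannot see.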
3. Adversarial Pseudo-Hazing Module
To address the absence of closed-form haze operators in complex domains, EID introduces a pseudo-hazing generator $G$ and discriminator $D$ in an unpaired, CycleGAN-style adversarial setup. The adversarial loss governing the realism of synthetic hazy images is

$$\mathcal{L}_{\mathrm{adv}} = \mathbb{E}_{y \sim p_{\mathrm{hazy}}}\bigl[\log D(y)\bigr] + \mathbb{E}_{x \sim p_{\mathrm{clear}}}\bigl[\log\bigl(1 - D(G(x))\bigr)\bigr],$$

where $x$ is sampled from clear images and $y$ from hazy images. A cycle-consistency loss ensures structure preservation:

$$\mathcal{L}_{\mathrm{cyc}} = \mathbb{E}_{x \sim p_{\mathrm{clear}}}\bigl\| F(G(x)) - x \bigr\|_1,$$

where $F$ denotes the reverse (haze-removal) generator of the CycleGAN-style pair.
Once adequately trained, $G$ is frozen and serves as the pseudo-hazing operator $\hat{H}$ in subsequent EID training stages (Wen et al., 20 Jan 2026).
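The two objectives can be written out schematically in plain Python. Here `G`, `F`, and `D` are scalar toy stand-ins of my own (uniform-haze generator, its exact inverse, and a sigmoid score), not the paper's networks:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def adv_loss(D, G, clear_batch, hazy_batch):
    """Adversarial objective for the pseudo-hazing generator G:
    D should score real hazy samples high and synthetic G(x) low."""
    real = sum(math.log(D(y)) for y in hazy_batch) / len(hazy_batch)
    fake = sum(math.log(1.0 - D(G(x))) for x in clear_batch) / len(clear_batch)
    return real + fake

def cycle_loss(G, F, clear_batch):
    """Cycle consistency: the reverse generator F should undo G."""
    return sum(abs(F(G(x)) - x) for x in clear_batch) / len(clear_batch)

# Toy scalar stand-ins: G adds uniform haze, F inverts it, D scores haziness.
G = lambda x: 0.5 * x + 0.45
F = lambda y: (y - 0.45) / 0.5
D = lambda v: sigmoid(4.0 * (v - 0.5))

clear = [0.1, 0.3, 0.5]
hazy = [0.6, 0.7, 0.8]
print(cycle_loss(G, F, clear))  # ≈ 0: F exactly inverts G
```

In the real setup the batches are unpaired image sets, and only $G$ survives training, to be reused frozen as $\hat{H}$.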
4. Training Objective and Architecture
The total EID loss function is a linear combination of the consistency and equivariance losses,

$$\mathcal{L}_{\mathrm{EID}} = \mathcal{L}_{\mathrm{HC}} + \lambda\,\mathcal{L}_{\mathrm{EQ}},$$

with a fixed weighting $\lambda$ used in the main experiments. The architecture consists of:
- A 5-layer U-Net as the dehazing network $f_\theta$, mapping hazy inputs to clean outputs of the same resolution.
- Pseudo-hazing generator $G$: U-Net-style; discriminator $D$: PatchGAN.
- Transformation group $\mathcal{G}$: 2D rotations by multiples of 90°; empirical results emphasize the importance of these transformations.
- Optimizer: Adam, with the learning rate halved every 20 epochs over 50 epochs total.
- No use of ground truth pairs; random rotation as data augmentation.
Computation is performed on a single NVIDIA RTX 3090 (Wen et al., 20 Jan 2026).
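A runnable toy of the training objective, under strong simplifications of my own: a scalar "image", a linear dehazer $f_\theta(y) = a y + b$, and central finite differences in place of backpropagation. In this invertible scalar toy the haze-consistency loss alone identifies the exact inverse of $\hat{H}$, and the learned dehazer then satisfies the equivariance check for free; in real EID, where $\hat{H}$ discards detail, the equivariance term supplies the supervision that consistency alone cannot:

```python
def H(x):
    """Frozen pseudo-haze operator: uniform transmission 0.5, airlight 0.9."""
    return 0.5 * x + 0.45

def f(y, a, b):
    """Linear 'dehazing network' with two scalar parameters."""
    return a * y + b

hazy = [0.5, 0.6, 0.7, 0.8, 0.9]  # toy hazy observations

def loss_hc(a, b):
    """Haze-consistency: re-hazing the output should reproduce the input."""
    return sum((H(f(y, a, b)) - y) ** 2 for y in hazy)

# Gradient descent with central finite differences (no autograd needed).
a, b, lr, eps = 1.0, 0.0, 0.2, 1e-6
for _ in range(5000):
    ga = (loss_hc(a + eps, b) - loss_hc(a - eps, b)) / (2 * eps)
    gb = (loss_hc(a, b + eps) - loss_hc(a, b - eps)) / (2 * eps)
    a, b = a - lr * ga, b - lr * gb

print(a, b)  # approaches the exact inverse of H: a = 2, b = -0.9

# The learned inverse also passes the equivariance check: with a sign flip
# as a stand-in group action T, f(H(T x)) recovers T x.
x_hat = f(hazy[0], a, b)
assert abs(f(H(-x_hat), a, b) - (-x_hat)) < 1e-3
```

The actual training alternates over image batches with both loss terms and Adam updates; the toy only illustrates how the frozen $\hat{H}$ closes the loop that makes unsupervised gradient descent possible.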
5. Experimental Validation and Performance
EID’s efficacy spans both scientific and natural image dehazing tasks:
| Dataset | Training Split | Metrics | Comparative Performance |
|---|---|---|---|
| Cholec80-Haze (Endoscopy) | 1,100 hazy, 2,726 clear (unpaired) | NIQE↓, BRISQUE↓, FID↓ | EID achieves best scores |
| Cell97 (Microscopy) | 49 clear, 48 hazy (training); 97 hazy test | NIQE↓, BRISQUE↓, FID↓ | EID achieves best scores |
| RESIDE-OTS/HSTS (Natural) | 4,500 clear, 4,200 hazy (unpaired) | PSNR↑, SSIM↑ | EID outperforms nine state-of-the-art unsupervised methods |
Qualitative analysis (Figures 5–10 in the source) confirms that EID yields sharper edges, truer colors, and finer detail than priors-based methods (DCP, NLP) or other unpaired GAN approaches (CycleGAN, Cycle-Dehaze, D4+, UME-Net, YOLY, etc.) (Wen et al., 20 Jan 2026).
6. Methodological Implications, Constraints, and Extensions
EID’s equivariant regularization functions as “null-space” supervision: it encourages the network to infer fine structure not directly recoverable from the hazy input. The method does not require paired clear-hazy images or direct ground truth, increasing its applicability where labeled datasets are limited. Notable considerations include:
- Real-world scenes may violate exact group symmetry assumptions, for example due to non-rigid motion or transformations beyond pure rotation.
- The quality of the pseudo-hazing GAN influences overall EID performance; insufficiently trained GAN modules may propagate artifacts.
- Training overhead is higher than that for direct feed-forward systems.
Potential extensions mentioned include the integration of domain-specific priors, the adoption of lightweight architectures for real-time applications, fusion of multimodal inputs (e.g., IR + RGB), and the application of the equivariant paradigm to other inverse imaging problems such as underwater enhancement and pansharpening (Wen et al., 20 Jan 2026).
7. Summary and Context within Dehazing Research
Equivariant Image Dehazing constitutes a principled unsupervised approach that unifies physics-grounded constraints and equivariant learning. By adversarially modeling the haze process and leveraging self-supervision via group symmetries, EID achieves state-of-the-art results in challenging scientific and natural domains without reliance on paired training data. The framework’s generality and empirical strength position it as a versatile tool and conceptual advance in the broader context of inverse imaging problems (Wen et al., 20 Jan 2026).