IlluminateNet: Unsupervised Underwater Enhancer
- IlluminateNet is a fully unsupervised CNN module that enhances underwater images by adaptively correcting luminance and restoring color balance.
- It utilizes a dual-stream architecture with a channel-wise atmospheric-light estimator and transmission map estimation for global and local corrections.
- Empirical results show significant improvements in UCIQE, UIQM, and keypoint repeatability, boosting performance in robotic vision and underwater imaging tasks.
IlluminateNet is a fully unsupervised convolutional neural network (CNN) module designed for adaptive luminance enhancement in underwater images. Developed within the DIVER (Domain-Invariant Visual Enhancement and Restoration) framework, IlluminateNet aims to recover achromatic, brightness-balanced renderings of severely degraded raw underwater photographs. It achieves domain-invariant correction of illumination and color loss caused by wavelength-dependent attenuation, scattering, and illumination non-uniformity encountered in diverse aquatic environments, including shallow, deep, and turbid scenes. IlluminateNet is specifically invoked for low-light scenarios, delivering substantial improvements in both perceptual quality and downstream robotic vision metrics (Makam et al., 30 Jan 2026).
1. Architectural Structure and Processing Workflow
IlluminateNet operates on a raw underwater RGB image $I$ and produces an illumination-corrected output $\hat{J}$. The architecture consists of the following primary components:
- Channel-wise Atmospheric-Light Estimator: Each color channel ($R$, $G$, $B$) of $I$ is processed by a small per-channel CNN consisting of convolutional layers and ReLU nonlinearities, generating feature maps $F_R$, $F_G$, $F_B$. These are concatenated and passed through an element-wise nonlinearity to compute a global atmospheric light estimate:
$$A = \sigma\big(\mathrm{Conv}([F_R; F_G; F_B])\big)$$
The sigmoid activation $\sigma$ constrains the output to $[0, 1]$ per channel.
- Transmission Map Estimation (Hybrid Rule + CNN): A patch-max difference is computed relative to an “ambient light” statistic $A_\infty$, defined as the mean of the top-0.1% farthest-depth pixels according to a learned depth model:
$$\tilde{t}(x) = \max_{y \in \Omega(x)} \big| I(y) - A_\infty \big|$$
where $\Omega(x)$ is the patch neighborhood of pixel $x$. This is further refined by a CNN layer with ReLU to yield a smoothed transmission map $t$.
- Luminance Residual Computation: The transmission-guided luminance residual is computed pointwise as
$$R(x) = \frac{I(x) - A}{\max\big(t(x), t_0\big)}$$
where $t_0$ is a small constant preventing division by zero.
- Fusion and Skip Connection: The final illumination-corrected image is obtained via additive fusion (“residual skip”) of $R$ and $A$:
$$\hat{J}(x) = R(x) + A$$
ReLU activations are employed throughout all convolutional layers, with a sigmoid nonlinearity exclusively at the atmospheric-light stream output.
This architecture decouples the estimation of global illumination (via $A$) from local, transmission-guided correction (via $t$ and $R$), supporting compensation for both spatially global and locally variant degradations.
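The forward pass above can be sketched in NumPy. This is a simplified illustration, not the published implementation: the learned CNN stages are replaced by fixed statistical stand-ins (a sigmoid-squashed channel mean for the atmospheric-light stream, a plain patch-max filter for the transmission stream), and the helper names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def patch_max(diff, k=7):
    """Max filter over k x k neighborhoods (the patch-max rule)."""
    H, W = diff.shape
    pad = k // 2
    padded = np.pad(diff, pad, mode="edge")
    out = np.empty_like(diff)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def illuminate_forward(I, depth, t0=0.1, k=7):
    """Simplified IlluminateNet forward pass (NumPy sketch).
    I: HxWx3 image in [0, 1]; depth: HxW proxy for the learned depth map.
    """
    # Global atmospheric light A: sigmoid-constrained per-channel stand-in
    # for the channel-wise CNN estimator (values land in (0, 1)).
    A = sigmoid(I.reshape(-1, 3).mean(axis=0) * 4.0 - 2.0)      # shape (3,)
    # Ambient light A_inf: mean colour of the top-0.1% farthest pixels.
    n = max(1, int(0.001 * depth.size))
    far = np.argsort(depth.ravel())[-n:]
    A_inf = I.reshape(-1, 3)[far].mean(axis=0)
    # Patch-max difference relative to A_inf -> raw transmission map.
    diff = np.abs(I - A_inf).max(axis=2)
    t = np.clip(patch_max(diff, k), t0, 1.0)  # stand-in for CNN smoothing
    # Transmission-guided luminance residual and residual-skip fusion.
    R = (I - A) / t[..., None]
    J = np.clip(R + A, 0.0, 1.0)
    return J, A, t
```

The sketch preserves the dual-stream structure: $A$ is estimated once per image, while $t$ varies spatially and guides the local residual.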
2. Mathematical Model and Image Formation
IlluminateNet’s formulation is based on a simplified underwater image formation prior derived in DIVER:
$$I(x) = \hat{J}(x)\, t(x) + A\,\big(1 - t(x)\big)$$
Solving for $\hat{J}(x)$:
$$\hat{J}(x) = \frac{I(x) - A}{\max\big(t(x), t_0\big)} + A$$
A patch-based maximum transmission map is estimated:
$$\tilde{t}(x) = \max_{y \in \Omega(x)} \big| I(y) - A_\infty \big|$$
where $\Omega(x)$ is the neighborhood of pixel $x$. The global-light estimate is produced by
$$A = \sigma\big(\mathrm{Conv}([F_R; F_G; F_B])\big)$$
Residual smoothing is performed as:
$$t(x) = \mathrm{ReLU}\big(\mathrm{Conv}(\tilde{t}(x))\big)$$
This model provides an interpretable mechanism for disentangling illumination correction from color channel balancing, and it enforces physical plausibility in the correction process.
3. Loss Functions and Unsupervised Training Paradigm
IlluminateNet leverages only unpaired underwater images for training, relying on unsupervised objectives that avoid dependence on reference clean ground truth. Two complementary losses are used:
- Gray-World Loss (Chromatic Neutrality):
$$\mathcal{L}_{\mathrm{gw}} = \sum_{\substack{c_1, c_2 \in \{R, G, B\} \\ c_1 \neq c_2}} \big( \mu_{c_1}(\hat{J}) - \mu_{c_2}(\hat{J}) \big)^2$$
where $\mu_c(\hat{J})$ is the global mean of channel $c$. This loss penalizes deviation from channel-wise mean equality, enforcing achromatic (gray-world) neutrality.
- Luminous Loss (Exposure Consistency):
$$\mathcal{L}_{\mathrm{lum}} = \big( \mu(\hat{J}) - E \big)^2$$
where $\mu(\hat{J})$ is the global mean intensity and $E$ is a mid-gray or white target (e.g., $0.5$ or $1.0$).
The total loss is a weighted sum:
$$\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{gw}}\, \mathcal{L}_{\mathrm{gw}} + \lambda_{\mathrm{lum}}\, \mathcal{L}_{\mathrm{lum}}$$
with fixed weights $\lambda_{\mathrm{gw}}$ and $\lambda_{\mathrm{lum}}$. The network is optimized with Adam for 150 iterations with a batch size of $8$.
Domain-invariance is enforced by sampling mixed minibatches from all eight training datasets, encompassing a range of water types and illumination regimes, and exclusively utilizing loss functions that generalize across domains.
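The two objectives are simple enough to state in a few lines of NumPy. The function names and default weights below are illustrative assumptions; only the mathematical form follows the losses described above:

```python
import numpy as np

def gray_world_loss(J):
    """Penalize pairwise differences between global channel means."""
    mu = J.reshape(-1, 3).mean(axis=0)  # (mu_R, mu_G, mu_B)
    return ((mu[0] - mu[1]) ** 2
            + (mu[1] - mu[2]) ** 2
            + (mu[0] - mu[2]) ** 2)

def luminous_loss(J, E=0.5):
    """Penalize deviation of the global mean intensity from target E."""
    return (J.mean() - E) ** 2

def total_loss(J, lam_gw=1.0, lam_lum=1.0, E=0.5):
    """Weighted sum of the two unsupervised objectives."""
    return lam_gw * gray_world_loss(J) + lam_lum * luminous_loss(J, E)
```

Note that both losses depend only on global statistics of the network output, which is what makes them usable without paired ground truth.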
4. Integration within the DIVER Framework
IlluminateNet serves as one of two initial illumination-correction modules within the DIVER pipeline (Makam et al., 30 Jan 2026). For each input, an Illumination Assessment Gate computes the average red, green, and blue values to assess scene lighting. If these channel means fall below fixed low-light thresholds, indicating pronounced low-light conditions, the pipeline invokes IlluminateNet; otherwise, a Spectral Equalization Filter (SEF) is used. The output of the selected branch (IlluminateNet for low light, SEF for well-lit scenes) is passed to the Adaptive Optical Correction Module (AOCM) for hue and contrast refinement, and subsequently to Hydro-OpticNet for physics-guided dehazing and attenuation compensation.
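The gating logic can be sketched as follows; the threshold value and the any-channel rule are illustrative assumptions, since the paper's exact thresholds are not reproduced here:

```python
import numpy as np

def illumination_gate(I, thresh=0.25):
    """Route an image to IlluminateNet (low light) or the SEF (well lit)
    based on its per-channel means. `thresh` is an illustrative value,
    not the published one."""
    means = I.reshape(-1, 3).mean(axis=0)  # average R, G, B
    return "IlluminateNet" if (means < thresh).any() else "SEF"
```

This kind of hard gate keeps the pipeline cheap at inference time: only one of the two correction branches runs per image.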
5. Empirical Performance and Ablation Studies
IlluminateNet demonstrates substantial improvements in quantitative and qualitative metrics. On the low-light SeaThru dataset, incorporating IlluminateNet boosts UCIQE from $0.1062$ (raw) to $0.7007$ and UIQM from $0.9980$ to $2.3685$. Further downstream modules slightly modify these scores, with full DIVER culminating at UCIQE $0.8470$ and UIQM $2.8685$. On UFO-120, state-of-the-art methods exhibit only incremental gains, whereas the DIVER pipeline (here SEF + AOCM + Hydro-OpticNet) raises PSNR from $12.67$ dB (raw) to $23.69$ dB and UCIQE to $0.9620$. Color-chart fidelity on SeaThru also improves, with GPMAE (geodesic color error) reduced from large raw errors to low single-digit values within DIVER.
For robotic perception tasks, such as ORB-based keypoint repeatability and matching, IlluminateNet alone significantly increases the number of stable keypoints detected—from single digits in raw input to hundreds after correction, and over $1,000$ with full DIVER processing. This increase in repeatable matches suggests improved robustness for vision-based robotic tasks in challenging underwater conditions.
6. Data Regime, Optimization, and Domain-Generalization
IlluminateNet is trained using unpaired underwater images sourced from eight diverse datasets: SeaThru, OceanDark, USOD10K, U45, FISHTRAC, UIEB, UFO-120, and LSUI. Images are resized or cropped, and ambient light is computed from the most distant pixels inferred by a depth model. No ground truth references are utilized. The training configuration—characterized by global minibatch mixing and loss terms independent of water type—promotes domain-invariant operation, with the module maintaining generalized performance across varied aquatic settings.
7. Summary and Context within Underwater Enhancement
IlluminateNet is a lightweight, standalone CNN module embedded in the DIVER architecture for robust, domain-invariant luminance and color restoration of underwater images. It operates via a learnable atmospheric-light map plus a transmission-guided residual, and is trained using simple, physically motivated losses. Its contribution is critical to DIVER’s superior performance over prior state-of-the-art methods, accounting for large gains in UCIQE and substantial reductions in chromatic error on challenging benchmarks. Its unsupervised, domain-agnostic training methodology and transparent physical modeling differentiate it from previous approaches and underpin its effectiveness for both human and machine-based downstream applications (Makam et al., 30 Jan 2026).