
LoopExpose: Unsupervised Exposure Correction

Updated 12 November 2025
  • LoopExpose is an unsupervised framework for exposure correction that employs a nested-loop strategy and luminance ranking loss to enhance images captured under varied lighting conditions.
  • It jointly optimizes a correction network and evolving pseudo-labels from multi-exposure fusion, bridging the gap with supervised methods.
  • Experimental results show LoopExpose outperforms traditional unsupervised approaches in both single-exposure correction and multi-exposure fusion tasks.

LoopExpose is an unsupervised framework for arbitrary-length exposure correction, which introduces a nested-loop optimization strategy to achieve high-fidelity enhancement of images captured under varied lighting conditions. By sidestepping the need for paired supervision, LoopExpose jointly optimizes a correction network and its training targets—pseudo-labels derived from multi-exposure fusion—via iterative feedback. A central innovation is the integration of a luminance ranking loss, enforcing physically plausible luminance ordering within input sequences. Experimental evidence across major benchmarks demonstrates that LoopExpose consistently outperforms existing unsupervised exposure correction methods and narrows the performance gap to supervised approaches.

1. Problem Formulation and Motivation

The exposure correction task is defined over a sequence of $N$ images $I = \{I_1, I_2, \ldots, I_N\}$ of a single scene, each captured at a distinct exposure value (EV). The objective is to produce a set of corrected outputs $E_i$ that accurately approximate the scene radiance as if captured under canonical, well-exposed conditions. Two primary variants are recognized:

  • Single-Exposure Correction (SEC): $N = 1$; each input requires independent correction.
  • Multi-Exposure Fusion (MEF): $N > 1$; inputs are fused to produce a single high-quality image.

Supervised approaches to exposure correction are hamstrung by the difficulty and subjectivity inherent in assembling large datasets of paired inputs and ground truth. Exposure errors, modeled as $I_i^{obs} = f(E^{real} \cdot \Delta t_i)$ with $\Delta t_i$ the exposure time and $f(\cdot)$ the camera response, do not admit straightforward self-supervised learning, as they are structured and nonzero-mean. Classical exposure fusion algorithms, notably the method of Mertens et al., demonstrate that plausible fused reference images can be generated directly from the input sequences, providing a foundation for pseudo-supervised approaches.
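
Classical exposure fusion of this kind can be sketched in a few lines. Below is a simplified, single-scale stand-in for the Mertens et al. operator, keeping only the well-exposedness weight (the original method also uses contrast and saturation weights and a multi-scale Laplacian-pyramid blend); the function name and the `sigma` value are illustrative:

```python
import numpy as np

def mertens_fuse(images, sigma=0.2):
    """Simplified single-scale Mertens-style exposure fusion.
    images: list of HxWx3 float arrays in [0, 1]."""
    stack = np.stack(images)                       # N x H x W x 3
    gray = stack.mean(axis=-1)                     # per-pixel luminance proxy
    # well-exposedness weight: favor pixels near mid-gray (0.5)
    w = np.exp(-0.5 * ((gray - 0.5) / sigma) ** 2) + 1e-12
    w /= w.sum(axis=0, keepdims=True)              # normalize across exposures
    return (w[..., None] * stack).sum(axis=0)      # per-pixel weighted average
```

Well-exposed pixels dominate the blend, so an under/over-exposed pair fuses toward the better-exposed values rather than their plain average.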

The optimization target for LoopExpose is articulated as:

$$\operatorname{minimize}_{\theta} \;\; L_{total}(\theta; I, Y, E)$$

where $G_\theta$ is the correction model with parameters $\theta$, $Y$ is the current pseudo-label (fused image), and $E = \{E_1, \ldots, E_N\}$, with $E_i = G_\theta(I_i)$, is the set of corrected outputs. Crucially, $Y$ evolves throughout training, producing a nested optimization scheme.
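
Written out explicitly, this takes a bilevel form (the notation here is a reconstruction consistent with the description above, with $\mathcal{F}$ denoting the fixed Mertens fusion operator used during joint optimization):

```latex
\theta^{\star} = \arg\min_{\theta}\; L_{total}\bigl(\theta;\, I,\, Y(\theta),\, E(\theta)\bigr),
\qquad
E_i(\theta) = G_{\theta}(I_i),
\qquad
Y(\theta) = \mathcal{F}\bigl(I_1, \ldots, I_N,\, E_1(\theta), \ldots, E_N(\theta)\bigr)
```

The circular dependence of $Y$ on $\theta$ is what motivates the alternating nested-loop scheme of Section 2.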

2. Nested-Loop Optimization Strategy

LoopExpose implements a two-level loop, alternating between the refinement of pseudo-labels via fusion and network training against these labels. The process operates as follows:

  • Lower Level (Pseudo-Label Generation): Uses the fixed Mertens fusion operator $\mathcal{F}$ to create a reference image.
    • Warm-Up Phase (early epochs): $Y = \mathcal{F}(I_1, \ldots, I_N)$ (fusion of the raw exposures).
    • Joint Optimization Phase (later epochs): $Y = \mathcal{F}(I_1, \ldots, I_N, E_1, \ldots, E_N)$ (fusion of the inputs and the latest corrected outputs).
  • Upper Level (Correction Model Update): For a fixed pseudo-label $Y$, minimize $L_{total}$ with respect to $\theta$ by updating the correction network.

This iterative scheme is formalized as:

  1. $E_i^{(t)} = G_{\theta^{(t)}}(I_i)$ for each frame $i$
  2. $Y^{(t)} = \mathcal{F}(I, E^{(t)})$ (after warm-up) or $Y^{(t)} = \mathcal{F}(I)$ (during warm-up)
  3. $\theta^{(t+1)} = \arg\min_{\theta} L_{total}(\theta; I, Y^{(t)}, E^{(t)})$ (one or more gradient steps in practice)

This design produces a self-reinforcing loop in which improved correction results drive the formation of more informative pseudo-labels, in turn enabling further network refinement.

3. Luminance Ranking Loss

In exposure correction, luminance serves as a dominant cue. LoopExpose incorporates a luminance ranking loss to enforce that, within a multi-exposure sequence ordered by apparent brightness, the network’s global luminance predictions mirror this relational structure.

A luminance-aware network branch outputs a per-frame scalar luminance score $\ell_i$ for each input $I_i$. After sorting the sequence from darkest to brightest, the loss is expressed as:

$$L_{rank} = \sum_{i < j} \max\bigl(0, \; \ell_i - \ell_j + m\bigr)$$

where the indices follow the brightness ordering and $m > 0$ is a small margin. This imposes a soft constraint that $\ell_i < \ell_j$ whenever $I_i$ is darker than $I_j$. The luminance ranking loss is aggregated with the reconstruction loss in the full training objective.
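
This pairwise margin form can be implemented directly; the margin value and the averaging over pairs below are illustrative choices:

```python
import numpy as np

def luminance_ranking_loss(scores, margin=0.1):
    """Pairwise margin ranking loss over per-frame luminance scores, which are
    assumed already sorted from darkest to brightest input. Each pair where the
    darker frame's score is not at least `margin` below the brighter frame's
    contributes a hinge penalty."""
    s = np.asarray(scores, dtype=float)
    n = len(s)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            loss += max(0.0, s[i] - s[j] + margin)  # want s[i] + margin <= s[j]
    return loss / (n * (n - 1) / 2)                 # average over all pairs
```

A correctly ordered score sequence with sufficient gaps incurs zero loss; a reversed ordering is penalized on every pair.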

4. Loss Function and Optimization Objective

The composite loss leveraged for model optimization combines the pseudo-supervised reconstruction term with the luminance ranking term:

$$L_{total} = L_{rec} + \lambda \, L_{rank}$$

where the pseudo-supervised loss $L_{rec}$ aggregates, over the sequence, a per-frame reconstruction distance between each corrected output and its pseudo-label:

$$L_{rec} = \sum_{i=1}^{N} d(E_i, Y)$$

with $d(\cdot, \cdot)$ a standard image reconstruction distance and the weight $\lambda$ held fixed throughout training.

The algorithmic workflow alternates the two levels: fuse to obtain the pseudo-label (the raw inputs during warm-up, the inputs together with the latest corrected outputs thereafter), then update the correction network against that pseudo-label, and repeat until convergence.
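
The nested loop can be sketched end to end with toy stand-ins: plain averaging in place of Mertens fusion, and a single learnable gain per frame in place of the dual-branch network. All function names and hyperparameters here are illustrative, not the paper's:

```python
import numpy as np

def fuse(frames):
    """Stand-in for the fixed fusion operator F (plain average here)."""
    return np.mean(frames, axis=0)

def correct(frames, theta):
    """Stand-in correction model G_theta: one learnable gain per frame."""
    return [np.clip(g * f, 0.0, 1.0) for g, f in zip(theta, frames)]

def nested_loop_train(frames, warmup_epochs=3, epochs=12, lr=0.5):
    theta = np.ones(len(frames))           # trivial "network parameters"
    Y = fuse(frames)                       # warm-up pseudo-label: fuse raw inputs
    for t in range(epochs):
        E = correct(frames, theta)         # upper level: corrected outputs
        if t >= warmup_epochs:             # lower level: refresh pseudo-label
            Y = fuse(list(frames) + E)     # fuse inputs + latest outputs
        for i, (e, f) in enumerate(zip(E, frames)):
            grad = 2.0 * np.mean((e - Y) * f)  # d/dg_i of mean((g_i*f - Y)^2)
            theta[i] -= lr * grad              # one gradient step on L_rec
    return theta, fuse(correct(frames, theta))
```

On an under/over-exposed pair, the loop learns a gain above 1 for the dark frame and below 1 for the bright one, and the pseudo-label tightens as the corrected outputs join the fusion.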

5. Network Architecture and Implementation

The exposure correction network $G_\theta$ is constructed in a dual-branch configuration:

  • Luminance-Aware Encoder-Decoder:
    • U-Net-like, extracts multi-scale luminance features.
    • Applies global average pooling and fully connected layers to produce the luminance descriptor $\ell_i$ used for ranking.
    • Feature maps support subsequent fusion stages.
  • Adaptive 3D Look-Up Table (3D-LUT) Module:
    • Comprises a bank of pre-trained 3D LUTs whose outputs are adaptively blended via weights inferred from input stability.
  • Attention-Based Fusion:
    • Merges features from the two branches to produce the final corrected output.
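
To illustrate the adaptive LUT blending, here is a minimal sketch using nearest-neighbor lookup and softmax blending weights; the real module predicts the weights from image features and typically uses trilinear interpolation, so all names and shapes here are assumptions:

```python
import numpy as np

def apply_lut(img, lut):
    """img: HxWx3 floats in [0, 1]; lut: DxDxDx3 color table.
    Nearest-neighbor lookup for brevity (real modules interpolate)."""
    D = lut.shape[0]
    idx = np.clip(np.rint(img * (D - 1)).astype(int), 0, D - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def blend_luts(img, luts, logits):
    """Blend the outputs of several LUTs with softmax weights
    (in the module these weights are inferred per input)."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * apply_lut(img, lut) for wi, lut in zip(w, luts))
```

With an identity LUT the output matches the input up to quantization, which makes the lookup logic easy to sanity-check.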

Training is performed using PyTorch on an NVIDIA RTX 4090 GPU. The optimizer is Adam. A 5-epoch warm-up with a decaying initial learning rate precedes a joint optimization phase with a fixed learning rate for 20–30 epochs. Batch size is 8 sequences per GPU; augmentation includes random cropping and horizontal/vertical flips.

Key datasets:

  • SeqMSEC: Derived from MSEC; 5 images per scene at distinct EVs.
  • SeqRadio / PlainRadio: Derived from Radiometry512; 7 images per scene at distinct EVs.
  • Training samples one sequence per batch; inference supports arbitrary sequence lengths.

6. Experimental Evaluation

6.1. Quantitative Performance

On MSEC:

  • SEC ($N = 1$): LoopExpose achieves PSNR ≈ 20.59 dB, SSIM ≈ 0.833, surpassing all previous unsupervised methods (best prior: 18.62 dB / 0.807) and approaching the supervised state of the art (21.82 dB / 0.850).
  • MEF: Fusion over the corrected outputs $E_i$ yields PSNR ≈ 21.32 dB, SSIM ≈ 0.847, exceeding both Mertens fusion (19.41 dB / 0.825) and deep MEF baselines.

On Radiometry512:

  • SEC: PSNR ≈ 21.53 dB, SSIM ≈ 0.821 (unsupervised UEC baseline: 18.89 dB / 0.779; supervised CoTF: 23.99 dB / 0.861).
  • MEF: PSNR ≈ 23.22 dB, SSIM ≈ 0.889 (Mertens: 21.66 dB / 0.842).

6.2. Qualitative Outcomes

LoopExpose consistently corrects under- and over-exposed regions without introducing artificial color casts. Textures and structures are preserved, with highlights maintaining detail rather than exhibiting saturation. Compared to alternatives like LACT and CoTF, outputs often display more neutral tone mapping, especially in highly saturated areas.

6.3. Ablation Studies

  • Incorporating the luminance ranking loss $L_{rank}$ improves SEC PSNR from 20.72 to 21.01 dB and MEF PSNR from 21.11 to 21.32 dB.
  • Two-stage nested optimization is superior:
    • Warm-up only: SEC 20.36 dB, MEF 20.83 dB.
    • Joint only: SEC 20.83 dB, MEF 21.21 dB.
    • Full two-stage: SEC 21.01 dB, MEF 21.32 dB.

7. Significance and Implications

LoopExpose substantiates that a nested combination of rule-based exposure fusion and data-driven correction, regulated by pseudo-supervision and luminance ranking, yields state-of-the-art unsupervised exposure correction. The methodology achieves significant quantitative improvements and robust qualitative enhancement without ground-truth pairs, demonstrating the viability of unsupervised learning in this traditionally supervised domain. The framework generalizes to varying sequence lengths, is readily optimized using standard hardware and open tools, and can serve as a baseline for future investigation of self-supervised image enhancement strategies.

A plausible implication is that this nested-loop pseudo-labeling paradigm may be transferable to other ill-posed image restoration tasks where ground-truth acquisition is a bottleneck, provided the existence of robust, rule-based reference signal generators.
