
Event-Based SfP: 3D Reconstruction with Event Cameras

Updated 2 February 2026
  • Event-based SfP is a 3D shape recovery method that leverages event cameras and continuous polarizer rotation to capture rapid polarization changes and surface normals.
  • The approach bins events by polarizer angle and applies physics- and learning-based pipelines to invert Fresnel equations, reducing MAE from over 56° to as low as 26.7°.
  • Enhanced acquisition speed (≥50 Hz vs. 22 Hz) and robustness in specular-dominant environments highlight its potential for dynamic, high-speed 3D imaging applications.

Event-Based Shape from Polarization (SfP) is a computational imaging paradigm that leverages event cameras in combination with polarization optics to reconstruct surface normals and, by extension, 3D shape with high temporal fidelity. The approach circumvents the acquisition speed–resolution tradeoff long associated with classical SfP by exploiting the asynchronous nature and microsecond-scale temporal resolution of event-based vision sensors. This methodology opens a new regime for physics-based and learning-based shape recovery in dynamic and high-speed environments, yielding improvements in both reconstruction accuracy and throughput, notably in scenarios dominated by specular reflectance (Muglikar et al., 2023).

1. Principles of Polarization-Based 3D Reconstruction

Shape from Polarization (SfP) traditionally relies on the analysis of scene radiance observed through a polarizer at several discrete angles, typically using frame-based sensors and stepwise polarizer rotations. When light reflects off a surface, its polarization state encodes cues about local surface orientation due to the Fresnel equations. The measured intensity as a function of polarizer angle $\theta_{\mathrm{pol}}$ at each pixel is modeled as:

$$I(\theta_{\mathrm{pol}}) = I_{\mathrm{un}}\bigl[1 + \rho\cos 2(\theta_{\mathrm{pol}} - \phi)\bigr]$$

where $I_{\mathrm{un}}$ is the unpolarized (mean) component, $\rho$ the degree of polarization, and $\phi$ the angle of polarization. Classic SfP reconstructs surface normals by first estimating $(\rho, \phi)$ and then applying an inversion of the Fresnel model, with explicit analytic formulas relating polarization to geometric parameters under specular-dominant assumptions.
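As a concrete illustration of this four-angle inversion, here is a minimal NumPy sketch (not code from the paper; the sample parameter values are arbitrary):

```python
import numpy as np

def intensity(theta_pol, i_un, rho, phi):
    """Polarized intensity model: I = I_un * (1 + rho * cos(2 * (theta - phi)))."""
    return i_un * (1.0 + rho * np.cos(2.0 * (theta_pol - phi)))

# The four canonical polarizer angles used by classical frame-based SfP.
angles = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
i_un_true, rho_true, phi_true = 1.0, 0.6, 0.3  # arbitrary example values

I = intensity(angles, i_un_true, rho_true, phi_true)

# Closed-form (Stokes-style) inversion from the four intensity samples.
i_un_est = I.sum() / 4.0                                  # unpolarized mean
phi_est = 0.5 * np.arctan2(I[1] - I[3], I[0] - I[2])      # angle of polarization
rho_est = np.hypot(I[0] - I[2], I[1] - I[3]) / (2.0 * i_un_est)
```

Recovering $(\rho, \phi)$ this way is exact for noise-free measurements; real pipelines must also contend with sensor noise and, for event cameras, with the relative (log-scale) nature of the measurements.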

2. Event Camera Integration: Hardware and Event Encoding

The event-based SfP pipeline introduces a linear polarizer mounted on a high-speed motor ($\sim$1,500 RPM) positioned in front of an event camera. Unlike frame-based polarimeters, which sample several discrete polarization states per exposure, the event camera continuously senses log-intensity changes at microsecond latency, generating events $(x_k, t_k, p_k)$ whenever

$$L(x_k, t_k) - L(x_k, t_k - \Delta t_k) = p_k C$$

with $L = \log I$, $p_k \in \{+1, -1\}$, and $C$ a sensor-intrinsic contrast threshold. The continuous sinusoidal modulation of $I(\theta_{\mathrm{pol}}(t))$ by the spinning polarizer causes bursts of events at each pixel, encoding the periodicity and phase of the underlying polarization information (Muglikar et al., 2023).
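A minimal single-pixel simulation of this event-generation model (an illustrative sketch under idealized assumptions; the parameter values are invented, not taken from the paper):

```python
import numpy as np

C = 0.3                      # contrast threshold (log-intensity units), assumed
omega = 2 * np.pi * 25.0     # polarizer speed in rad/s (25 rev/s ~ 1,500 RPM)
i_un, rho, phi = 1.0, 0.5, 0.2

# Densely sampled log intensity over one mechanical revolution (0.04 s).
t = np.linspace(0.0, 0.04, 100_000)
L = np.log(i_un * (1.0 + rho * np.cos(2.0 * (omega * t - phi))))

# Idealized event generation: emit an event each time the log intensity
# moves by the contrast threshold C relative to the last event's level.
events = []                  # (t_k, p_k) pairs for this pixel
L_ref = L[0]
for tk, Lk in zip(t, L):
    while Lk - L_ref >= C:
        events.append((tk, +1)); L_ref += C
    while Lk - L_ref <= -C:
        events.append((tk, -1)); L_ref -= C
```

Because the intensity has period $\pi$ in polarizer angle, each mechanical revolution produces two full modulation cycles, with event bursts concentrated where the log intensity changes fastest.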

3. Mathematical Formulation and Event-Based Polarization Estimation

Event-based SfP adapts the classical multi-angle formulation as follows:

  • Events are binned by the inferred polarizer angle $\theta_{\mathrm{pol}} = \omega t_k$ (with $\omega$ the rotation speed), discretizing $[0, \pi)$ into $N$ angular bins.
  • For each pixel and angle bin $b$, the cumulative signed event count yields a relative log-intensity proxy:

$$I_e[b] \propto \exp\Bigl( \sum_{k \in b} p_k C \Bigr)$$

  • These relative intensities, indexed by angle, substitute into the polarization estimation formulas:

$$\phi = \frac{1}{2} \operatorname{arctan2}\bigl(I_e[\pi/4] - I_e[3\pi/4],\; I_e[0] - I_e[\pi/2]\bigr)$$

$$\rho = \frac{\sqrt{(I_e[0] - I_e[\pi/2])^2 + (I_e[\pi/4] - I_e[3\pi/4])^2}}{2 I_{\mathrm{un}}}$$

with $I_{\mathrm{un}} \propto (I_e[0] + I_e[\pi/4] + I_e[\pi/2] + I_e[3\pi/4]) / 2$.

  • Reconstruction of surface normals $(\theta, \alpha)$ then proceeds by inverting the Fresnel equations, typically assuming specular surfaces.

In practice, using $N = 12$ angular bins improves robustness over the classic $N = 4$.
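Putting the binning and estimation steps together, a minimal NumPy sketch (not the authors' implementation; the cumulative-sum reading of $I_e[b]$ and the use of the mean for $I_{\mathrm{un}}$ are our interpretation of the formulas above):

```python
import numpy as np

def bin_events(t_k, p_k, omega, C, n_bins=12):
    """Accumulate signed event counts per polarizer-angle bin, then take the
    cumulative sum over bins and exponentiate: a relative intensity proxy."""
    theta = (omega * np.asarray(t_k)) % np.pi          # polarizer angle in [0, pi)
    b = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    signed = np.zeros(n_bins)
    np.add.at(signed, b, C * np.asarray(p_k, dtype=float))
    return np.exp(np.cumsum(signed))                   # I_e[b], up to a global scale

def estimate_polarization(I_e):
    """Four-angle estimate using the bins nearest 0, pi/4, pi/2, 3pi/4
    (assumes len(I_e) is divisible by 4)."""
    n = len(I_e)
    q = [I_e[0], I_e[n // 4], I_e[n // 2], I_e[3 * n // 4]]
    i_un = sum(q) / 4.0                                # mean (unpolarized) level
    phi = 0.5 * np.arctan2(q[1] - q[3], q[0] - q[2])
    rho = np.hypot(q[0] - q[2], q[1] - q[3]) / (2.0 * i_un)
    return phi, rho
```

Note that the unknown global scale of $I_e$ cancels in both $\phi$ and $\rho$, which is what makes the relative, log-scale event measurements sufficient for polarization estimation.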

4. Reconstruction Pipelines: Physics-Based and Learning-Based Methods

The event-based SfP workflow can be categorized into two main approaches for estimating surface normals:

Physics-based event-SfP:

  1. Spin the polarizer continuously and record events for a specified interval.
  2. Angularly bin events as described above.
  3. Estimate relative intensity and polarization parameters per bin.
  4. Invert Fresnel models to recover normals per pixel.
The main failure mode of this pipeline is "low fill-rate" regions, where few or no events emerge; these are common in non-specular or low-texture areas.
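The Fresnel-inversion step can also be sketched numerically rather than with closed-form expressions: for unpolarized incident light, the specular degree of polarization follows from the Fresnel reflectances $R_s$ and $R_p$, and a lookup inverts it. This is an illustrative sketch (assumed refractive index `n = 1.5`, not a value from the paper), restricted to the branch below Brewster's angle because the specular DoP is non-monotonic (two zenith angles share one $\rho$):

```python
import numpy as np

def specular_dop(theta, n=1.5):
    """Degree of polarization of specularly reflected unpolarized light,
    computed from the Fresnel reflectances Rs and Rp."""
    st = np.sin(theta) / n                    # Snell's law: sin(theta_t)
    ct = np.sqrt(1.0 - st ** 2)               # cos(theta_t)
    rs = (np.cos(theta) - n * ct) / (np.cos(theta) + n * ct)
    rp = (n * np.cos(theta) - ct) / (n * np.cos(theta) + ct)
    Rs, Rp = rs ** 2, rp ** 2
    return (Rs - Rp) / (Rs + Rp)

def invert_dop(rho, n=1.5, grid=10_000):
    """Numerically invert rho -> zenith angle on the branch below Brewster's
    angle, where the specular DoP increases monotonically from 0 to 1."""
    thetas = np.linspace(0.0, np.arctan(n), grid)   # up to Brewster's angle
    return float(np.interp(rho, specular_dop(thetas), thetas))
```

The remaining branch ambiguity (above vs. below Brewster's angle) is one instance of the Fresnel ambiguities that the learned priors mentioned later aim to resolve.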

Learning-based event-SfP:

To address performance degradation in low event-rate regions, a learning-based method is introduced:

  • Construct a Cumulative Voxel Grid Representation (CVGR): events are binned into $B$ sequential time/angle slices, then cumulatively summed.
  • The resulting tensor $E(x, y, b)$ (e.g., $H \times W \times 8$ for $B = 8$) encodes both polarity and temporal event structure.
  • A U-Net encoder–decoder regresses per-pixel surface normals, trained with a cosine-similarity loss on both synthetic (photorealistic, Mitsuba-rendered, ESIM-simulated) and real datasets. Owing to the strong cross-domain generalization observed, no fine-tuning on real data is required.
  • Optional channel fusion with a standard polarization image ($I(0)$) increases robustness at extremely low event density.
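A minimal sketch of how such a cumulative voxel grid could be assembled from raw event arrays (our illustration; the paper's exact slicing and normalization may differ):

```python
import numpy as np

def cumulative_voxel_grid(x, y, t, p, H, W, B=8, t_max=None):
    """Build a cumulative voxel grid: bin signed events into B sequential
    time slices, then cumulatively sum along the slice axis (H x W x B)."""
    t = np.asarray(t, dtype=float)
    t_max = t.max() if t_max is None else t_max
    b = np.minimum((t / t_max * B).astype(int), B - 1)   # slice index per event
    grid = np.zeros((H, W, B), dtype=np.float32)
    # Accumulate signed polarities at (row, col, slice); np.add.at handles
    # repeated indices correctly (unbuffered in-place addition).
    np.add.at(grid, (np.asarray(y), np.asarray(x), b),
              np.asarray(p, dtype=np.float32))
    return np.cumsum(grid, axis=2)
```

The cumulative sum means slice $b$ summarizes all events up to that point in the polarizer sweep, mirroring how the event stream encodes relative intensity.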

5. Quantitative Performance, Speed, and Dataset Infrastructure

Event-based SfP is quantitatively evaluated on both synthetic and real datasets:

Metrics:

  • MAE (Mean Angular Error, degrees)
  • Accuracy@$\tau$ (fraction of pixels with error below a threshold $\tau \in \{11.25^\circ, 22.5^\circ, 30^\circ\}$)
  • Fill-rate (fraction of pixels generating $\geq 1$ event per polarizer revolution)
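The first two metrics are straightforward to compute from per-pixel unit normals; a small sketch with hypothetical helper names (not from the paper):

```python
import numpy as np

def angular_error_deg(n_pred, n_gt):
    """Per-pixel angular error in degrees between two unit-normal maps
    of shape (H, W, 3)."""
    dot = np.clip(np.sum(n_pred * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dot))

def mae(err_deg):
    """Mean Angular Error over all pixels."""
    return float(err_deg.mean())

def accuracy_at(err_deg, tau_deg):
    """Fraction of pixels with angular error below the threshold tau (degrees)."""
    return float((err_deg < tau_deg).mean())
```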

Results:

  • On synthetic data ($512 \times 512$, 104 scenes): physics-based event-SfP achieves MAE $\approx 58.2^\circ$ (vs. $62.5^\circ$ for the best 12-frame classical method), while learning-based event-SfP achieves MAE $\approx 27.9^\circ$ (on par with image-based learning).
  • On real data ($1280 \times 720$, 90 scenes, average fill-rate $\sim 3.6\%$, $C \sim 30$–$50\%$): physics-based event-SfP reduces MAE by $\sim 25\%$ relative to the best classical methods (from $\sim 56.3^\circ$–$72.5^\circ$ to $38.8^\circ$), while learning-based event-SfP further reduces error by $52\%$ to $26.7^\circ$, matching image-based learning (Muglikar et al., 2023).

Acquisition Speed:

  • Standard DoFP (Division-of-Focal-Plane) polarization cameras operate at $\sim 22$ Hz (4 angles), while the event-polarizer setup attains an effective 50 Hz or higher, with continuous angular coverage and no duty-cycle loss at full (1 MP) resolution, effectively breaking the classic speed–resolution tradeoff in SfP.

Datasets:

  • ESfP-Synthetic: 104 sequences, $512 \times 512$, 12 angles, synthetic events and ground-truth normals.
  • ESfP-Real: 90 real scenes, $1280 \times 720$, fully aligned events and polarization images, with ground-truth normals from event-based structured light and sub-pixel calibration.

6. Limitations and Prospects for Future Research

Current state-of-the-art event-based SfP methods focus on specular-dominant surfaces (high $\rho$), as diffuse materials yield very weak polarization signals and hence low fill-rates at existing contrast thresholds ($C \approx 30$–$50\%$). Several sensor nonidealities (background-leak events, pixel dead time) increase noise at both low and high polarizer rotation speeds. Improved event cameras with lower contrast thresholds could extend applicability to a wider range of materials.

Future extensions identified include:

  • Sensor-side improvements for lower thresholding
  • Hybrid event + frame depth fusion
  • Learned priors for Fresnel ambiguity resolution
  • Extension to outdoor or unstructured ("in-the-wild") environments
  • Multi-view polarization
  • Joint estimation of albedo and surface normals

These avenues are anticipated to address coverage in challenging scene conditions and further improve accuracy and robustness of 3D reconstruction from polarization cues (Muglikar et al., 2023).

7. Relationship to Broader Event-Based Sensing and Communication Models

While the above methods focus on physical scene understanding, the concept of event-based data streams also encompasses the modeling of point-processes in event-driven communication systems, as exemplified by the Self-Feeding Process (SFP) (Melo et al., 2014). The SFP provides a parsimonious generative model for inter-event times in communications, with a two-parameter point process and a closed-form stationary log-logistic marginal, lending insight into universal temporal statistics observed across event-based domains. While principally applied to communications, extensions to sensor networks suggest possible intersections for modeling event burstiness, anomaly detection, and stochasticity within sensor outputs—including those from event cameras used in SfP. A plausible implication is that understanding burst patterns and temporal correlations in sensor events could further enhance the robustness and interpretability of event-based vision pipelines.
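For illustration only, inter-event times with a log-logistic marginal (the SFP's stationary distribution) can be drawn by inverse-CDF sampling; this sketch reproduces the marginal, not the SFP's full two-parameter generative dynamics, and the parameter values are arbitrary:

```python
import numpy as np

def sample_log_logistic(alpha, beta, size, seed=None):
    """Inverse-CDF sampling of a log-logistic distribution with scale alpha
    (which is also the median) and shape beta:
    F^{-1}(u) = alpha * (u / (1 - u))**(1 / beta)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)

# Example: 100k simulated inter-event times with a median gap of 2.0 s.
gaps = sample_log_logistic(alpha=2.0, beta=3.0, size=100_000, seed=0)
```

Such heavy-tailed gap distributions are one way to model the burstiness that event-based sensing pipelines, including event-camera streams, must tolerate.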
