Eyelid Angle (ELA): 3D Biometric Analysis

Updated 17 February 2026
  • Eyelid Angle (ELA) is a quantitative 3D biometric metric characterizing eye openness by extracting geometric features from eyelid surfaces.
  • ELA leverages fitted 3D plane normals to provide viewpoint invariance and lower variance compared to traditional 2D metrics like EAR.
  • ELA supports robust blink detection and synthetic data augmentation via Blender, enhancing driver state monitoring and ADAS research.

The Eyelid Angle (ELA) is a quantitative biometric metric characterizing eye openness, defined via the 3D geometry of eyelid surfaces as extracted from facial landmarks. The ELA metric enables robust, viewpoint-invariant quantification of eyelid motion, supporting precise blink detection and drowsiness monitoring in driver state analysis. Unlike the widely used Eye Aspect Ratio (EAR), ELA leverages fitted planes from 3D landmark constellations on the upper and lower eyelids, providing lower variance under head pose changes. The ELA framework further facilitates synthetic data generation by animating avatars in Blender to follow prescribed ELA signals, augmenting empirical datasets for advanced driver assistance system (ADAS) research and development (Wolter et al., 24 Nov 2025).

1. Geometric Formulation and Computation

ELA computation begins by extracting 3D facial landmarks using MediaPipe Face Mesh V2, which provides (x, y, z) coordinates for 468 keypoints, including seven ordered along each eyelid. Coordinates are normalized per frame: image coordinates by width and height, depth rescaled as z = 1.7 \times z_\text{raw}.

To represent the upper and lower eyelids, the 3D points of each eyelid are collected into matrices \mathbf{L}_u and \mathbf{L}_l (3 \times n each). Each set is zero-centered by subtracting its centroid, yielding \mathbf{A}. Singular value decomposition (SVD) of \mathbf{A} produces an orthonormal basis whose third column corresponds to the normal \mathbf{n} of the best-fit eyelid plane. Normal orientation is regularized using the sign of \mathbf{n} \cdot (\mathbf{A}_{*,i+1} \times \mathbf{A}_{*,i}) to ensure consistency.

The raw eyelid angle for an eye is then computed as:

\mathrm{ELA}_\mathrm{raw} = \arccos(\mathbf{n}_u \cdot \mathbf{n}_l)

where \mathbf{n}_u and \mathbf{n}_l are the upper and lower eyelid plane normals.
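The plane-fitting and angle computation above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: the per-frame coordinate normalization and the sign-regularization step are omitted, and the function names are assumptions.

```python
import numpy as np

def eyelid_plane_normal(points):
    """Fit a plane to a 3xN matrix of eyelid landmarks and return its normal.

    The normal is the left-singular vector for the smallest singular value
    of the zero-centered point matrix (the third column of U).
    Note: the paper's sign-consistency regularization is omitted here.
    """
    A = points - points.mean(axis=1, keepdims=True)  # zero-center (3xN)
    U, _, _ = np.linalg.svd(A)
    return U[:, 2]  # third column: normal of the best-fit plane

def ela_raw(upper, lower):
    """Raw eyelid angle (radians) between upper and lower eyelid plane normals."""
    n_u = eyelid_plane_normal(upper)
    n_l = eyelid_plane_normal(lower)
    cos_angle = np.clip(np.dot(n_u, n_l), -1.0, 1.0)  # guard arccos domain
    return np.arccos(cos_angle)
```

For coplanar landmarks (e.g. all in the z = 0 plane) the recovered normal is the z-axis up to sign, and identical upper/lower point sets yield a zero angle.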

When aggregating information from both eyes, the yaw angle \beta from the face’s 3D pose is used to weight ELA values via the sigmoid visibility function \sigma(x) = 1/(1+e^{-x}), yielding:

\mathrm{ELA}_\mathrm{combined} = \sigma(-4\beta)\,\mathrm{ELA}_\mathrm{left} + \sigma(+4\beta)\,\mathrm{ELA}_\mathrm{right}

This ensures the eye more directly facing the camera contributes more to the aggregate measure.
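The yaw weighting can be sketched as follows (illustrative; the function name and the yaw unit are assumptions). Since \sigma(-x) + \sigma(+x) = 1, the weights always sum to one, so the combination is a convex blend of the two per-eye values.

```python
import numpy as np

def combined_ela(ela_left, ela_right, yaw):
    """Yaw-weighted combination of per-eye ELA values.

    sigma(-4*yaw) weights the left eye and sigma(+4*yaw) the right eye,
    so the eye facing the camera more directly dominates. At yaw = 0 the
    result is the plain average of the two eyes.
    """
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sigma(-4 * yaw) * ela_left + sigma(+4 * yaw) * ela_right
```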

2. ELA versus Eye Aspect Ratio (EAR)

The Eye Aspect Ratio (EAR) defines openness as a 2D distance ratio:

\mathrm{EAR} = \frac{\|p_2 - p_6\| + \|p_3 - p_5\|}{2\,\|p_1 - p_4\|}

where p_1, \ldots, p_6 are landmarks localized around the eye. The EAR is sensitive to perspective and diminishes in reliability under head rotation due to foreshortening.
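The EAR formula transcribes directly into code (illustrative; the landmark ordering follows the convention above, with p_1/p_4 the horizontal eye corners):

```python
import numpy as np

def ear(landmarks):
    """Eye Aspect Ratio from six 2D eye landmarks p1..p6, shape (6, 2).

    p1/p4 are the horizontal corners; p2, p3 (upper lid) pair vertically
    with p6, p5 (lower lid).
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (
        2.0 * np.linalg.norm(p1 - p4)
    )
```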

ELA, in contrast, is derived from the angular relationship of fitted 3D planes, rendering it invariant to rigid facial rotations. In a synthetic evaluation with the eyelid held at a fixed ELA of 60^\circ and the camera sweeping up to \pm 40^\circ about the vertical or horizontal axis, raw ELA showed a mean absolute error (MAE) of 2.8^\circ (vertical) and 3.3^\circ (horizontal), while EAR varied by 10–15% or more under the same transformations. Visualization depicts ELA as having a near-flat response across viewpoint shifts, while EAR fluctuates substantially (see Fig. “ELAvsEAR” in (Wolter et al., 24 Nov 2025)).

3. Blink Detection and Drowsiness Classification

The ELA-driven blink detection framework comprises several signal processing and statistical stages:

  • Post-processing: Raw ELA time series are filtered using a 1D Gaussian kernel (\sigma = \mathrm{FPS}/30).
  • Edge Detection: Calculate the temporal derivative d(t) = \mathrm{d}\,\mathrm{ELA}_\mathrm{filt}/\mathrm{d}t, and apply k-means clustering (k = 2) to its local extrema to separate “falling” (negative slope, m_1) from “rising” (positive slope, m_2) transitions.
  • Blink Windowing: The relevant blink interval is defined using local maxima/minima before and after the identified extrema; tangents at these points intersect with minima to define closing (t_1 \to t_2), closed (t_2 \to t_3), and reopening (t_3 \to t_4) durations.
  • Feature Extraction: Table 1 in (Wolter et al., 24 Nov 2025) details the computed blink features:
Temporal Feature | Mathematical Expression | Description
Closing duration (d_1) | t_2 - t_1 | Time to close the eyelid
Closed duration (d_2) | t_3 - t_2 | Time the eyelid remains closed
Reopening duration (d_3) | t_4 - t_3 | Time to reopen the eyelid
Amplitude | (\mathrm{ELA}_\mathrm{start} - \mathrm{ELA}_\mathrm{min}) / \mathrm{ELA}_\mathrm{start} | Relative openness delta
A/V ratio | \mathrm{Amplitude} / \max \dot{\mathrm{ELA}}_\mathrm{rising} | Amplitude-to-velocity ratio
Normalized area | | Area below the reopening curve
PERCLOS | | Fraction of time with ELA < 20^\circ between blinks
Inter-blink interval | | Time between consecutive blinks
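The smoothing and edge-detection stages can be sketched roughly as below. This is a simplified stand-in, not the authors' implementation: a direct Gaussian convolution and a small 1D two-means loop replace whatever filtering and clustering routines the paper uses, and all function names are illustrative.

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """1D Gaussian smoothing of a raw ELA trace (sigma in samples)."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def split_transitions(derivative):
    """Cluster local extrema of d(ELA)/dt into falling/rising groups (k = 2).

    A minimal 1D two-means loop stands in for the paper's k-means step:
    cluster 0 gathers negative-slope (closing) extrema, cluster 1 the
    positive-slope (reopening) ones.
    """
    d = np.asarray(derivative, dtype=float)
    extrema = [i for i in range(1, len(d) - 1)
               if (d[i] - d[i - 1]) * (d[i + 1] - d[i]) < 0]
    values = d[extrema]
    centroids = np.array([values.min(), values.max()])  # init at the extremes
    for _ in range(50):
        labels = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
        centroids = np.array([values[labels == k].mean() for k in (0, 1)])
    falling = [extrema[i] for i in range(len(extrema)) if labels[i] == 0]
    rising = [extrema[i] for i in range(len(extrema)) if labels[i] == 1]
    return falling, rising
```

On a synthetic trace with a single dip (a blink), the falling cluster picks up the negative-slope transition and the rising cluster the positive one.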

Blink detection employs rules to avoid merged/masked events, running analyses in 90 s windows updated every 60 s.

For drowsiness inference, a 10-NN classifier is trained on the means and standard deviations of the blink features, with PCA (5 components) applied to the inputs, to predict alert versus drowsy states. ELA-derived features replicate the classic finding that drowsiness is marked by longer closing (d_1) and closed (d_2) durations and slower reopening (d_3) [(Wolter et al., 24 Nov 2025), Fig. 5].
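A rough stand-in for this classification stage is sketched below using plain NumPy in place of a library PCA/k-NN implementation. The 5-component and 10-neighbor settings follow the text; the feature matrix, labels, and function names are assumptions for illustration.

```python
import numpy as np

def pca_fit_transform(X, n_components=5):
    """Project row-wise feature vectors onto the top principal components.

    Returns the projected data plus the mean and component matrix needed
    to transform held-out samples the same way.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    components = Vt[:n_components]
    return (X - mu) @ components.T, mu, components

def knn_predict(X_train, y_train, X_test, k=10):
    """k-nearest-neighbour majority vote with Euclidean distance."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

Given blink-feature statistics per window, alert/drowsy labels would be predicted by projecting held-out windows with the stored mean and components, then voting among the 10 nearest training windows.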

4. Synthetic Data Augmentation via Blender

ELA’s geometric definition enables automated eye animation for data augmentation. The methodology consists of:

  1. Blink Signal Synthesis: Blink durations (d_1, d_2, d_3) and inter-blink intervals (\Delta t) are drawn from empirical distributions (Caffier et al., 2003): d_1 \sim U(120, 250) ms, d_2 \sim U(50, 100) ms, d_3 \sim U(120, 300) ms, and \Delta t \sim \mathcal{N}(5\,\text{s}, 1\,\text{s}^2) for alert drivers; drowsy blinks are longer.
  2. Avatar Animation: A rigged Blender avatar (controlled by shape keys for eyelids) follows the constructed ELA waveform, interpolated using splines for physically plausible motion.
  3. Camera and Lighting Randomization: The virtual camera’s yaw and pitch are jittered according to normal distributions (\mathrm{yaw} \sim \mathcal{N}(0, 5^\circ), \mathrm{pitch} \sim \mathcal{N}(0, 3^\circ)), with field of view and lighting varied by \pm 2^\circ and \pm 20\%, respectively.
  4. Noise Augmentation: Gaussian noise \mathcal{N}(0, \sigma_n) is added to the ELA trajectory to emulate landmark jitter.
  5. Benchmarking: Constant ELA ground truths (0^\circ–70^\circ) are used while sweeping the camera orientation by \pm 40^\circ to evaluate geometric error, revealing mean absolute ELA errors of 4–7^\circ at large angles and up to 18.3^\circ at 0^\circ (fully closed).

This pipeline provides scalable, controlled datasets for training and benchmarking drowsiness classifiers under varied conditions.
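Step 1 of the pipeline can be sketched as an ELA-trace generator for an alert driver, using the stated distributions. This is illustrative only: the spline interpolation and the Blender animation steps are not reproduced, linear ramps stand in for the interpolated eyelid motion, and the function name and defaults are assumptions.

```python
import numpy as np

def synth_blink_signal(duration_s=30.0, fps=30, open_ela=60.0, seed=0):
    """Synthesize an 'alert' ELA trace: d1~U(120,250) ms closing,
    d2~U(50,100) ms closed, d3~U(120,300) ms reopening, and
    inter-blink gaps ~ N(5 s, 1 s^2), per the distributions above.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fps)
    ela = np.full(n, open_ela)
    t = 0.0
    while True:
        t += max(rng.normal(5.0, 1.0), 0.5)      # inter-blink interval (s)
        d1 = rng.uniform(0.120, 0.250)           # closing duration (s)
        d2 = rng.uniform(0.050, 0.100)           # closed duration (s)
        d3 = rng.uniform(0.120, 0.300)           # reopening duration (s)
        if (t + d1 + d2 + d3) * fps >= n:
            break
        i1, i2 = int(t * fps), int((t + d1) * fps)
        i3, i4 = int((t + d1 + d2) * fps), int((t + d1 + d2 + d3) * fps)
        if i2 > i1:
            ela[i1:i2] = np.linspace(open_ela, 0.0, i2 - i1)  # close (ramp)
        ela[i2:i3] = 0.0                                      # eyelid closed
        if i4 > i3:
            ela[i3:i4] = np.linspace(0.0, open_ela, i4 - i3)  # reopen (ramp)
        t += d1 + d2 + d3
    return ela
```

The resulting trace sits at the open angle between blinks and dips to zero during each closed phase; a drowsy variant would simply sample longer durations.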

5. Experimental Evaluation in Driver Monitoring

Key empirical results on public datasets:

  • ELA versus EAR Stability: Over \pm 40^\circ view sweeps, EAR variance approaches 30%, while ELA’s maximum error remains below 5^\circ (MAE \approx 3^\circ).
  • Accuracy by Angle: At set ELA ground truths (0–70^\circ), the MAE is highest at closed eyes (18.3^\circ at 0^\circ) and falls to 4–7^\circ for open eyes (50–70^\circ).
  • Blink Detection on DMD: Across 16 videos (5441 ± 183 frames each; 1578 labeled blinks), ELA-based detection achieved an accuracy (DA) of 89.4%.
  • Drowsiness Classification: On UTA-RLDD, multiclass (alert/low/drowsy) video-level accuracy was 52.5% (baseline with all features: 65.2%), with binary (alert vs. drowsy) accuracy at 80.4%.
  • Synthetic Data Impact: Training/testing on matched FPS (10, 30, 50 Hz) resulted in AC1 accuracies of 77%, 98%, and 92%, respectively; accuracy dropped to 69% or 46% on cross-rate data. Blink detection on synthetic data showed 51% DA at 10 Hz, improving to 95% at 30/50 Hz.

These results demonstrate ELA’s reproducibility, viewpoint invariance, and discriminatory power for eye openness and blink analytics. The metric also enables the creation and validation of large-scale synthetic datasets with precise parametric control for driver state monitoring.

6. Implementation Considerations and Availability

ELA code, Blender scenes, and dataset-generation scripts will be released as open-source resources contingent on paper acceptance, fostering reproducible research in driver monitoring and ADAS evaluation (Wolter et al., 24 Nov 2025). The inclusion of procedural camera and lighting diversifications, combined with physiologically plausible blink kinematics, supports generalization across real-world deployment contexts.

7. Significance and Research Directions

ELA addresses the limitations of 2D ocular metrics under variable camera viewpoints, supplying a stable, geometric foundation for both statistical learning and physically grounded simulation. Its integration with Blender facilitates targeted data augmentation, contributing to more robust and diverse datasets in driver monitoring research. A plausible implication is that ELA may underpin further advancements in explainable computer vision for human state analysis, especially where 3D information from monocular video can be reliably estimated or simulated. The framework also underscores the importance of aligning biometric signals and synthetic augmentation strategies with task-specific invariances, particularly for safety-critical ADAS deployments (Wolter et al., 24 Nov 2025).
