Geometric Evidential Head
- A geometric evidential head is a modular component that combines geometric priors with fixed statistical rules to achieve precise uncertainty quantification.
- It employs inverse sensor models and Dempster’s rule for evidential fusion, ensuring monotonic convergence and sharp boundary delineation.
- Applications in autonomous occupancy mapping and protein structure prediction demonstrate its effectiveness in maintaining calibrated, structure-aware uncertainty profiles.
A geometric evidential head is a modular component used for uncertainty quantification in structured prediction and evidential mapping, distinguished by its explicit encoding of geometric relationships and its generation of probability masses or parametric uncertainty distributions through fixed rules or hand-crafted mechanisms. The geometric evidential head operates independently of machine-learned priors, leveraging explicit sensor or spatial models to assign and update evidence in accordance with domain-specific constraints. This architecture is central to evidential occupancy mapping for autonomous perception systems (Bauer et al., 2020), and to calibrated uncertainty quantification with structure-aware guarantees in biomolecular modeling (Shihab et al., 12 Jan 2026).
1. Mathematical Formulation in Occupancy Mapping
In evidential occupancy frameworks, the geometric evidential head employs an inverse sensor model (ISM) to discretize the world into grid cells and assign uncertainty masses based on individual sensor detections. For each cell $c$, the evidence is represented as a mass vector $m(c) = (m_F(c), m_O(c), m_U(c))$ with $m_F + m_O + m_U = 1$, where $m_F$, $m_O$, and $m_U$ denote the masses for "free," "occupied," and "unknown," respectively.
The assignment utilizes geometric primitives:
- If a cell is within the beam and before the detection, assign $m_F = p_F$, $m_O = 0$, $m_U = 1 - p_F$.
- For the cell containing the detection, set $m_O = p_O$, $m_F = 0$, $m_U = 1 - p_O$.
- Cells outside the beam receive $m_U = 1$.
The per-detection masses $p_F$ and $p_O$ are fixed, sensor-specific constants, parameterized per modality (e.g., for radar). This assignment is repeated over sensor sweeps, producing high-precision allocation of masses along the beam and detection surface (Bauer et al., 2020).
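A minimal sketch of this beam-wise assignment in Python; the grid length and the per-detection masses `P_FREE` and `P_OCC` are illustrative assumptions, not the radar parameterization of Bauer et al. (2020):

```python
import numpy as np

# Assumed per-detection evidence masses (illustrative, not from the paper).
P_FREE, P_OCC = 0.7, 0.9

def beam_masses(n_cells: int, hit_index: int) -> np.ndarray:
    """Per-cell mass vectors (m_F, m_O, m_U) for a single beam.

    Cells before the detected range carry evidence for "free", the cell
    containing the detection carries evidence for "occupied", and cells
    beyond the detection (or outside the beam) remain fully "unknown".
    """
    masses = np.zeros((n_cells, 3))
    masses[:, 2] = 1.0                                  # default: all mass on m_U
    masses[:hit_index] = (P_FREE, 0.0, 1.0 - P_FREE)    # traversed free space
    masses[hit_index] = (0.0, P_OCC, 1.0 - P_OCC)       # detection cell
    return masses

print(beam_masses(6, hit_index=3))
```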
2. Evidential Update and Fusion
Sequential fusion of evidential masses occurs under independence assumptions, using combination rules:
- Dempster's rule: $(m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$, where $K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$ is the conflict mass.
- Yager's rule for learning-based masses: the same intersections are combined without normalization, and the conflict is assigned to the unknown mass, $m(\Theta) = m_1(\Theta)\, m_2(\Theta) + K$.
Geometric heads exclusively use Dempster’s rule, guaranteeing monotonic reduction of the unknown mass and convergence to certainty in regions with dense detections (Bauer et al., 2020).
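On the two-element frame $\Theta = \{F, O\}$ both rules reduce to closed forms over the mass vectors $(m_F, m_O, m_U)$ defined above; a minimal sketch:

```python
def dempster(m1, m2):
    """Dempster's rule on the frame {free, occupied}; a mass vector is
    (m_F, m_O, m_U) with m_U = m({free, occupied})."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    k = f1 * o2 + o1 * f2                          # conflict mass K
    f = (f1 * f2 + f1 * u2 + u1 * f2) / (1.0 - k)  # normalize by 1 - K
    o = (o1 * o2 + o1 * u2 + u1 * o2) / (1.0 - k)
    u = (u1 * u2) / (1.0 - k)
    return (f, o, u)

def yager(m1, m2):
    """Yager's rule: same intersections, but the conflict K is assigned
    to the unknown mass instead of being normalized away."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    k = f1 * o2 + o1 * f2
    return (f1 * f2 + f1 * u2 + u1 * f2,
            o1 * o2 + o1 * u2 + u1 * o2,
            u1 * u2 + k)
```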
3. Lower-Bound Certainty Control
Fusion with learned priors, such as deep ISMs, is regulated via a scalar discount factor $\alpha \in [0,1]$ applied to the learned masses before combination: $\tilde m(A) = \alpha\, m(A)$ for $A \neq \Theta$ and $\tilde m(\Theta) = 1 - \alpha\,(1 - m(\Theta))$, where $\alpha$ is driven toward zero at a tuning rate $\lambda$ as the cell's unknown mass $m_U$ approaches a user-specified lower bound $\epsilon$.
This mechanism ensures that once geometric evidence has reduced $m_U$ below $\epsilon$, the learned prior cannot reverse the geometric allocation, thereby maintaining geometric consistency at occupancy boundaries (Bauer et al., 2020).
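A sketch of this gating, assuming standard Shafer discounting and a hypothetical linear schedule for $\alpha$ (the exact schedule of Bauer et al., 2020 is not reproduced here):

```python
def discount(m, alpha):
    """Shafer discounting: scale the committed masses by alpha and move
    the remainder to 'unknown'; alpha=1 keeps m, alpha=0 is vacuous."""
    f, o, u = m
    return (alpha * f, alpha * o, 1.0 - alpha * (1.0 - u))

def gate_learned_prior(m_learned, m_u_cell, eps=0.1, rate=5.0):
    """Hypothetical gating of a learned ISM mass: alpha falls to zero as
    the cell's geometric unknown mass m_U approaches the lower bound eps,
    so a fully gated prior cannot overturn the geometric allocation."""
    alpha = min(1.0, max(0.0, rate * (m_u_cell - eps)))
    return discount(m_learned, alpha)
```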
4. Geometric Evidential Head in Graph-Based Uncertainty Quantification
In CalPro for protein structure prediction, the geometric evidential head outputs Normal–Inverse–Gamma (NIG) distributions per residue node using a graph neural network:
- Predictive mean $\mu$
- Mean strength $\nu > 0$
- Inverse-Gamma shape ($\alpha > 1$)
- Inverse-Gamma scale ($\beta > 0$)
Uncertainty decomposition:
- Aleatoric variance: $\mathbb{E}[\sigma^2] = \beta / (\alpha - 1)$
- Epistemic variance: $\operatorname{Var}[\mu] = \beta / (\nu (\alpha - 1))$
- Total uncertainty: $\beta / (\alpha - 1) + \beta / (\nu (\alpha - 1))$
The geometric head, implemented via message passing over a residue graph (nodes: residues; edges: spatial proximity or sequence adjacency), incorporates backbone features, geometric features, and bio-prior annotations. Predictive outputs are regularized to avoid degenerate certainty allocations (Shihab et al., 12 Jan 2026).
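A minimal sketch of such a head, mapping per-residue node embeddings (e.g., from message passing) to constrained NIG parameters and the variance decomposition above; the layer sizes and softplus parameterization are assumptions, not the CalPro architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NIGHead(nn.Module):
    """Map per-residue GNN embeddings to NIG parameters (mu, nu, alpha, beta)
    with positivity constraints nu > 0, alpha > 1, beta > 0."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, 4)  # -> (mu, nu, alpha, beta)

    def forward(self, h: torch.Tensor):
        mu, nu, alpha, beta = self.proj(h).unbind(-1)
        nu = F.softplus(nu) + 1e-6            # mean strength, nu > 0
        alpha = F.softplus(alpha) + 1.0       # shape, alpha > 1
        beta = F.softplus(beta) + 1e-6        # scale, beta > 0
        aleatoric = beta / (alpha - 1.0)              # E[sigma^2]
        epistemic = beta / (nu * (alpha - 1.0))       # Var[mu]
        return mu, aleatoric, epistemic
```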
5. Convergence Properties
The geometric evidential head yields analytical convergence guarantees under repeated geometric fusion:
- Under Dempster’s rule, the unknown mass is monotonically non-increasing for any cell observed by sensor beams: $m_U^{(t+1)} \le m_U^{(t)}$.
- Occupied cells: $m_O$ accumulates with each hit ($m_U$ is scaled by $1 - p_O$ per detection), converging to unity.
- Free-space cells: the cone-based mechanism ensures $m_F \to 1$.
- The occupied/free boundary approaches a sharp step function in the limit.
- Geometric fusion is associative and commutative; repeated identical evidence drives the unknown mass geometrically toward zero over revisited regions.
- Conflicts between learned priors and geometric allocations (via Yager’s fusion) are stored in the unknown mass and then resolved by subsequent geometric head inputs without violating the lower-bound constraint (Bauer et al., 2020).
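These properties can be checked numerically with the `dempster` sketch from Section 2: fusing the same occupied-cell evidence repeatedly shrinks the unknown mass geometrically (the evidence values below are illustrative):

```python
# Uses dempster() from the Section 2 sketch.
m = (0.0, 0.0, 1.0)        # fully unknown cell
hit = (0.0, 0.9, 0.1)      # assumed occupied-cell evidence per detection
for t in range(5):
    m = dempster(m, hit)
    print(f"hit {t + 1}: m_O = {m[1]:.4f}, m_U = {m[2]:.4f}")
# m_U decays as 0.1**t, so m_O converges to 1 over revisited cells.
```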
6. Empirical Isolation and Functional Impact
Ablation studies in CalPro isolate the contribution of the geometric evidential head:
- Removing the evidential NIG head increases interval width and calibration error (ECE 0.025 vs. 0.021), indicating loss of sharp uncertainty delineation.
- Without geometric evidential mass assignments, coverage statistics are less robust and interval sharpness degrades.
- The geometric head allows precise localization of sharp transitions between high-certainty and high-uncertainty regions.
- In protein structure applications, geometric priors induce structure-aware uncertainty boosts in regions annotated by domain knowledge (e.g., disorder, flexibility).
This suggests that the geometric evidential head is essential for maintaining sharp, interpretable uncertainty profiles, rapid convergence to well-defined boundaries, and reliable local coverage in structured prediction tasks (Shihab et al., 12 Jan 2026).
7. Synthesis and Applications
Geometric evidential heads instantiate sensor- or structure-based priors in evidential architectures across autonomous perception and biological modeling:
- In occupancy mapping, they guarantee monotonic convergence to ground-truth boundaries, counterbalancing data-driven priors from deep ISMs for rapid yet precise environmental modeling (Bauer et al., 2020).
- In protein structure prediction, they produce NIG-based uncertainty distributions that are regularized by geometry and domain priors, maintaining finite-sample coverage under distribution shift (Shihab et al., 12 Jan 2026).
A plausible implication is that geometric evidential heads serve as localized, interpretable uncertainty modules, anchoring model predictions to explicit domain knowledge and robust convergence principles, and are indispensable in architectures where precise boundary demarcation and coverage calibration are necessary for downstream decision-making.