Assess performance of projection-based training under alternative metrics

Determine whether the performance pattern observed in the synthetic learning study also holds under metrics other than top‑1 accuracy. In that study, training with targets defined by the Kullback–Leibler projection of the model's prediction onto the admissible set F^{box}(π) induced by the possibilistic annotation π yields higher test performance than training with the fixed antipignistic probability target. The question is whether this advantage persists under negative log-likelihood, Brier score, and measures of constraint satisfaction with respect to the linear constraint sets C_i that define F^{box}(π).

Background

The paper introduces a projection-based training objective for multi-class classification with possibilistic supervision: for each instance, the model's predictive probability vector is projected (in the Kullback–Leibler sense) onto a convex admissible set F^{box}(π) defined by dominance constraints from the possibility/necessity measures and linear shape (gap) constraints preserving the qualitative order of the possibility distribution π.
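As a minimal illustration of this kind of projection step, the sketch below minimizes KL(q ‖ p) over a simplified box set with per-class lower/upper bounds; the paper's full F^{box}(π) also carries dominance and gap (order-preserving) constraints, which are omitted here. The bounds and the use of SciPy's SLSQP solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def kl_projection(p, lower, upper, eps=1e-12):
    """Project a probability vector p onto {q : lower <= q <= upper, sum(q) = 1}
    by minimizing KL(q || p). Simplified sketch: the dominance and gap
    constraints of the paper's F^{box}(pi) are not included."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)

    def kl(q):
        q = np.clip(q, eps, 1.0)
        return float(np.sum(q * np.log(q / p)))

    cons = [{"type": "eq", "fun": lambda q: np.sum(q) - 1.0}]
    q0 = np.clip(p, lower, upper)          # feasible-ish starting point
    q0 = q0 / q0.sum()
    res = minimize(kl, q0, bounds=list(zip(lower, upper)),
                   constraints=cons, method="SLSQP")
    return res.x

# Example: a prediction that violates a (hypothetical) lower bound on class 2
p = np.array([0.7, 0.2, 0.1])
lower = np.array([0.0, 0.3, 0.0])   # class 2 must receive at least 0.3
upper = np.array([1.0, 1.0, 1.0])
q = kl_projection(p, lower, upper)
```

With only the bound on class 2 active, the KL projection raises that coordinate to 0.3 and rescales the remaining mass proportionally to p, so q is approximately [0.6125, 0.3, 0.0875].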

In the synthetic experiments, Model A (projection-based target) is compared to Model B (fixed target given by the antipignistic probability derived from π) using top-1 accuracy. The results show that, particularly under ambiguous supervision and limited training data, the projection-based objective tends to yield higher test accuracy.

The authors note that only top-1 accuracy was used for evaluation and explicitly pose the question of whether the same advantage persists under other metrics such as negative log-likelihood, Brier score, and constraint-satisfaction measures tied to the defining constraints C_i of F^{box}(π).
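The three alternative criteria are standard and cheap to compute once predicted probabilities are available. The sketch below gives one plausible implementation; the constraint-satisfaction measure uses per-instance box bounds as a simplified stand-in for the sets C_i, whose exact form (dominance plus gap constraints) is defined in the paper.

```python
import numpy as np

def nll(probs, labels, eps=1e-12):
    """Average negative log-likelihood of the true class."""
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(np.clip(picked, eps, 1.0))))

def brier(probs, labels):
    """Multi-class Brier score: mean squared distance to the one-hot target."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

def constraint_satisfaction(probs, lowers, uppers, tol=1e-6):
    """Fraction of instances whose prediction lies inside per-instance box
    bounds (a simplified proxy for satisfying the constraint sets C_i)."""
    inside = np.all((probs >= lowers - tol) & (probs <= uppers + tol), axis=1)
    return float(np.mean(inside))

# Toy evaluation data (illustrative only)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.5, 0.3]])
labels = np.array([0, 1])
lowers = np.zeros_like(probs)
uppers = np.array([[0.7, 1.0, 1.0],   # first instance caps class 0 at 0.7
                   [1.0, 1.0, 1.0]])
```

Here nll(probs, labels) ≈ 0.458, brier(probs, labels) = 0.22, and constraint_satisfaction(probs, lowers, uppers) = 0.5, since the first prediction exceeds its class-0 upper bound.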

References

We focus on top-$1$ accuracy as the primary evaluation metric in this study. Assessing whether the same pattern holds for other criteria (e.g., negative log-likelihood, Brier score, or measures of constraint satisfaction with respect to the constraint sets $C_i$ defining $\mathcal{F}^{\mathrm{box}}$) requires additional experiments and is left for future work.

Probabilistic classification from possibilistic data: computing Kullback-Leibler projection with a possibility distribution  (2604.01939 - Baaj et al., 2 Apr 2026) in Subsection "Results" in Section 5.2 (Learning from possibilistic supervision on synthetic data)