
Generalized Cross-Entropy Loss

Updated 5 February 2026
  • Generalized cross-entropy loss is a family of loss functions derived from categorical cross-entropy, introducing tunable parameters to balance noise robustness with predictive accuracy.
  • These losses extend classical cross-entropy through variants like Box–Cox, comp-sum, and Rényi forms, enabling improved handling of label noise, class imbalance, and structured outputs.
  • Empirical studies demonstrate that optimal parameter tuning in generalized cross-entropy losses enhances model convergence, training stability, and overall generalization on noisy datasets.

Generalized cross-entropy loss refers to a class of loss functions that extend or interpolate the standard categorical cross-entropy (CCE) loss via the introduction of a tunable parameter or structural generalization. These losses provide enhanced robustness, flexibility, and tailored inductive biases for classification and related learning tasks, especially in settings with label noise, class imbalance, or structured outputs. Notable constructions include the parameterized loss of Zhang & Sabuncu (2018) (Zhang et al., 2018), the comp-sum GCE family (Mao et al., 2023), Rényi-type cross-entropy losses (Thierrin et al., 2022, Thierrin et al., 2022), t-norm–based generalizations (Giannini et al., 2019), and others. Each instantiates a distinct, mathematically principled trade-off between the classical behavior of CCE and alternative targets such as mean absolute error (MAE), robust entropic losses, or structured/cognizant surrogates.

1. Mathematical Formulations and Core Families

Several mathematically distinct but thematically related generalizations of cross-entropy loss are prevalent:

a. Box–Cox/Power Generalized Cross-Entropy ($\mathcal{L}_q$)

For softmax outputs $p \in \Delta^{c-1}$ and one-hot target $\mathbf{e}_j$, the GCE loss is defined as

$$\mathcal{L}_q(p, \mathbf{e}_j) = \frac{1 - p_j^q}{q}, \quad q \in (0,1].$$

Limits recover CCE ($q \to 0$) and MAE ($q = 1$) (Zhang et al., 2018).
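As a minimal numerical sketch (plain Python, not taken from the cited paper), the loss and its two limiting cases can be checked directly:

```python
import math

def gce_loss(p, j, q):
    """Box-Cox GCE: (1 - p_j^q) / q for softmax output p and true class j."""
    return (1.0 - p[j] ** q) / q

def cce_loss(p, j):
    """Standard categorical cross-entropy for a one-hot target e_j."""
    return -math.log(p[j])

p = [0.7, 0.2, 0.1]  # softmax probabilities
j = 0                # true class index

# q -> 0 recovers CCE; q = 1 gives 1 - p_j, i.e. half the L1 (MAE) distance to e_j.
print(gce_loss(p, j, 1e-8))  # approaches -log(0.7) = cce_loss(p, j)
print(gce_loss(p, j, 1.0))   # 1 - 0.7
```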

b. Comp-Sum GCE ($\ell_{\tau}^{\rm GCE}$)

For a score function $h$, the comp-sum GCE loss is

$$\ell^{\rm GCE}_\tau(h,x,y) = \frac{1}{1-\tau}\left[\left(\sum_{y'} e^{h(x,y')-h(x,y)}\right)^{1-\tau} - 1\right],$$

equivalent to the Box–Cox formulation under reparameterization, for $\tau \in (1,2)$ (Mao et al., 2023).
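The equivalence can be verified numerically: since the inner sum equals the reciprocal of the softmax probability of the true class, the comp-sum loss reduces to the Box–Cox form with exponent q = τ − 1 (an illustrative sketch, with hypothetical helper names):

```python
import math

def comp_sum_gce(scores, y, tau):
    """Comp-sum GCE: (1/(1-tau)) * [(sum_y' exp(h_y' - h_y))^(1-tau) - 1]."""
    s = sum(math.exp(h - scores[y]) for h in scores)
    return (s ** (1.0 - tau) - 1.0) / (1.0 - tau)

scores = [2.0, 0.5, -1.0]
p_y = math.exp(scores[0]) / sum(math.exp(h) for h in scores)  # softmax of true class
tau = 1.5

# Equals the Box-Cox GCE with q = tau - 1 applied to p_y: (1 - p_y^(tau-1))/(tau-1).
print(comp_sum_gce(scores, 0, tau))
print((1.0 - p_y ** (tau - 1.0)) / (tau - 1.0))
```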

c. Rényi and Natural Rényi Cross-Entropy

For distributions $P$, $Q$ and order $\alpha \neq 1$:

$$H_\alpha(P\|Q) = \frac{1}{1-\alpha}\ln\sum_x P(x)\, Q(x)^{\alpha-1}$$

$$\widetilde{H}_\alpha(P\|Q) = D_\alpha(P\|Q) + H_\alpha(P), \quad D_\alpha(P\|Q) = \frac{1}{\alpha-1}\ln\sum_x P(x)^\alpha Q(x)^{1-\alpha}$$

(Thierrin et al., 2022, Thierrin et al., 2022).
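A direct implementation of the Rényi cross-entropy (illustrative, plain Python) with a numerical check that α → 1 recovers the Shannon cross-entropy:

```python
import math

def renyi_cross_entropy(P, Q, alpha):
    """Renyi cross-entropy H_alpha(P||Q) = (1/(1-alpha)) ln sum_x P(x) Q(x)^(alpha-1)."""
    return math.log(sum(p * q ** (alpha - 1.0) for p, q in zip(P, Q))) / (1.0 - alpha)

def shannon_cross_entropy(P, Q):
    """Shannon cross-entropy -sum_x P(x) ln Q(x), the alpha -> 1 limit."""
    return -sum(p * math.log(q) for p, q in zip(P, Q))

P = [0.6, 0.3, 0.1]
Q = [0.5, 0.25, 0.25]
print(renyi_cross_entropy(P, Q, 1.0001))  # close to the Shannon value below
print(shannon_cross_entropy(P, Q))
```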

d. T-norm Generator Losses

Losses derive from a chosen strictly decreasing generator $g$ associated with an Archimedean t-norm:

$$L_g^+(\mathcal{X}_j, p_j) = \sum_{x\in\mathcal{X}_j} g(p_j(x)).$$

Examples include Schweizer–Sklar and Frank families, allowing interpolation between CCE and $L_1$-loss (Giannini et al., 2019).
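An illustrative sketch of the generator construction; the two generators shown ($-\log x$ recovering CCE and $1 - x$ giving an $L_1$-type loss) are standard limiting examples, not the full Schweizer–Sklar/Frank parameterizations:

```python
import math

def generator_loss(true_class_probs, g):
    """t-norm generator loss: sum of g(p_j(x)) over the true-class probabilities."""
    return sum(g(p) for p in true_class_probs)

probs = [0.9, 0.6, 0.8]  # p_j(x) for each example x in X_j

# g(x) = -log x recovers categorical cross-entropy; g(x) = 1 - x gives an L1-type loss.
print(generator_loss(probs, lambda p: -math.log(p)))
print(generator_loss(probs, lambda p: 1.0 - p))
```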

e. f-divergence–Generated Cross-Entropy

For convex $f$, the Fenchel–Young loss is

$$\ell_f(s,y;r) = \mathrm{softmax}_f(s;r) + D_f(y\|r) - \langle y, s\rangle,$$

covering CCE, sparsemax, α-entmax, total variation, and more (Roulet et al., 30 Jan 2025).
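In the classical Shannon case (negative-entropy generator, one-hot $y$, uniform reference), the construction collapses to $\mathrm{logsumexp}(s) - s_y$, which is exactly CCE. A quick numerical check of that special case (a sketch under those simplifying assumptions, not the paper's full f-divergence form):

```python
import math

def fenchel_young_shannon(scores, y):
    """Fenchel-Young loss with negative Shannon entropy: logsumexp(s) - s_y.
    For a one-hot target this equals -log softmax_y(s), i.e. CCE."""
    lse = math.log(sum(math.exp(s) for s in scores))
    return lse - scores[y]

scores = [2.0, 0.5, -1.0]
p_y = math.exp(scores[0]) / sum(math.exp(s) for s in scores)
print(fenchel_young_shannon(scores, 0))  # equals -log(p_y)
```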

2. Robustness to Noisy Labels and Theoretical Guarantees

Generalized cross-entropy losses exhibit varying noise-robustness:

  • Symmetry and tolerance: MAE is perfectly symmetric and robust to uniform noise. GCE losses with $0 < q < 1$ are not strictly symmetric, but have bounded deviations that offer controlled trade-offs. Explicit risk bounds under both uniform and class-dependent noise have been established (Zhang et al., 2018).
  • Comp-sum H-consistency: For comp-sum GCE, finite-sample non-asymptotic bounds exist relating excess surrogate risk to zero-one error, quantified by the “minimizability gap.” These gaps shrink as the loss becomes less sharp (higher τ), implying better robustness but mild accuracy trade-offs (Mao et al., 2023).
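The symmetry condition underlying uniform-noise robustness (that the sum of the loss over all labels is constant in $p$) can be checked directly for MAE versus CCE; a minimal sketch in plain Python:

```python
import math

def mae(p, j):
    """Half the L1 distance from p to the one-hot e_j; equals 1 - p_j."""
    return 1.0 - p[j]

def cce(p, j):
    return -math.log(p[j])

# Symmetric loss: sum_j L(p, j) is constant in p, which yields uniform-noise tolerance.
for p in ([0.8, 0.1, 0.1], [0.4, 0.4, 0.2]):
    print(sum(mae(p, j) for j in range(3)))  # always C - 1 = 2, independent of p
    print(sum(cce(p, j) for j in range(3)))  # varies with p: CCE is not symmetric
```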

3. Gradient and Optimization Properties

The implicit weighting of examples induced by gradient forms is pivotal:

  • For $\mathcal{L}_q$: gradient magnitude on the true class scales as $p_y^{q-1}$. As $q \to 0$, hard (low-confidence) examples are upweighted, which speeds up convergence but overfits noise. As $q \to 1$, all examples are equally weighted (MAE) (Zhang et al., 2018).
  • T-norm generators: The derivative $g'(p_j(x))$ may diverge or vanish near $p_j(x) = 0$, providing explicit control over vanishing/exploding gradients and training stability. For cross-entropy ($g(x) = -\log x$), the gradient never vanishes as $x \to 0$ (Giannini et al., 2019).
  • In comp-sum GCE, the O(√n) penalty for multiclass arises in the H-consistency constant, manifesting as a tradeoff with invariance to label noise (Mao et al., 2023).
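The implicit example weighting of $\mathcal{L}_q$ can be made concrete: the true-class gradient magnitude $p_y^{q-1}$ assigns hard (low-confidence) examples a much larger weight for small $q$ and equal weight at $q = 1$. A minimal sketch:

```python
def gce_grad_weight(p_y, q):
    """Magnitude of dL_q/dp_y = p_y^(q-1): the implicit per-example weight."""
    return p_y ** (q - 1.0)

# Ratio of hard (p_y = 0.1) to easy (p_y = 0.9) example weights as q varies.
for q in (0.1, 0.5, 1.0):
    ratio = gce_grad_weight(0.1, q) / gce_grad_weight(0.9, q)
    print(q, ratio)  # ratio shrinks toward 1 as q -> 1 (MAE: equal weighting)
```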

4. Empirical Performance and Practical Recommendations

Empirical studies across deep architectures validate the impact of generalized cross-entropy:

  • On CIFAR-10/100 under uniform and class-dependent label noise, $\mathcal{L}_{0.7}$ achieves test accuracies superior to both standard CCE and MAE (e.g., 87.1% for $\mathcal{L}_{0.7}$ vs 81.9% for CCE at 40% noise) (Zhang et al., 2018).
  • With comp-sum GCE (τ∈(1,2)), accuracies interpolate between CCE and MAE, with τ=1.5 providing nearly optimal tradeoff (e.g., on CIFAR-10, τ=1.5 yields 92.0% vs 92.6% for logistic, but better robustness) (Mao et al., 2023).
  • For Rényi-type GCE, tuning α offers mode-seeking/mode-covering flexibility with demonstrable practical gains in GANs (α-GAN) and robust classification (Thierrin et al., 2022, Thierrin et al., 2022).
  • T-norm families (Schweizer–Sklar, Frank) permit continuous tuning of loss sharpness and gradient response, yielding optimal convergence for appropriate λ choices (Giannini et al., 2019).

5. Connections to Broader Generalization Frameworks

Generalized cross-entropy unifies and connects a broad swath of loss function design:

  • Relation to f-divergences: Nearly all generalizations above, including α-GCE, t-norm generator losses, and comp-sum forms, can be interpreted as parametrizations of f-divergence–based surrogates (Roulet et al., 30 Jan 2025).
  • Structured entropies: Structured cross-entropy and similarity-based surrogates extend the concept by adapting the loss surface to known output topology, class similarity, or hierarchical blocks, encoding richer inductive biases without loss of convexity or differentiability (Lucena, 2022, Kobs et al., 2020).
  • Losses for soft labels: For non-one-hot targets (e.g., in self-labeling or semi-supervised learning), generalizations such as the collision cross-entropy $H_2(y,p) = -\log \sum_k y_k p_k$, which corresponds to a capped Rényi order-2 cross-entropy, are provably superior (Zhang et al., 2023).
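A direct sketch of the collision cross-entropy (plain Python, not from the cited paper), checking its symmetry in the two arguments, a property the standard cross-entropy lacks:

```python
import math

def collision_ce(y, p):
    """Collision cross-entropy H_2(y, p) = -log sum_k y_k p_k."""
    return -math.log(sum(yk * pk for yk, pk in zip(y, p)))

y = [0.7, 0.2, 0.1]  # soft (non-one-hot) target
p = [0.6, 0.3, 0.1]  # model prediction

print(collision_ce(y, p))
print(collision_ce(p, y))  # symmetric: swapping the arguments gives the same value
```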

6. Algorithmic Implementation and Tuning

Generalized cross-entropy losses are practical:

  • The computational complexity is typically $O(C)$ per sample ($C$ classes), identical to CCE.
  • In Box–Cox GCE, the parameter $q$ is tuned via clean validation accuracy or validation gap under synthetic noise. Most robustness gains accrue for $q \in [0.5, 0.8]$ (Zhang et al., 2018).
  • For comp-sum and t-norm cases, hyperparameters ($\tau$, $\lambda$) are tuned via grid search or cross-validation, with empirical gains across a range of architectures and datasets (Mao et al., 2023, Giannini et al., 2019).
  • For Rényi-type losses, plug-and-play implementation is possible: replace CE with the closed-form formula and compute gradients by autodiff or explicit formulae (Thierrin et al., 2022, Thierrin et al., 2022).
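A hedged sketch of the grid-search recipe; `validation_accuracy` here is a hypothetical stand-in for the expensive train-and-validate step that a real pipeline would run per candidate value:

```python
# Hypothetical grid search over q for the Box-Cox GCE loss. In practice each
# evaluation would train (or fine-tune) a model with loss L_q and measure
# accuracy on a clean validation set.
def validation_accuracy(q):
    # toy stand-in peaking inside the empirically recommended range q in [0.5, 0.8]
    return 1.0 - (q - 0.7) ** 2

grid = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
best_q = max(grid, key=validation_accuracy)
print(best_q)
```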

7. Applications, Advantages, and Limitations

  • Advantages: Generalized cross-entropy losses offer improved learning stability, enhanced robustness to label noise, controllable gradient behavior, and flexible adaptation to output structure and semantic similarity.
  • Representative applications: Robust deep classification, GANs, deep clustering, ordinal/structured prediction, learning under label ambiguity, language modeling, and adversarial training (Zhang et al., 2018, Roulet et al., 30 Jan 2025, Thierrin et al., 2022, Zhang et al., 2023, Kobs et al., 2020, Lucena, 2022).
  • Limitations: Choice of generalization parameter requires careful tuning. For extreme parameter values, tradeoffs may result in slow convergence (e.g., MAE case), loose generalization bounds, or suboptimal accuracy on clean data (Zhang et al., 2018, Mao et al., 2023). Structured generalizations (e.g., class similarity, partitions) require reliable prior knowledge of output space relations (Lucena, 2022).

Summary Table: Generalized Cross-Entropy Variants

| Family/Name | Key Parameter(s) | Limit/Key Behavior |
|---|---|---|
| Box–Cox GCE ($\mathcal{L}_q$) | $q \in (0,1]$ | CCE as $q \to 0$, MAE at $q = 1$ |
| Comp-sum GCE ($\ell^{\rm GCE}_\tau$) | $\tau \in (1,2)$ | Logistic loss ($\tau = 1$), MAE ($\tau = 2$) |
| Rényi Cross-Entropy | $\alpha > 0,\ \alpha \neq 1$ | CCE as $\alpha \to 1$, tunable tail mode |
| T-norm Generator Family | $\lambda$ (Schweizer–Sklar, Frank) | CCE ($\lambda \to 0/1$), $L_1$ ($\lambda = 1/\infty$) |
| Collision Cross-Entropy | – (order-2 fixed) | Symmetric, robust under soft labels |

Generalized cross-entropy losses represent a mathematically rigorous and empirically validated family of losses for robust, flexible, and structured deep learning. Adaptation of the key parameter(s) allows interpolation between canonical accuracy-optimized surrogates and losses that encode noise-tolerance, semantic proximity, or structured prior knowledge (Zhang et al., 2018, Mao et al., 2023, Thierrin et al., 2022, Thierrin et al., 2022, Giannini et al., 2019, Roulet et al., 30 Jan 2025, Lucena, 2022, Zhang et al., 2023, Kobs et al., 2020).
