
Generalized Equilibrium Propagation

Updated 5 February 2026
  • Generalized Equilibrium Propagation is an extension of classical EP that unifies multiple perturbative learning schemes through a local two-phase update.
  • It employs bias-cancelled estimators and continual local updates to manage non-conservative vector fields and weight asymmetry in neural systems.
  • Empirical results demonstrate competitive performance with BPTT in deep, oscillator, and physical network implementations.

Generalized Equilibrium Propagation (GEP) refers to a broad extension of the classical Equilibrium Propagation (EP) algorithm, encompassing both theoretical generalizations (e.g., to non-conservative and time-dependent dynamical systems, vector fields, Lagrangian/Hamiltonian mechanics) and practical refinements (e.g., bias-cancelled estimators, continual and local updates, scalable deep architectures). It unifies diverse perturbative learning schemes for dynamical systems under a principled framework, retains a strictly local two-phase update, and provides rigorous connections to backpropagation and physical implementability.

1. Mathematical Framework of Generalized Equilibrium Propagation

Generalized Equilibrium Propagation extends the canonical two-phase learning procedure to a dynamical system described by a state variable $s \in \mathbb{R}^n$ and parameters $\theta$, governed by an energy (or, more broadly, a scalar potential or vector field) $\Phi(x, s, \theta)$:

  • Free phase: System relaxes to equilibrium according to

$$s_{t+1} = \frac{\partial \Phi}{\partial s}(x, s_t, \theta)$$

until convergence to a fixed point $s_* = \frac{\partial \Phi}{\partial s}(x, s_*, \theta)$.

  • Nudged phase: A weak output error is imposed via a small parameter $\beta$, producing the perturbed dynamics

$$s_{t+1}^{\beta} = \frac{\partial \Phi}{\partial s}(x, s_t^{\beta}, \theta) - \beta \frac{\partial \ell}{\partial s}(\hat{y}_t^{\beta}, y)$$

leading to the nudged fixed point $s_*^{\beta}$.

  • Parameter update: The standard EP update is

$$\Delta\theta = \frac{1}{\beta}\left[\frac{\partial \Phi}{\partial \theta}(x, s_*^{\beta}, \theta) - \frac{\partial \Phi}{\partial \theta}(x, s_*, \theta)\right]$$

where $\ell(\hat{y}, y)$ is the output loss.
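As a concrete illustration, the two-phase procedure can be sketched on a hypothetical scalar toy model (quadratic energy with gradient-flow relaxation rather than the fixed-point map above; all names and numbers below are our own, not from the cited papers):

```python
# Toy two-phase EP on a hypothetical scalar model (illustration only):
# energy E(s; w) = 0.5*s^2 - w*x*s, output loss l(s, y) = 0.5*(s - y)^2.
def relax(w, x, y, beta, steps=2000, eps=0.1):
    """Gradient-flow relaxation of E + beta*l to an approximate equilibrium."""
    s = 0.0
    for _ in range(steps):
        s -= eps * ((s - w * x) + beta * (s - y))  # dE/ds + beta * dl/ds
    return s

def ep_gradient(w, x, y, beta=0.01):
    s_free = relax(w, x, y, beta=0.0)     # free phase: s_free -> w*x
    s_nudged = relax(w, x, y, beta=beta)  # nudged phase
    # Contrastive estimate: (1/beta) * [dE/dw(nudged) - dE/dw(free)], dE/dw = -x*s
    return ((-x * s_nudged) - (-x * s_free)) / beta

w, x, y = 0.5, 1.0, 1.0
g_ep = ep_gradient(w, x, y)
g_true = (w * x - y) * x  # exact dl/dw at the free equilibrium s* = w*x
print(g_ep, g_true)       # agree up to the O(beta) finite-nudge bias
```

For this quadratic toy the nudged equilibrium is available in closed form, so one can verify by hand that the contrastive estimate deviates from the exact gradient by a term proportional to $\beta$.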

Generalization occurs at several levels:

  • Bias correction: Use symmetric-difference estimators to suppress the $O(\beta)$ finite-nudge bias,

$$\Delta\theta = \frac{1}{2\beta}\left[\frac{\partial \Phi}{\partial \theta}(x, s_*^{\beta}, \theta) - \frac{\partial \Phi}{\partial \theta}(x, s_*^{-\beta}, \theta)\right]$$

yielding $O(\beta^2)$-accurate gradients (Laborieux et al., 2020, Laborieux et al., 2021).

2. Algorithmic Structure and Scalability

Symmetric Nudging and Bias Cancellation

Finite-nudge estimators ($\beta$ finite rather than infinitesimal) are subject to $O(\beta)$ bias, which is significant in deep or highly nonlinear networks. The symmetric-difference estimator cancels this bias:

$$\widehat{\nabla}^{\mathrm{EP_{sym}}}(\beta) = \frac{1}{2\beta}\left(\partial_\theta \Phi(x, s_*^{\beta}, \theta) - \partial_\theta \Phi(x, s_*^{-\beta}, \theta)\right)$$

This estimator is second-order accurate in $\beta$ and allows scaling EP to deep convolutional architectures with performance approaching Backpropagation Through Time (BPTT) (Laborieux et al., 2021, Laborieux et al., 2020).
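The bias cancellation is easy to verify numerically on a hypothetical scalar model whose nudged equilibrium has a closed form (an illustrative construction of ours, not taken from the cited papers):

```python
# Toy quadratic energy E(s; w) = 0.5*s^2 - w*x*s with loss 0.5*(s - y)^2;
# hypothetical scalar example contrasting O(beta) vs O(beta^2) estimator bias.
def equilibrium(w, x, y, beta):
    # Closed-form minimizer of E + beta*loss: s*(beta) = (w*x + beta*y)/(1 + beta)
    return (w * x + beta * y) / (1.0 + beta)

def dE_dw(s, x):
    return -x * s

def one_sided(w, x, y, beta):
    return (dE_dw(equilibrium(w, x, y, beta), x)
            - dE_dw(equilibrium(w, x, y, 0.0), x)) / beta

def symmetric(w, x, y, beta):
    return (dE_dw(equilibrium(w, x, y, beta), x)
            - dE_dw(equilibrium(w, x, y, -beta), x)) / (2 * beta)

w, x, y, beta = 0.5, 1.0, 1.0, 0.1
g_true = (w * x - y) * x                            # exact dl/dw = -0.5
err_one = abs(one_sided(w, x, y, beta) - g_true)    # O(beta) bias
err_sym = abs(symmetric(w, x, y, beta) - g_true)    # O(beta^2) bias
print(err_one, err_sym)
```

Here the one-sided error scales like $\beta$ while the symmetric error scales like $\beta^2$, so at $\beta = 0.1$ the symmetric estimator is roughly an order of magnitude more accurate.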

Continual and Local Learning

Continual Equilibrium Propagation (C-EP) updates weights concurrently with neural state updates during the nudged phase:

$$\theta_{t+1}^{\beta,\eta} = \theta_t^{\beta,\eta} + \frac{\eta}{\beta}\left[\frac{\partial \Phi}{\partial \theta}(x, s_{t+1}^{\beta,\eta}, \theta_t^{\beta,\eta}) - \frac{\partial \Phi}{\partial \theta}(x, s_t^{\beta,\eta}, \theta_t^{\beta,\eta})\right]$$

This enables full time-locality: synaptic adjustments require only presently available local variables, with no need to store free-phase activities (Ernoult et al., 2020).
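The telescoping structure behind C-EP can be sketched on a hypothetical scalar toy model (weights frozen during the nudged phase for clarity; true C-EP also applies each increment to $\theta$ immediately, which this simplification omits):

```python
# C-EP sketch (hypothetical toy): energy E(s; w) = 0.5*s^2 - w*x*s,
# loss 0.5*(s - y)^2, nudged relaxation by gradient flow.
def relax(w, x, target, beta, s0=0.0, steps=2000, eps=0.1):
    """Gradient-flow relaxation of E + beta*loss; records the trajectory."""
    s, traj = s0, [s0]
    for _ in range(steps):
        s -= eps * ((s - w * x) + beta * (s - target))
        traj.append(s)
    return traj

w, x, y, beta, eta = 0.5, 1.0, 1.0, 0.01, 1e-3
s_free = relax(w, x, y, beta=0.0)[-1]           # free-phase equilibrium
traj = relax(w, x, y, beta, s0=s_free, steps=500)  # nudged trajectory

# C-EP: a local weight increment at every nudged step (dE/dw = -x*s)
dws = [(eta / beta) * ((-x * s1) - (-x * s0))
       for s0, s1 in zip(traj, traj[1:])]
total = sum(dws)

# Standard two-phase EP increment for comparison
two_phase = (eta / beta) * ((-x * traj[-1]) - (-x * s_free))
print(total, two_phase)  # the per-step increments telescope to the two-phase update
```

Because each increment depends only on consecutive states, their sum telescopes exactly to the two-phase contrastive update, which is the mechanism that makes C-EP time-local.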

Handling Weight Asymmetry

Generalized EP handles networks with asymmetric weights using augmented or corrected dynamics. Bias from non-reciprocal Jacobians is systematically penalized or compensated, e.g., via a "Jacobian-homeostasis" term that drives the Jacobian toward symmetry, reducing the bias between the EP "neuronal error vector" and the backprop error (Laborieux et al., 2023, Scurria et al., 3 Feb 2026).

3. Extensions to Broader Dynamical Regimes

Vector Field and Non-Conservative Systems

Generalized EP includes systems where dynamics are not derivable from an energy:

$$\dot{s} = F(s, \theta, x)$$

The two-phase update (free and nudged phases) still provides a local learning signal, and its deviation from the true gradient is analytically controlled by the antisymmetry of the Jacobian $J_F = \partial F / \partial s$. Exact gradient descent is recovered with antisymmetric corrections in the learning phase or by embedding the system in an augmented "Dyadic" space (Scellier et al., 2018, Scurria et al., 3 Feb 2026).
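The role of Jacobian antisymmetry can be checked numerically on a hypothetical linear vector field $\dot{s} = -As + Wx$ (our own construction, not from the cited papers): the vector-field EP estimate matches the exact gradient when $A$ is symmetric and acquires a bias when an antisymmetric part is added.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=2)          # input
y = rng.normal(size=n)          # target
W = rng.normal(size=(n, 2))     # trained parameters

def ep_error(A, beta=1e-4):
    """Gap between the vector-field EP estimate and the exact gradient of
    l = 0.5*||s0 - y||^2 w.r.t. W, for equilibria of ds/dt = -A s + W x."""
    s0 = np.linalg.solve(A, W @ x)                                 # free equilibrium
    sb = np.linalg.solve(A + beta * np.eye(n), W @ x + beta * y)   # nudged equilibrium
    g_ep = -np.outer((sb - s0) / beta, x)                # EP estimate of dl/dW
    g_true = np.outer(np.linalg.solve(A.T, s0 - y), x)   # exact dl/dW
    return np.linalg.norm(g_ep - g_true)

M = rng.normal(size=(n, n))
A_sym = M @ M.T + n * np.eye(n)      # symmetric: EP is exact up to O(beta)
A_asym = A_sym + 0.5 * (M - M.T)     # antisymmetric part J_F - J_F^T induces bias
e_sym, e_asym = ep_error(A_sym), ep_error(A_asym)
print(e_sym, e_asym)                 # small vs. order-one deviation
```

The residual gap in the asymmetric case is exactly the quantity that antisymmetric corrections or the Dyadic embedding are designed to remove.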

Lagrangian and Hamiltonian Dynamical Systems

Generalized Lagrangian Equilibrium Propagation (GLEP) and its Hamiltonian Echo Learning (HEL) equivalent enable application to systems governed by Lagrangians, including those with time-varying inputs or boundary conditions:

  • Learning is performed by extremizing an action $S[q] = \int L(q, \dot{q}, t)\,dt$ and comparing the response to weakly nudged cost terms.
  • The learning gradient can be written as a finite-difference of on-shell action derivatives or as difference integrals of the Hamiltonian/Lagrangian with respect to parameters (Massar, 12 May 2025, Pourcel et al., 6 Jun 2025).
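Schematically (in our own notation, a sketch rather than the exact expressions of the cited works), the finite-difference form of the GLEP gradient reads

$$\frac{dC}{d\theta} \;\approx\; \frac{1}{\beta}\left[\frac{\partial S_\beta}{\partial \theta}\bigg|_{\text{on-shell}} - \frac{\partial S_0}{\partial \theta}\bigg|_{\text{on-shell}}\right], \qquad S_\beta[q] = \int \big(L(q,\dot{q},t) + \beta\, c(q,t)\big)\,dt,$$

where $S_\beta$ is the action augmented by a weak cost density $c$, and both derivatives are evaluated on the extremal (on-shell) trajectories of the corresponding actions.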

Oscillator, Physical, and Mixed-State Networks

EP has been generalized to encompass oscillator networks (e.g., Kuramoto models), resistive/memristive physical substrates, and driven-dissipative or quantum systems, provided a Lyapunov-like structure or local energy exists at least approximately (Rageau et al., 16 Apr 2025, Lin et al., 3 Feb 2026, Sajnok et al., 17 Oct 2025, Massar et al., 2024):

  • In oscillator networks, phase energy and amplitude-phase energy serve as Lyapunov functions, and the two-phase protocol optimizes both task objective and synchronization in the presence of frequency dispersion.
  • In physical resistive networks, the GEP framework yields both contrastive (classical EP-style) and exact analytical (projector-based) gradient estimators using only local electrical measurements.

4. Practical Performance and Empirical Results

  • Deep vision architectures: With symmetric-difference correction and cross-entropy loss, GEP achieves a test error of 11.7% on CIFAR-10 in deep ConvNets (nearly matching BPTT at 11.1%), drastically outperforming naive one-sided EP (86% error) (Laborieux et al., 2020, Laborieux et al., 2021).
  • Residual and deep networks: Hopfield-Resnet architectures using clipped ReLU and skip connections scale GEP to 13 layers on CIFAR-10 (93.9% accuracy), closing the gap with backprop-trained ResNets (P et al., 30 Sep 2025).
  • Sequence and attention models: GEP combined with energy-based attention mechanisms (e.g., Hopfield layers) extends EP to sequence learning, enabling competitive performance in NLP benchmarks (Bal et al., 2022).
  • Oscillator and physical hardware: GEP-trained oscillator networks achieve 97.8% accuracy on MNIST, robust to noise and frequency disorder (Rageau et al., 16 Apr 2025). Resistive network simulators show matched or improved stability using projector-based (analytical) versus classical contrastive updating (Lin et al., 3 Feb 2026).
  • Non-conservative/weight-asymmetric settings: Asymmetric EP outperforms vector-field and naive EP in feedforward/asymmetric Hopfield networks, maintaining test accuracy above 92–95% even with strong asymmetry (Scurria et al., 3 Feb 2026, Laborieux et al., 2023).

5. Theoretical Properties and Convergence

  • Gradient consistency and BPTT equivalence: In the small-nudge limit $\beta \to 0$ and under (approximate) weight symmetry, GEP local updates converge to the true gradient as computed by BPTT (Laborieux et al., 2020, Laborieux et al., 2021, Ernoult et al., 2020).
  • Bias control: Centered estimators and holomorphic (complex-nudge) EP permit explicit control and removal of estimator bias, even in non-symmetric systems (Laborieux et al., 2023).
  • Thermal, stochastic, and quantum regimes: GEP at finite temperature and/or in quantum settings yields exact objective gradients as path integrals (covariances) or free-energy differences between the free and nudged Gibbs distributions; this enables robust learning with finite and even strong nudges, without the small-$\beta$ approximation (Litman, 27 Nov 2025, Massar et al., 2024).
  • Condition for validity: For all instances, differentiability of the governing dynamics and existence/uniqueness of free/nudged equilibria or extremal trajectories is required for the cross-derivative argument to hold.

6. Architectural and Application Scope

| Setting | Governing Rule | Empirical Results |
|---|---|---|
| Deep ConvNets | Discrete symmetric-nudge GEP | 11.7% error on CIFAR-10, matches BPTT |
| Residual/skip architectures | Hopfield-ResNet + clipped ReLU + GEP | 93.9% CIFAR-10, 71.1% CIFAR-100 |
| Convergent RNNs + attention | EP with modern Hopfield layer | SOTA in biologically plausible NLP |
| Asymmetric/non-conservative | AEP, Dyadic-EP | >92% MNIST with high asymmetry |
| Physical/resistive networks | Projector-based local analytical GEP | >90% breast cancer, stable updates |
| Oscillator networks | Kuramoto/amplitude-phase GEP | 97.8% MNIST, robust to disorder |

GEP enables local, scalable, and physically implementable training across all these domains (Laborieux et al., 2020, Laborieux et al., 2021, Ernoult et al., 2020, P et al., 30 Sep 2025, Bal et al., 2022, Lin et al., 3 Feb 2026, Scurria et al., 3 Feb 2026, Rageau et al., 16 Apr 2025, Litman, 27 Nov 2025, Laborieux et al., 2023).

7. Significance and Future Directions

Generalized Equilibrium Propagation unifies contrastive two-phase Hebbian learning, Backpropagation Through Time, and a spectrum of energy-based and vector-field approaches under a common framework. It provides:

  • Rigorous local learning for deep, recurrent, and time-varying neural dynamical systems.
  • Hardware compatibility for neuromorphic, analog, and emerging computing substrates.
  • Systematic bias control and asymmetry compensation protocols for both simulation and physical implementation environments.
  • Scalable, high-performance alternatives to backpropagation for supervised, temporal, and physically embedded learning tasks.

Ongoing research continues to expand GEP's algorithmic and physical reach, with particular focus on robust analog hardware realization, stochastic and quantum regimes, and large-scale, non-reciprocal systems (Litman, 27 Nov 2025, Pourcel et al., 6 Jun 2025, Massar et al., 2024, Lin et al., 3 Feb 2026, Scurria et al., 3 Feb 2026).
