Approximating Optimal Local Losses L_k^* for EAGLE

Develop more elaborate strategies to approximate the optimal local losses L_k^* required by the EAGLE algorithm's loss-gap parity regularization in federated learning. For each client k, L_k^* is defined as the minimum of the client's empirical loss L_k over the shared hypothesis space. The need is most acute in settings with nonconvex models, where simple heuristic convergence checks may fail to reach the true optimum.

Background

EAGLE regularizes the standard federated objective with a term that penalizes the variance of the client loss gaps r_k(θ) = L_k(θ) − L_k^*, which requires knowledge of the optimal local loss L_k^* for each client. In practice, these optimal local losses are not directly available.

The authors approximate L_k^* by running local training with a stopping criterion and note that this heuristic is not guaranteed to reach the true optimum, especially in nonconvex settings. They explicitly leave the design of more elaborate approximation strategies as future work, highlighting an unresolved methodological need for accurate and practical estimation of L_k^*.
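A minimal sketch of such a heuristic follows. The patience-based early-stopping rule, the plain gradient-descent loop, and all function names are illustrative assumptions rather than the authors' actual procedure:

```python
import numpy as np

def approximate_L_star(grad, loss, theta0, lr=0.1, patience=20,
                       tol=1e-6, max_steps=10_000):
    """Estimate L_k^* = min_theta L_k(theta) by gradient descent on a
    client's local loss, stopping once `patience` consecutive steps fail
    to improve the best loss seen by more than `tol`.

    In nonconvex settings this yields only an upper bound on the true
    optimum: the descent may stall at a local minimum or saddle point.
    """
    theta = np.asarray(theta0, dtype=float)
    best = loss(theta)
    stall = 0
    for _ in range(max_steps):
        theta = theta - lr * grad(theta)
        cur = loss(theta)
        if best - cur > tol:
            best, stall = cur, 0  # meaningful improvement: reset patience
        else:
            stall += 1
            if stall >= patience:
                break
    return best

# Toy client loss L_k(theta) = ||theta - c||^2 + 0.25, with true L_k^* = 0.25.
c = np.array([1.0, -2.0])
loss = lambda th: float(np.sum((th - c) ** 2) + 0.25)
grad = lambda th: 2.0 * (th - c)
print(round(approximate_L_star(grad, loss, np.zeros(2)), 3))  # ~0.25
```

On this convex toy problem the estimate matches the true optimum; the open question posed above is precisely how to make such estimates reliable when the loss landscape is nonconvex and a single descent run can terminate far above L_k^*.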

References

We leave more elaborate strategies to approximate $L_k^*$ for future work.

Loss Gap Parity for Fairness in Heterogeneous Federated Learning (2603.29818 - Erraji et al., 31 Mar 2026), Appendix, Additional Experimental Details and Results, Subsection "Approximation of L_k^*"