
Federated Unlearning for FL Systems

Updated 6 February 2026
  • Federated Unlearning is a set of protocols that enable the selective removal of data from models trained across distributed federated learning systems.
  • FU methods operate at client, sample, and class granularity, supporting compliance with privacy regulations such as GDPR and CCPA and helping mitigate backdoor poisoning attacks.
  • Key evaluation metrics include the epsilon-unlearning guarantee, accuracy drop (ΔA), and efficiency gains compared to retraining models from scratch.

Federated Unlearning (FU) is a collection of methodologies and protocols developed to enable the selective removal of data—at the sample, class, or client level—from models trained in distributed federated learning (FL) systems. FU formalizes the “right to be forgotten” in FL contexts, where a central server and multiple clients collaboratively train a model without exchanging raw data, by allowing clients to request that previously contributed data be “forgotten” such that the resulting model is statistically indistinguishable from one retrained from scratch on the remaining data (Zhao et al., 2023). This capability is central to compliance with data-protection regulations such as the EU GDPR and the California CCPA, as well as to mitigating backdoor poisoning, correcting fairness issues, and removing sensitive features. FU faces unique technical, system, and regulatory challenges due to the privacy-preserving and distributed nature of FL.

1. Foundations and Formal Definitions

Federated Unlearning generalizes the notion of machine unlearning from centralized machine learning to FL, addressing the distributed-data setting. The central goal is, given a global model θ trained on data that includes a set S to be unlearned, to produce an updated global model θ^u such that

Pr(M_{θ*}(D ∖ S)) ≈ Pr(M_{θ^u}(D)),

where θ* is trained from scratch without S and θ^u is generated by an efficient FU protocol (Zhao et al., 2023). FU requests can be made at multiple granularities:

  • Client-Level: Completely removing one or more clients' contributions.
  • Sample-Level: Erasing specific data points within a client's local dataset.
  • Class-Level: Targeting all examples of a particular class system-wide.
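
These three granularities correspond to three shapes of forget request a client might send to the server. The following sketch (all names are hypothetical, not drawn from any cited paper) shows one way to represent them:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Granularity(Enum):
    CLIENT = "client"   # remove one or more clients' entire contributions
    SAMPLE = "sample"   # erase specific data points within a client's dataset
    CLASS = "class"     # remove all examples of a particular class system-wide

@dataclass
class UnlearningRequest:
    """A forget request sent to the FL server (schema is illustrative)."""
    granularity: Granularity
    client_id: Optional[int] = None          # set for CLIENT and SAMPLE requests
    sample_ids: Optional[List[int]] = None   # set for SAMPLE requests
    class_label: Optional[int] = None        # set for CLASS requests
```

A request therefore carries only the identifiers the server needs to locate the affected contributions, without exposing any raw data.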

In the standard FL context, K clients each hold local datasets D_k and collaboratively update the global model via rounds in which the server broadcasts parameters, clients perform local updates, and the server aggregates the results (Zhao et al., 2023, Romandini et al., 2024).
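
This round structure can be sketched as follows. Here `local_update` is only a stand-in for a client's local training (a single least-squares gradient step for concreteness), and the server's weighted average mirrors FedAvg-style aggregation; none of these names come from the cited papers.

```python
import numpy as np

def local_update(theta, data, lr=0.1):
    # Stand-in for a client's local training: one gradient step
    # on a least-squares objective over the client's (X, y) data.
    X, y = data
    grad = X.T @ (X @ theta - y) / len(y)
    return theta - lr * grad

def fl_round(theta, client_data):
    # One FL round: the server broadcasts theta, each client computes a
    # local update, and the server aggregates with a weighted average
    # (weights proportional to local dataset size, as in FedAvg).
    updates = [local_update(theta.copy(), d) for d in client_data]
    w = np.array([len(y) for _, y in client_data], dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

Unlearning a client then amounts to producing a model whose distribution matches what repeated `fl_round` calls would have yielded had that client's `(X, y)` pair never been in `client_data`.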

The design and evaluation of FU methods center on three primary dimensions:

  • Privacy/Unlearning Guarantee: Quantified by an ε-unlearning guarantee; ideally, the log-likelihood ratio between pre- and post-unlearning models is bounded by ε.
  • Accuracy/Utility: The difference in test accuracy or task loss before and after unlearning, ΔA = A_before − A_after.
  • Efficiency: Computational, storage, and communication overhead compared to (re)training from scratch (Zhao et al., 2023, Romandini et al., 2024).
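
As a toy illustration of these three dimensions (the function names and the simplified single-probability ε check are assumptions for this sketch, not a standard API):

```python
import math

def accuracy_drop(acc_before, acc_after):
    # Utility cost of unlearning: ΔA = A_before − A_after.
    return acc_before - acc_after

def retrain_speedup(t_retrain_from_scratch, t_unlearn):
    # Efficiency: how many times faster the FU protocol is
    # than retraining the global model from scratch.
    return t_retrain_from_scratch / t_unlearn

def satisfies_epsilon_unlearning(p_retrained, p_unlearned, eps):
    # Simplified ε check on a single output: the log-likelihood ratio
    # between the probability assigned by a retrained-from-scratch model
    # and by the unlearned model should be bounded by ε.
    return abs(math.log(p_retrained / p_unlearned)) <= eps
```

In practice the ε guarantee is stated over the full model distributions rather than a single output probability, but the bounded log-ratio is the quantity being controlled.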

2. Taxonomy of Federated Unlearning Methodologies

FU strategies
