Individual Differential Privacy (iDP)

Updated 26 January 2026
  • Individual Differential Privacy (iDP) is a framework that redefines privacy by measuring protection on a per-individual basis using local sensitivity rather than worst-case global bounds.
  • It calibrates noise to the actual data, using mechanisms such as the Laplace mechanism with local-sensitivity scaling and preprocessing such as microaggregation, substantially enhancing utility compared to standard differential privacy.
  • iDP finds applications in machine and federated learning, where individual privacy budgets improve model accuracy but require careful mechanism design to counteract inference attacks.

Individual Differential Privacy (iDP) is an alternative formalization of differential privacy that shifts the quantification of privacy loss from a worst-case (over all possible neighboring datasets) to a data-specific or per-individual perspective. By relaxing the symmetry and universality of standard differential privacy (DP), iDP enables mechanisms to calibrate noise to the actual data or even to each participant's own privacy requirements, in many cases dramatically improving utility without weakening the individual-level guarantee. However, this relaxation brings subtle vulnerabilities and necessitates careful mechanism design, auditing, and deployment strategies to uphold protection against inference attacks.

1. Formal Definitions and Core Distinctions

Standard ε-differential privacy (ε-DP) requires that, for any two neighboring datasets $D_1, D_2$ (differing in one individual) and any measurable event $S$,

$$\Pr[\kappa(D_1) \in S] \leq e^{\epsilon} \Pr[\kappa(D_2) \in S]$$

where $\kappa$ is the randomized release mechanism. This constraint is symmetric and applies to all possible neighbors, yielding a worst-case global guarantee (Soria-Comas et al., 2016).

By contrast, ε-individual differential privacy (ε-iDP) for a fixed dataset $D$ only compares $D$ with its immediate neighbors $D'$ (differing in one record), requiring

$$e^{-\epsilon} \Pr[\kappa(D') \in S] \leq \Pr[\kappa(D) \in S] \leq e^{\epsilon} \Pr[\kappa(D') \in S]$$

for all $S$ and each $D'$. The neighbor relation is “one-sided”: all neighborhoods revolve around the actual data $D$ (Soria-Comas et al., 2023, Soria-Comas et al., 2016, Protivash et al., 2022). This allows the calibration of noise to the local sensitivity at $D$:

$$LS_f(D) = \max_{D' : d(D, D') = 1} \|f(D) - f(D')\|_1$$

rather than the global sensitivity $\Delta f = \max_{D, D'} \|f(D) - f(D')\|_1$ (maximized over all pairs of neighboring datasets) required by standard DP.
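
To make the gap concrete, the following sketch (illustrative code, not from the cited papers) compares the local sensitivity of the sample median at a particular dataset with its global bound on a bounded domain; for the median, replacing one record is worst-case when it is pushed to a domain endpoint.

```python
import numpy as np

def local_sensitivity_median(data, lo=0.0, hi=100.0):
    """Local sensitivity of the median at this particular dataset:
    the largest change in the median caused by replacing one record."""
    x = np.sort(np.asarray(data, dtype=float))
    m = np.median(x)
    worst = 0.0
    for i in range(len(x)):
        for v in (lo, hi):  # the median is monotone per record, so
            y = x.copy()    # domain endpoints are the worst replacements
            y[i] = v
            worst = max(worst, abs(np.median(y) - m))
    return worst

# Global sensitivity of the median on [0, 100] is the full range 100:
# a worst-case dataset split between the endpoints lets one replaced
# record move the median from 0 to 100.
data = [47, 49, 50, 51, 53, 55, 58]
print(local_sensitivity_median(data))  # 2.0: the median barely moves here
```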

Recent generalizations of iDP encompass per-instance DP, where the privacy guarantee is measured for a specific individual and dataset:

$$\Pr[M(D) \in S] \leq e^{\epsilon_i(D)} \Pr[M(D_{-i} \cup \{x'_i\}) \in S] + \delta_i(D)$$

where $D_{-i} \cup \{x'_i\}$ is the neighbor formed by replacing $x_i$ in $D$ with $x'_i$ (Wang, 2017).

iDP further subsumes “personalized DP,” where each participant specifies their own $(\epsilon_i, \delta_i)$ and the guarantee applies only to their data (Boenisch et al., 2022, Boenisch et al., 2023).

2. Mechanisms and Sensitivity Calibration

Local Sensitivity Mechanisms: The archetypal iDP mechanism is the Laplace (or Gaussian) mechanism, but with scale calibrated to $LS_f(D)$:

$$\text{Output: } f(D) + (N_1, \ldots, N_k), \quad N_j \sim \mathrm{Laplace}(0, LS_f(D)/\epsilon)$$

By bounding the change $f(D) - f(D')$ only for neighbors of $D$, the variance of the added noise can be dramatically reduced, greatly enhancing utility for functions (e.g., medians, order statistics) where local sensitivity is much lower than global sensitivity (Soria-Comas et al., 2016, Protivash et al., 2022, Soria-Comas et al., 2023).
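
A minimal sketch of this release step, assuming the local sensitivity has already been computed (the function name and interface are illustrative):

```python
import numpy as np

def idp_laplace_release(f_value, local_sensitivity, epsilon, rng=None):
    """Release f(D) with Laplace noise calibrated to the local
    sensitivity LS_f(D) instead of the global sensitivity Δf.
    Caution: since LS_f(D) is data-dependent, the noise scale itself
    can leak information (see Section 6)."""
    rng = rng or np.random.default_rng()
    value = np.atleast_1d(np.asarray(f_value, dtype=float))
    return value + rng.laplace(0.0, local_sensitivity / epsilon, size=value.shape)

# Releasing the median above under ε = 0.5: with LS = 2 the noise scale
# is 4, whereas the global-sensitivity scale would be 100 / 0.5 = 200.
noisy_median = idp_laplace_release(51.0, local_sensitivity=2.0, epsilon=0.5)
```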

Microaggregation-Based Preprocessing: For tabular data, sensitivity can be further lowered by microaggregation, which groups records into clusters, replaces them by centroids, and then applies noise only at the centroid level (Soria-Comas et al., 2023). The local sensitivity of a centroid $c_j^a$ (for attribute $A^a$) is

$$LS_{c_j^a}(D) = \frac{\max\{\max A^a - \min_{x \in C_j^a} x^a,\ \max_{x \in C_j^a} x^a - \min A^a\}}{|C_j^a|}$$

and can be sharpened by careful replacement of extreme values inside clusters (cluster-based local sensitivity).
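
The following univariate sketch combines both steps; it is a simplified reading of the approach (cluster formation, the handling of undersized trailing clusters, and cluster-based sharpening are all simplified here):

```python
import numpy as np

def microagg_idp_release(values, k, epsilon, attr_min, attr_max, rng=None):
    """Sort records, group them into clusters of size k, replace each
    cluster by its centroid, and mask every centroid with Laplace noise
    scaled to that centroid's local sensitivity (formula above)."""
    rng = rng or np.random.default_rng()
    x = np.sort(np.asarray(values, dtype=float))
    release = []
    for start in range(0, len(x), k):
        cluster = x[start:start + k]
        # Moving one in-cluster record to a domain extreme changes the
        # centroid (mean) by at most this amount:
        ls = max(attr_max - cluster.min(), cluster.max() - attr_min) / len(cluster)
        release.append(cluster.mean() + rng.laplace(0.0, ls / epsilon))
    return np.array(release)
```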

Feature-Level and Per-Instance Sensitivity: Individual- and feature-specific privacy analysis is possible via computation of the per-individual sensitivity $\Delta_i(f)$ (Cummings et al., 2018) and partial sensitivity measures such as

$$\Delta_p^j(f)(x) = \frac{\partial f / \partial x_j}{\|\nabla f(x)\|_2}$$

yielding allocations of privacy risk across input dimensions (Mueller et al., 2021).
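
A small numerical sketch of this allocation, estimating the gradient by central differences (function names are illustrative):

```python
import numpy as np

def partial_sensitivity(f, x, h=1e-6):
    """Per-dimension privacy-risk allocation: each component of the
    gradient of f at x, normalized by the gradient's L2 norm."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        grad[j] = (f(x + e) - f(x - e)) / (2 * h)  # central difference
    return grad / np.linalg.norm(grad)

# A weighted sum leaks most of its privacy risk through the largest weight:
f = lambda v: 3.0 * v[0] + 1.0 * v[1]
print(partial_sensitivity(f, [0.5, 0.5]))  # ≈ [0.949, 0.316]
```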

3. Applications: Machine Learning, Federated Learning, Data Release

Differentially Private Learning with iDP: Classic DP training (uniform ε) constrains the entire system to the most privacy-sensitive participant. iDP relaxes this by assigning per-user (or per-group) budgets $(\epsilon_i, \delta)$, permitting, e.g., adaptive sampling rates (SAMPLE), individualized clipping/noise scales (SCALE), and model aggregation weights (in PATE) matched to each participant's privacy budget (Boenisch et al., 2023, Boenisch et al., 2022, Lange et al., 29 Jan 2025).
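
One aggregation step in the spirit of the SCALE variant might look as follows; the interface and the choice to scale the noise by the largest clip bound are assumptions of this sketch, and the accompanying privacy accounting is omitted:

```python
import numpy as np

def individualized_clip_aggregate(per_user_grads, clip_bounds, sigma, rng=None):
    """Clip each user's gradient to an individual bound c_i (a more
    relaxed budget permits a larger c_i, hence less distortion), then
    sum and mask with Gaussian noise."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(per_user_grads[0])
    for g, c in zip(per_user_grads, clip_bounds):
        norm = np.linalg.norm(g)
        if norm > 0:
            total += g * min(1.0, c / norm)  # enforce ||g|| <= c_i
    # Noise calibrated to the largest bound; users with smaller c_i then
    # enjoy a proportionally stronger guarantee.
    return total + rng.normal(0.0, sigma * max(clip_bounds), size=total.shape)
```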

Federated Learning: iDP is integrated into FL protocols by setting per-client sampling probabilities $q_i$ (with fixed per-round noise) such that each client $i$ satisfies $(\epsilon_i, \delta)$ after $I$ rounds (Lange et al., 29 Jan 2025). The privacy accounting leverages moments-based composition and per-group assignments, yielding higher utility by sampling more often from users with relaxed privacy preferences.
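
A toy version of the per-round sampling step is sketched below; the linear mapping from budgets to sampling probabilities is a placeholder for the moments-accountant inversion used in the actual protocol:

```python
import numpy as np

def sample_clients(epsilons, base_rate, rng=None):
    """Sample clients for one round with probability q_i increasing in
    the client's budget ε_i, keeping the per-round noise fixed."""
    rng = rng or np.random.default_rng()
    eps = np.asarray(epsilons, dtype=float)
    q = np.clip(base_rate * eps / eps.min(), 0.0, 1.0)
    return [i for i in range(len(q)) if rng.random() < q[i]]

# Clients with relaxed budgets are selected in more rounds on average.
print(sample_clients([1.0, 2.0, 8.0], base_rate=0.05))
```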

Label-Only and Embedding Release: iDP allows deployment of neural classifiers that identify input regions (“iDP-DB”) where deterministic 0-iDP can be certified. For points not certified, noise is selectively injected, preserving nearly full baseline accuracy (Kabaha et al., 23 Feb 2025). Embedding distributions can be protected by iDP-predictable mechanisms such as iDP-SignRP, which randomize only the bits susceptible to privacy leakage under local flips (Li et al., 2023).
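
The selective-randomization idea behind iDP-SignRP can be sketched as follows; the margin test deciding which bits are “susceptible” is a crude stand-in for the paper's analysis:

```python
import numpy as np

def idp_signrp_sketch(x, n_bits, flip_margin, rng=None):
    """Sign random projection with selective randomization: only bits
    whose projection magnitude falls below `flip_margin` (so their sign
    could flip under a neighboring change to x) are randomized; stable
    bits pass through untouched.  In practice the projection matrix W
    must be fixed and shared across queries."""
    rng = rng or np.random.default_rng()
    W = rng.standard_normal((n_bits, len(x)))
    proj = W @ np.asarray(x, dtype=float)
    bits = np.sign(proj)
    unstable = np.abs(proj) < flip_margin
    bits[unstable] = rng.choice([-1.0, 1.0], size=int(unstable.sum()))
    return bits
```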

4. Privacy-Utility Trade-offs and Empirical Performance

By lowering the required noise, iDP mechanisms empirically outperform standard DP in data release (mean/median queries, masked tabular data), deep classifier deployment, image retrieval, and SVM learning. For instance, iDP-protected microaggregated releases yield signal-to-noise ratios orders of magnitude higher than global-sensitivity-masked DP at equivalent ε (Soria-Comas et al., 2023). In FL and iDP-SGD, using individualized budgets boosts test accuracy by 2–5% over uniform-DP baselines on MNIST, SVHN, FMNIST, and CIFAR-10 (Boenisch et al., 2023, Lange et al., 29 Jan 2025). iDP-SignRP achieves near-private accuracy at $\epsilon = 0.1$, far below the $\epsilon > 5$ regime required for traditional methods (Li et al., 2023).

| Mechanism | Privacy Calibration | Utility Benefit (ε ≪ 1) | Limitation |
|---|---|---|---|
| Standard DP | Global sensitivity (Δf) | Very high noise, low utility | Overprotects outliers |
| iDP-Laplace | Local sensitivity LS_f(D) | Orders-of-magnitude lower noise | Vulnerable to reconstruction attacks |
| iDP-Microagg. | Cluster LS / CBLS | Similar to DP at ∼10× lower ε | Requires trust in clustering |
| iDP-SignRP | Sparse flip/noise | Full utility at ε < 0.5 | Fixed-data only |
| Adaptive iDP-SGD | Per-participant accounting | Up to 5% accuracy gain | Budget interactions |

5. Composition, Post-Processing, and Advanced Privacy Accounting

iDP mechanisms inherit key properties of standard DP: post-processing invariance (no further function on releases can inflate privacy loss) and sequential composition (privacy losses add over releases on the same $D$) (Soria-Comas et al., 2016, Wang, 2017, Protivash et al., 2022). Adapted moments and RDP accountants provide exact per-individual privacy tracking even under adaptive composition (e.g., through personalized Rényi filters), enabling conservative yet efficient budget-exhaustion checks and facilitating adaptive release protocols (Feldman et al., 2020, Koskela et al., 2022).
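
Under basic sequential composition, per-individual tracking reduces to bookkeeping; a minimal sketch (the Rényi/moments accountants referenced above give tighter bounds):

```python
class IndividualAccountant:
    """Track each individual's remaining ε under basic sequential
    composition: every release charges its ε to all affected
    individuals, who drop out once their budget is exhausted."""

    def __init__(self, budgets):
        self.remaining = dict(budgets)    # individual id -> remaining ε

    def charge(self, affected_ids, epsilon):
        for i in affected_ids:
            self.remaining[i] -= epsilon  # basic composition: losses add

    def active(self):
        return [i for i, b in self.remaining.items() if b > 0]

acct = IndividualAccountant({"alice": 1.0, "bob": 4.0})
acct.charge(["alice", "bob"], epsilon=0.6)
print(acct.active())  # both still active: 0.4 and 3.4 remaining
```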

6. Limitations, Vulnerabilities, and Mitigation

The principal vulnerability of iDP arises from its “empirical neighbor” model: tuning noise to local sensitivity reveals data-dependent information. Attackers can exploit the detectability of the noise magnitude or the structure of masked queries to reconstruct the underlying dataset at negligible privacy cost; such reconstruction attacks are particularly devastating when the mechanism's output distribution varies deterministically with the local data (Protivash et al., 2022). In federated and centralized machine learning contexts, sampling-based iDP implementations are susceptible to “excess risk” stemming from the coupling of privacy budgets: an individual's risk depends nontrivially on other users' privacy choices, and adversaries may deliberately adjust budget assignments to inflate the susceptibility of targets (Kaiser et al., 19 Jan 2026).

Mitigation strategies include:

  • Imposing joint contracts that add a Δ-divergence upper bound, $(\varepsilon_i, \delta_i, \Delta)$, to cap excess risk (Kaiser et al., 19 Jan 2026).
  • “Smooth sensitivity” wrappers to obscure the data-dependence of noise (Soria-Comas et al., 2016, Protivash et al., 2022).
  • Mechanism-level auditing, e.g., privacy profiles or $f$-DP trade-off visualizations, to detect deviations from the nominal individual guarantee (Kaiser et al., 19 Jan 2026).
  • Restricting the exposure of local sensitivity or noise parameters to outputs.
  • Careful system design—e.g., padding batch size, decoupling noise and sampling rates, and grouping budgets—to constrain the collective effects of heterogeneous policies (Lange et al., 29 Jan 2025, Boenisch et al., 2023).

7. Extensions, Future Directions, and Open Questions

Research on iDP is converging along several axes:

  • Mechanism generalization: Beyond additive mechanisms, extending iDP to exponential mechanisms, complex queries, and compositional data analysis flows (e.g., per-instance GDP/PLD frameworks) (Koskela et al., 2022, Wang, 2017).
  • Feature- and attribute-level iDP: Leveraging partial sensitivity/PLIS for per-feature privacy budgeting or targeted heterogeneous noise (Mueller et al., 2021, Mueller et al., 2022).
  • Personalized DP and privacy markets: Allocating risk/reward per individual for data markets and federated analytics (Boenisch et al., 2022, Cummings et al., 2018).
  • Robust deployment: Interactive interfaces and policy auditing to align individual understanding of privacy with the technical implications of collective budget selection (Kaiser et al., 19 Jan 2026).
  • Security against collusion: Defensive mechanisms and contract design to immunize iDP implementations against adversarial manipulation of peer budgets (Kaiser et al., 19 Jan 2026).

Open problems include principled intermediate neighbor definitions that trade off utility against group privacy, safe release of data-driven noise magnitudes, and full compositional auditing of iDP-based systems in adversarial deployment environments.

