Individual Differential Privacy (iDP)
- Individual Differential Privacy (iDP) is a framework that redefines privacy by measuring protection on a per-individual basis using local sensitivity rather than worst-case global bounds.
- It uses noise calibration methods like Laplace and microaggregation to tailor protection to the actual data, substantially enhancing utility compared to standard differential privacy.
- iDP finds applications in machine learning and federated learning, where individual privacy budgets improve model accuracy but require careful design to counteract inference attacks.
Individual Differential Privacy (iDP) is an alternative formalization of differential privacy that shifts the quantification of privacy loss from a worst-case (over all possible neighboring datasets) to a data-specific or per-individual perspective. By relaxing the symmetry and universality of standard differential privacy (DP), iDP enables mechanisms to calibrate noise to the actual data or even to each participant's own privacy requirements, in many cases dramatically improving utility without weakening the individual-level guarantee. However, this relaxation brings subtle vulnerabilities and necessitates careful mechanism design, auditing, and deployment strategies to uphold protection against inference attacks.
1. Formal Definitions and Core Distinctions
Standard ε-differential privacy (ε-DP) requires that, for any two neighboring datasets $D$ and $D'$ (differing in one individual) and any measurable event $S$,

$$\Pr[\kappa(D) \in S] \le e^{\varepsilon}\, \Pr[\kappa(D') \in S],$$

where $\kappa$ is the randomized release mechanism. This constraint is symmetric and applies to all possible neighbor pairs, yielding a worst-case global guarantee (Soria-Comas et al., 2016).
By contrast, ε-individual differential privacy (ε-iDP) for a fixed dataset $D$ only compares $D$ with its immediate neighbors $D'$ (differing in one record), requiring

$$e^{-\varepsilon} \le \frac{\Pr[\kappa(D) \in S]}{\Pr[\kappa(D') \in S]} \le e^{\varepsilon}$$

for all measurable $S$ and each neighbor $D'$ of $D$. The neighbor relation is "one-sided": all neighborhoods revolve around the actual data (Soria-Comas et al., 2023, Soria-Comas et al., 2016, Protivash et al., 2022). This allows the calibration of noise to the local sensitivity of a query $f$ at $D$,

$$LS_f(D) = \max_{D' \sim D} |f(D) - f(D')|,$$

rather than the global sensitivity $\Delta f = \max_{D, D'} |f(D) - f(D')|$ required by standard DP.
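As a concrete illustration of the gap between the two calibrations, the following sketch (illustrative, not drawn from the cited papers) computes the local sensitivity of the median on a concentrated dataset, where it is far below the global worst case:

```python
import numpy as np

def median_local_sensitivity(data, lo=0.0, hi=100.0):
    """Local sensitivity of the median at this dataset: the largest change
    in the median when one record is replaced by any value in [lo, hi]."""
    data = np.sort(np.asarray(data, dtype=float))
    base = np.median(data)
    worst = 0.0
    for i in range(len(data)):
        for repl in (lo, hi):  # domain extremes maximize the shift
            neighbor = data.copy()
            neighbor[i] = repl
            worst = max(worst, abs(np.median(neighbor) - base))
    return worst

# The global sensitivity of the median on [0, 100] is 100 (consider
# [0, 0, 0, 100, 100] with one 0 replaced by 100), but on a concentrated
# dataset the local sensitivity is tiny:
print(median_local_sensitivity([49, 50, 50, 51, 52]))  # 1.0
```

Calibrating Laplace noise to 1.0 instead of 100 is exactly the utility gain iDP promises on such data.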
Recent generalizations of iDP encompass per-instance DP, where the privacy guarantee is measured for a specific individual $z$ and dataset $D$:

$$\Pr[\kappa(D) \in S] \le e^{\varepsilon(D, z)}\, \Pr[\kappa(D') \in S],$$

where $D'$ is the neighbor formed by replacing $z$'s record in $D$ (Wang, 2017).
iDP further subsumes "personalized DP," where each participant $i$ specifies their own budget $\varepsilon_i$ and the guarantee applies only to their data (Boenisch et al., 2022, Boenisch et al., 2023).
2. Mechanisms and Sensitivity Calibration
Local Sensitivity Mechanisms: The archetypal iDP mechanism is the Laplace (or Gaussian) mechanism, but with scale calibrated to the local sensitivity $LS_f(D)$:

$$\kappa(D) = f(D) + \mathrm{Lap}\!\left(\frac{LS_f(D)}{\varepsilon}\right).$$

By bounding the change only for neighbors of the actual dataset $D$, the variance of the added noise can be dramatically reduced, greatly enhancing utility for functions (e.g., medians, order statistics) whose local sensitivity is much lower than the global sensitivity $\Delta f$ (Soria-Comas et al., 2016, Protivash et al., 2022, Soria-Comas et al., 2023).
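A minimal sketch of the iDP-Laplace calibration, assuming the local sensitivity has already been computed for the actual dataset (function name is illustrative):

```python
import numpy as np

def idp_laplace(value, local_sensitivity, epsilon, rng=None):
    """Release `value` with Laplace noise whose scale is calibrated to the
    dataset's local sensitivity rather than the global sensitivity. This
    is the iDP calibration; under standard DP the scale would instead be
    the global sensitivity divided by epsilon."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(0.0, local_sensitivity / epsilon)

# With LS(D) = 1 and eps = 0.5, the noise scale is 2 -- versus 200 under
# a global sensitivity of 100 on the same query domain.
rng = np.random.default_rng(0)
release = idp_laplace(50.0, local_sensitivity=1.0, epsilon=0.5, rng=rng)
```

Note that naively publishing (or making inferable) the local-sensitivity-dependent scale is the very leak discussed in Section 6.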
Microaggregation-Based Preprocessing: For tabular data, sensitivity can be further lowered by microaggregation, which groups records into clusters, replaces them by centroids, and then applies noise only at the centroid level (Soria-Comas et al., 2023). For a cluster $C$ and an attribute $a$ with domain $[a_{\min}, a_{\max}]$, replacing one record $r \in C$ moves the centroid by at most

$$LS_a(C) = \frac{1}{|C|} \max_{r \in C} \max(a_{\max} - r_a,\; r_a - a_{\min}),$$

and this bound can be sharpened by careful replacement of extreme values inside clusters (cluster-based local sensitivity).
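The microaggregation pipeline can be sketched for a single numeric attribute as follows; the sorting-based clustering and the centroid sensitivity bound are simplifications of the method in Soria-Comas et al. (2023):

```python
import numpy as np

def microaggregate_release(values, k, epsilon, lo, hi, rng=None):
    """Sketch of microaggregation + iDP noise for one numeric attribute:
    sort, group into runs of k, replace each cluster by its mean, and add
    Laplace noise scaled to that cluster's local sensitivity. (A real
    implementation would merge a short final cluster and use a proper
    k-anonymous clustering such as MDAV.)"""
    rng = rng or np.random.default_rng()
    vals = np.sort(np.asarray(values, dtype=float))
    out = np.empty_like(vals)
    for start in range(0, len(vals), k):
        cluster = vals[start:start + k]
        centroid = cluster.mean()
        # Replacing one record r by any value in [lo, hi] shifts the mean
        # by at most max(hi - r, r - lo) / |cluster|.
        ls = max(max(hi - r, r - lo) for r in cluster) / len(cluster)
        out[start:start + k] = centroid + rng.laplace(0.0, ls / epsilon)
    return out
```

Because noise is added once per cluster and the per-cluster sensitivity shrinks with cluster size, the release is far less distorted than record-level masking at the same ε.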
Feature-Level and Per-Instance Sensitivity: Individual- and feature-specific privacy analysis is possible via computation of per-individual sensitivity (Cummings et al., 2018) and partial sensitivity measures that attribute a query's overall sensitivity to individual input features, yielding allocations of privacy risk across input dimensions (Mueller et al., 2021).
3. Applications: Machine Learning, Federated Learning, Data Release
Differentially Private Learning with iDP: Classic DP training (uniform ε) constrains the entire system to the most privacy-sensitive participant. iDP relaxes this by assigning per-user (or per-group) budgets $\varepsilon_i$, permitting, e.g., adaptive sampling rates (SAMPLE), individualized clipping/noise scales (SCALE), and model aggregation weights (in PATE) matched to each participant's privacy budget (Boenisch et al., 2023, Boenisch et al., 2022, Lange et al., 29 Jan 2025).
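A toy sketch of the sensitivity-side individualization, loosely following the SCALE idea (the function name and the single-shared-noise-draw simplification are ours, not the papers'):

```python
import numpy as np

def individualized_noisy_sum(grads, clip_bounds, sigma, rng=None):
    """Each user's gradient is clipped to an individual bound c_i, then a
    single Gaussian noise draw with absolute std `sigma` is added to the
    sum. User i's per-step Gaussian privacy cost scales with c_i / sigma,
    so a looser clip bound spends a larger budget eps_i in exchange for
    less distortion of that user's contribution."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g, c in zip(grads, clip_bounds):
        g = np.asarray(g, dtype=float)
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, c / norm) if norm > 0 else g)
    total = np.sum(clipped, axis=0)
    return total + rng.normal(0.0, sigma, size=total.shape)
```

With uniform DP, every user would be clipped to the tightest bound; here only the most privacy-sensitive users pay that utility cost.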
Federated Learning: iDP is integrated into FL protocols by setting per-client sampling probabilities $q_i$ (with fixed per-round noise) such that each client satisfies its budget $\varepsilon_i$ after $T$ rounds (Lange et al., 29 Jan 2025). The privacy accounting leverages moments-based composition and per-group assignments, yielding higher utility by sampling more often from users with relaxed privacy preferences.
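A deliberately simplified stand-in for the budget-to-sampling-rate mapping; real systems derive the rates $q_i$ from a moments accountant so that each client exactly meets $\varepsilon_i$ after $T$ rounds, whereas direct proportionality is only illustrative:

```python
import numpy as np

def sampling_probabilities(eps, base_rate=0.1):
    """Budget-proportional client sampling (toy version): clients with a
    larger eps_i are sampled more often, with the tightest-budget client
    pinned to `base_rate`. Rates are clipped to valid probabilities."""
    eps = np.asarray(eps, dtype=float)
    q = base_rate * eps / eps.min()
    return np.clip(q, 0.0, 1.0)

q = sampling_probabilities([1.0, 2.0, 8.0])  # q = [0.1, 0.2, 0.8]
```

The qualitative effect matches the cited protocol: per-round noise stays fixed, while participation frequency absorbs the heterogeneity of privacy preferences.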
Label-Only and Embedding Release: iDP allows deployment of neural classifiers that identify input regions ("iDP-DB") where deterministic 0-iDP can be certified. For points not certified, noise is selectively injected, preserving nearly full baseline accuracy (Kabaha et al., 23 Feb 2025). Embedding distributions can be protected by mechanisms such as iDP-SignRP, which randomizes only the bits susceptible to privacy leakage under local flips (Li et al., 2023).
4. Privacy-Utility Trade-offs and Empirical Performance
By lowering the required noise, iDP mechanisms empirically outperform standard DP in data release (mean/median queries, masked tabular data), deep classifier deployment, image retrieval, and SVM learning. For instance, iDP-protected microaggregated releases yield signal-to-noise ratios orders of magnitude higher than global-sensitivity-masked DP at equivalent ε (Soria-Comas et al., 2023). In FL and iDP-SGD, using individualized budgets boosts test accuracy by 2–5% over uniform-DP baselines on MNIST, SVHN, FMNIST, and CIFAR-10 (Boenisch et al., 2023, Lange et al., 29 Jan 2025). iDP-SignRP achieves accuracy near the non-private baseline at ε < 0.5, far below the budget regime required for traditional methods (Li et al., 2023).
| Mechanism | Privacy Calib. | Utility Benefit (ε ≪ 1) | Limitation |
|---|---|---|---|
| Standard DP | Global sensitivity (Δ) | Very high noise, low utility | Overprotects outliers |
| iDP-Laplace | Local sensitivity (LS(D)) | Orders-of-mag. lower noise | Vulnerable to attacks |
| iDP-Microagg. | Cluster LS / CBLS | Similar to DP at ε ∼ 10× lower | Trust in clustering |
| iDP-SignRP | Sparse flip/noise | Full utility at ε < 0.5 | Fixed-data only |
| Adaptive iDP-SGD | Per-participant accounting | Up to 5% accuracy gain | Budget interactions |
5. Composition, Post-Processing, and Advanced Privacy Accounting
iDP mechanisms inherit key properties of standard DP: post-processing invariance (no further function on releases can inflate privacy loss) and sequential composition (privacy losses add over releases computed on the same dataset $D$) (Soria-Comas et al., 2016, Wang, 2017, Protivash et al., 2022). Adapted moments and RDP accountants provide exact per-individual privacy tracking even under adaptive composition (e.g., through personalized Rényi filters), which enable conservative yet efficient budget exhaustion checks and facilitate adaptive release protocols (Feldman et al., 2020, Koskela et al., 2022).
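Per-individual budget tracking can be caricatured with plain ε-summation; the Rényi filter of Feldman et al. (2020) plays the same role with tighter divergence-based accounting, so this class is a conservative stand-in, not the published algorithm:

```python
class IndividualBudgetFilter:
    """Toy per-individual privacy filter: sums each participant's realized
    eps across releases and drops them from further releases once their
    personal budget would be exceeded."""

    def __init__(self, budgets):
        self.budgets = dict(budgets)          # user -> total eps allowed
        self.spent = {u: 0.0 for u in self.budgets}

    def try_charge(self, user, eps):
        """Charge `eps` to `user` if it fits; return whether it did."""
        if self.spent[user] + eps > self.budgets[user]:
            return False                      # user sits this release out
        self.spent[user] += eps
        return True

f = IndividualBudgetFilter({"alice": 1.0, "bob": 3.0})
for _ in range(4):
    f.try_charge("alice", 0.4)
    f.try_charge("bob", 0.4)
# alice participates in 2 of 4 releases (spent 0.8), bob in all 4 (1.6)
```

The filter makes explicit why heterogeneous budgets produce heterogeneous participation, the mechanism that both the utility gains (Section 3) and the coupling vulnerabilities (Section 6) trace back to.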
6. Limitations, Vulnerabilities, and Mitigation
The principal vulnerability of iDP arises from its “empirical neighbor” model: tuning noise to local sensitivity reveals data-dependent information. Attackers can exploit the detectability of noise magnitude or the structure of masked queries to reconstruct the underlying dataset at negligible privacy cost—“reconstruction attacks” are particularly devastating when the mechanism’s output distribution varies deterministically with local data (Protivash et al., 2022). In federated and centralized machine learning contexts, sampling-based iDP implementations are susceptible to “excess risk” stemming from the coupling of privacy budgets: an individual’s risk depends nontrivially on other users’ privacy choices, and adversaries may deliberately adjust budget assignment to inflate the susceptibility of targets (Kaiser et al., 19 Jan 2026).
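The noise-magnitude leak can be demonstrated in a few lines: an attacker who observes repeated releases estimates the noise scale and thereby distinguishes candidate datasets. This is a toy version of the attack surface, not the specific reconstruction attack of Protivash et al. (2022):

```python
import numpy as np

def idp_release(data, epsilon, lo, hi, rng):
    """Naive iDP mean release: Laplace noise scaled to the mean's local
    sensitivity -- which itself depends on the data. That dependence is
    the leak exploited below."""
    data = np.asarray(data, dtype=float)
    ls = max(max(hi - r, r - lo) for r in data) / len(data)
    return data.mean() + rng.laplace(0.0, ls / epsilon)

rng = np.random.default_rng(0)
candidates = {"concentrated": [50] * 10,
              "with_outlier": [50] * 9 + [95]}
# The attacker measures the empirical spread of many releases; the
# candidate whose local sensitivity matches the observed noise scale
# is (with high confidence) the true dataset.
spreads = {name: np.std([idp_release(d, 1.0, 0, 100, rng)
                         for _ in range(2000)])
           for name, d in candidates.items()}
```

Here the outlier roughly doubles the local sensitivity (9.5 vs. 5.0), so the two spread estimates separate cleanly, and the attacker learns whether the outlier is present at essentially no cost against the nominal ε.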
Mitigation strategies include:
- Imposing joint contracts that add a Δ-divergence upper bound to cap excess risk (Kaiser et al., 19 Jan 2026).
- “Smooth sensitivity” wrappers to obscure the data-dependence of noise (Soria-Comas et al., 2016, Protivash et al., 2022).
- Mechanism-level auditing, e.g., privacy profiles or f-DP trade-off visualizations, to detect deviations from the nominal individual guarantee (Kaiser et al., 19 Jan 2026).
- Restricting the exposure of local sensitivity or noise parameters to outputs.
- Careful system design—e.g., padding batch size, decoupling noise and sampling rates, and grouping budgets—to constrain the collective effects of heterogeneous policies (Lange et al., 29 Jan 2025, Boenisch et al., 2023).
7. Extensions, Future Directions, and Open Questions
Research on iDP is converging along several axes:
- Mechanism generalization: Beyond additive mechanisms, extending iDP to exponential mechanisms, complex queries, and compositional data analysis flows (e.g., per-instance GDP/PLD frameworks) (Koskela et al., 2022, Wang, 2017).
- Feature- and attribute-level iDP: Leveraging partial sensitivity/PLIS for per-feature privacy budgeting or targeted heterogeneous noise (Mueller et al., 2021, Mueller et al., 2022).
- Personalized DP and privacy markets: Allocating risk/reward per individual for data markets and federated analytics (Boenisch et al., 2022, Cummings et al., 2018).
- Robust deployment: Interactive interfaces and policy auditing to align individual understanding of privacy with the technical implications of collective budget selection (Kaiser et al., 19 Jan 2026).
- Security against collusion: Defensive mechanisms and contract design to immunize iDP implementations against adversarial manipulation of peer budgets (Kaiser et al., 19 Jan 2026).
Open problems include principled intermediate neighbor sets that trade off utility against group privacy, safe release of data-driven noise magnitudes, and full compositional auditing of iDP-based systems in adversarial deployment environments.
References:
- "Conciliating Privacy and Utility in Data Releases via Individual Differential Privacy and Microaggregation" (Soria-Comas et al., 2023)
- "Individual Differential Privacy: A Utility-Preserving Formulation of Differential Privacy Guarantees" (Soria-Comas et al., 2016)
- "Reconstruction Attacks on Aggressive Relaxations of Differential Privacy" (Protivash et al., 2022)
- "Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy" (Kaiser et al., 19 Jan 2026)
- "Per-instance Differential Privacy" (Wang, 2017)
- "Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees" (Boenisch et al., 2022)
- "Individualized Privacy Assignment for DP-SGD" (Boenisch et al., 2023)
- "Federated Learning With Individualized Privacy Through Client Sampling" (Lange et al., 29 Jan 2025)
- "Guarding the Privacy of Label-Only Access to Neural Network Classifiers via iDP Verification" (Kabaha et al., 23 Feb 2025)
- "Differential Privacy with Random Projections and Sign Random Projections" (Li et al., 2023)
- "Individual Sensitivity Preprocessing for Data Privacy" (Cummings et al., 2018)
- "Partial sensitivity analysis in differential privacy" (Mueller et al., 2021)
- "How Do Input Attributes Impact the Privacy Loss in Differential Privacy?" (Mueller et al., 2022)
- "Individual Privacy Accounting via a Renyi Filter" (Feldman et al., 2020)
- "Individual Privacy Accounting with Gaussian Differential Privacy" (Koskela et al., 2022)