
Privacy-PSA-AoDS Profiles

Updated 14 January 2026
  • Privacy-PSA-AoDS profiles are comprehensive frameworks that integrate differential privacy, PSA protocols, and compositional privacy accounting for distributed data analysis.
  • They utilize privacy profiles and f-DP with hockey-stick divergences to achieve tighter privacy-utility tradeoffs and rigorous privacy accounting.
  • Applications include secure location-based services, federated learning, and AI profiling, demonstrating scalable protocols for end-to-end data protection.

A Privacy-PSA-AoDS profile is a formalization of privacy-preserving data analysis and user modeling, unifying cryptographic, statistical, and algorithmic frameworks for profiling and analytics-on-demand services across distributed environments. This paradigm integrates rigorous definitions from differential privacy (DP), the Private Stream Aggregation (PSA) protocol family, compositional privacy accounting, and advanced attacks/models, while emphasizing both the semantics of privacy loss (ε, δ) and the implementation of cryptographically secure aggregation and profile sharing. The AoDS (Analytics-on-Demand Services) extension generalizes these concepts to diverse domains including time-series, federated learning, location-based services, and joint analytics across organizations.

1. Mathematical Foundations: Privacy Profiles, f-DP, and Divergences

Central to Privacy-PSA-AoDS is the use of privacy profiles: mappings $\delta_{\mathcal{M}}(\varepsilon)$ giving, for each $\varepsilon$, the minimum $\delta$ for which a mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-DP. Rather than specifying privacy as a single pair $(\varepsilon, \delta)$, one tracks the full privacy profile:

$$\delta_{\mathcal{M}}(\varepsilon) := \max_{X \sim X'} \sup_{E \subseteq \mathcal{O}} \left\{ \Pr[\mathcal{M}(X) \in E] - e^{\varepsilon} \Pr[\mathcal{M}(X') \in E] \right\}.$$

This is equivalent to bounding the hockey-stick divergence

$$H_{e^{\varepsilon}}(P\|Q) = \int \left(p(t) - e^{\varepsilon} q(t)\right)_{+} \, dt$$

over all pairs of output distributions induced by neighboring datasets. The framework further generalizes to $f$-DP, where an $f$-divergence $D_f(P\|Q)$, parameterized by a convex function $f(\cdot)$, captures the tradeoff in an adversary's ability to distinguish the distributions $P$ and $Q$ (Koskela et al., 2024).

Key implications:

  • Exact privacy profiles often yield tighter privacy-utility tradeoffs compared to Rényi DP composition, especially when post-processing or randomization is involved.
  • Composition and post-processing properties are governed by the convexity and joint convexity of $f$.
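
To make the privacy-profile definition concrete, the following sketch numerically integrates the hockey-stick divergence for the Gaussian mechanism (sensitivity $\Delta$, noise scale $\sigma$) and checks it against the known closed-form profile of Balle and Wang. This is a minimal illustration; all function names are assumptions of this sketch, not part of any library.

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_profile_analytic(eps, sens, sigma):
    """Closed-form delta(eps) for the Gaussian mechanism."""
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return std_normal_cdf(a - b) - math.exp(eps) * std_normal_cdf(-a - b)

def hockey_stick_numeric(eps, sens, sigma, lo=-30.0, hi=30.0, n=200_000):
    """H_{e^eps}(P||Q) = integral of (p - e^eps q)_+ over t,
    with P = N(sens, sigma^2) and Q = N(0, sigma^2), by trapezoid rule."""
    e_eps = math.exp(eps)
    h = (hi - lo) / n
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        p = c * math.exp(-((t - sens) ** 2) / (2.0 * sigma ** 2))
        q = c * math.exp(-(t ** 2) / (2.0 * sigma ** 2))
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * max(p - e_eps * q, 0.0)
    return total * h

delta_num = hockey_stick_numeric(1.0, 1.0, 1.0)
delta_ana = gaussian_profile_analytic(1.0, 1.0, 1.0)
```

The numeric integral and the analytic profile agree to several decimal places, illustrating that the hockey-stick formulation and the $(\varepsilon,\delta)$-DP profile are two views of the same quantity.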

2. Cryptographic Aggregation Protocols: PSA and Computational Differential Privacy

The PSA protocol, introduced by Shi et al. and refined by Valovich and Aldà (Valovich et al., 2017), is a cornerstone of distributed, privacy-preserving aggregation. Each user encrypts their local contribution under a key-homomorphic weak PRF, and the aggregator learns only the (optionally perturbed) sum over all users:

$$\textsf{PSADec}_{s_0}(t, c_1, \ldots, c_n) = \varphi^{-1}\left( F_{s_0}(t) \prod_{i=1}^{n} c_i \right) = \sum_{i=1}^{n} x_i.$$

Security is modeled by Aggregator Obliviousness (AO2): an aggregator that has compromised up to $(1-\gamma)n$ user keys cannot distinguish encryptions that yield the same aggregate on any uncompromised subset.

Statistical $(\varepsilon,\delta)$-DP of the noise-addition mechanism lifts to computational DP for the PSA protocol under the assumption of PRF security:

  • Any efficient adversary distinguishing protocol outputs with advantage better than $(e^{\varepsilon}, \delta)$ enables breaking either the base DP mechanism or the cryptographic scheme (Valovich et al., 2017).

A post-quantum instantiation arises from Skellam-LWE, where errors are sampled from the symmetric Skellam distribution, preserving both computational security and distributed DP noise (Valovich et al., 2017).
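
The key-homomorphic structure behind PSA decryption can be illustrated with a toy additive scheme over Z_M using $F_s(t) = s \cdot H(t) \bmod M$. This is structurally faithful — the keys sum to zero modulo $M$, so the masks cancel under aggregation — but it is NOT a secure instantiation; the modulus and all names below are illustrative choices of this sketch.

```python
import hashlib
import secrets

M = 2**61 - 1  # toy modulus (illustrative choice)

def h(t):
    """Public hash of the time step into Z_M (random-oracle stand-in)."""
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % M

def f(s, t):
    """Toy key-homomorphic weak PRF: F_s(t) = s * H(t) mod M."""
    return (s * h(t)) % M

def keygen(n):
    """User keys plus an aggregator key chosen so all keys sum to 0 mod M."""
    keys = [secrets.randbelow(M) for _ in range(n)]
    s0 = (-sum(keys)) % M
    return s0, keys

def psa_enc(s, t, x):
    """Each user masks their value with their own PRF output."""
    return (x + f(s, t)) % M

def psa_dec(s0, t, ciphertexts):
    """The aggregator's mask cancels the users' masks, revealing only the sum."""
    return (sum(ciphertexts) + f(s0, t)) % M

s0, keys = keygen(5)
xs = [3, 1, 4, 1, 5]
cts = [psa_enc(s, 0, x) for s, x in zip(keys, xs)]
total = psa_dec(s0, 0, cts)
```

In the full protocol each $x_i$ would additionally carry locally sampled DP noise (e.g., geometric or Skellam), so the decrypted value is a perturbed sum.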

3. Privacy Profiles for Private Selection and Compositional Accounting

Private selection mechanisms (e.g., ReportNoisyMax, PrivateTuning) are analyzed using privacy profiles and the hockey-stick formulation, yielding sharp privacy bounds that directly track how composition and randomization affect the $(\varepsilon, \delta)$ budget. The formal compositional recipe (Koskela et al., 2024) states:

Given a base mechanism $Q$ and a random number of repetitions $K$, the composed mechanism $A$ has privacy profile bounded by

$$D_f(A(X) \| A(X')) \leq \sum_{y \in \mathcal{Y}} f\!\left( \frac{Q(y)\, \varphi'(q_y)}{Q'(y)\, \varphi'(q'_y)} \right) Q'(y)\, \varphi'(q'_y),$$

where $f$ is convex and $\varphi(z) = \mathbb{E}[z^{K}]$ and $\varphi'(z) = \mathbb{E}[K z^{K-1}]$ are the probability generating function of $K$ and its derivative.

This framework supports advanced privacy accounting for randomized selection, outperforming RDP-based conversions:

  • Binomial/negative-binomial choices for $K$ yield sharply reduced divergence relative to Poisson or geometric, reflecting the tighter concentration of the number of trials.
  • Experiments in DP-SGD model selection and Generalized PTR show up to 20% reductions in effective $\varepsilon$ for fixed $\delta$, with corresponding improvements in utility.
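
The role of the PGF in the bound can be seen by comparing a binomial and a Poisson choice of $K$ at matched mean: since $1 + x \leq e^x$, the binomial PGF and its derivative are pointwise smaller at $z = e^{\varepsilon} \geq 1$, which is exactly the tighter-concentration effect noted above. A minimal check (function names are illustrative):

```python
import math

def poisson_pgf(z, lam):
    """phi(z) = E[z^K] = exp(lam (z - 1)) for K ~ Poisson(lam)."""
    return math.exp(lam * (z - 1.0))

def poisson_pgf_deriv(z, lam):
    """phi'(z) = E[K z^(K-1)] = lam * phi(z) for Poisson."""
    return lam * math.exp(lam * (z - 1.0))

def binom_pgf(z, m, p):
    """phi(z) = (1 - p + p z)^m for K ~ Binomial(m, p)."""
    return (1.0 - p + p * z) ** m

def binom_pgf_deriv(z, m, p):
    """phi'(z) = m p (1 - p + p z)^(m - 1) for Binomial(m, p)."""
    return m * p * (1.0 - p + p * z) ** (m - 1)

lam, m = 4.0, 20
p = lam / m          # match the mean: E[K] = lam in both cases
z = math.exp(1.0)    # evaluate at z = e^eps with eps = 1

# The more concentrated binomial count gives smaller phi and phi',
# hence a smaller right-hand side in the composition bound.
pgf_gap = poisson_pgf(z, lam) - binom_pgf(z, m, p)
deriv_gap = poisson_pgf_deriv(z, lam) - binom_pgf_deriv(z, m, p)
```

Both gaps are strictly positive at $z > 1$, so for the same expected number of repetitions the binomial choice yields a strictly tighter divergence bound.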

4. Distributed, Aggregated, and Application-Specific Privacy Frameworks

Privacy-PSA-AoDS profiles support secure analytics across diverse domains:

  • Location-centric profiles (LCPs) in geosocial networks utilize homomorphic encryption, threshold secret sharing, and zero-knowledge proofs to guarantee $k$-privacy: until $k$ users have contributed, venues and providers cannot infer individual data better than random chance. This model generalizes to aggregation in other distributed systems (smart meters, fitness data) (Carbunar et al., 2013).
  • Profile "gist" aggregation (Bilogrevic et al., 2014) enables encrypted, differentially private submission of aggregate statistics, protecting user detail while supporting economic incentives for both users and aggregators. The protocol is scalable, achieves near-optimal RMSE in profile estimation, and allows modular integration of pricing and dissemination.
  • Dual-ring protection (Ullah et al., 16 Jun 2025) layers local DP perturbation and entropy-based profile evaporation with PIR-based service retrieval, preserving both profile and query privacy in personalized recommendation and online advertising.
  • Private Set Alignment (PSA) protocols for collaborative analytics (Wang et al., 2024): two parties can compute an inner join over customer IDs, learning either (i) only the joined data (privacy level 1) or (ii) only the join size (privacy level 2), via oblivious PRFs and switching networks with efficient OT-based implementations.
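
The privacy-level-2 join-size computation can be sketched by tagging customer IDs with a keyed PRF and intersecting the tag sets. In the actual protocol the tags come from an oblivious PRF evaluation, so neither party holds the key; here a shared key stands in purely to show the data flow, and all names are illustrative.

```python
import hashlib
import hmac

def prf_tag(key: bytes, item: str) -> bytes:
    """Keyed pseudorandom tag for an ID (HMAC-SHA256 as the PRF)."""
    return hmac.new(key, item.encode(), hashlib.sha256).digest()

def join_size(key: bytes, ids_a, ids_b) -> int:
    """Privacy level 2 sketch: the parties exchange only PRF tags,
    so each learns the size of the inner join but not the other
    party's raw (non-intersecting) IDs."""
    tags_a = {prf_tag(key, i) for i in ids_a}
    tags_b = {prf_tag(key, i) for i in ids_b}
    return len(tags_a & tags_b)

key = b"session-key"  # in the real protocol, derived obliviously (OPRF)
party_a = ["alice", "bob", "carol", "dave"]
party_b = ["bob", "dave", "erin"]
n_joined = join_size(key, party_a, party_b)
```

With a shared key this sketch leaks more than the real protocol (either party can test membership of guessed IDs); the OPRF and switching-network machinery in the cited construction exists precisely to close that gap.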

5. Invariant-Preserving Mechanisms and Domain-Specific DP Specifications

Large-scale statistical disclosure control, such as in the US Decennial Census, employs privacy-PSA-AoDS methodologies via invariant-preserving algorithms:

  • The Permutation Swapping Algorithm (PSA) is shown to satisfy pure $\varepsilon$-DP subject to the invariants it leaves unaltered—row-margins in the released contingency tables (Bailie et al., 14 Jan 2025).
  • The mathematical privacy guarantee is

$$\Pr[T_{\mathrm{PSA}}(x) \in E] \leq \exp\left(\varepsilon_{D} \cdot d_{r}(x, x')\right) \Pr[T_{\mathrm{PSA}}(x') \in E],$$

where $\varepsilon_{D}$ is a function of the swap rate $p$ and the stratum size $b$.

  • Compared with the TopDown Algorithm (TDA) under zCDP accounting, PSA achieves lower nominal privacy loss for the same swap rate, but preserves more invariants (noiseless margins), which can increase information leakage; the theoretical framework formalizes when and how these invariants influence overall privacy risk.
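
A toy version of margin-preserving swapping, assuming records carry a stratum, a geography, and a payload (field names hypothetical): permuting geography labels among a $p$-fraction of records within each stratum changes record-level associations while leaving the stratum-by-geography margins exactly invariant, which is the kind of invariant the guarantee above is conditioned on.

```python
import random
from collections import Counter

def permutation_swap(records, p, seed=0):
    """Toy swap: records are (stratum, geography, payload) tuples.
    Within each stratum, roughly a p-fraction of records have their
    geography labels permuted among themselves."""
    rng = random.Random(seed)
    by_stratum = {}
    for idx, (s, _, _) in enumerate(records):
        by_stratum.setdefault(s, []).append(idx)
    out = list(records)
    for idxs in by_stratum.values():
        chosen = [i for i in idxs if rng.random() < p]
        geos = [out[i][1] for i in chosen]
        rng.shuffle(geos)  # permute geography labels among chosen records
        for i, g in zip(chosen, geos):
            s, _, x = out[i]
            out[i] = (s, g, x)
    return out

recs = [("s1", "g1", "a"), ("s1", "g2", "b"),
        ("s2", "g1", "c"), ("s2", "g2", "d")]
swapped = permutation_swap(recs, p=0.5, seed=42)

# The stratum-by-geography margins (the invariant) are unchanged,
# even though payload-geography associations may have moved.
margins_before = Counter((s, g) for s, g, _ in recs)
margins_after = Counter((s, g) for s, g, _ in swapped)
```

The released margins carry no noise at all — exactly the "noiseless margins" whose leakage implications the framework formalizes.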

6. Value-Action Alignment and Multi-dimensional Privacy Profiles

In the context of LLMs and AI systems, Privacy-PSA-AoDS is deployed to measure and analyze complex, multi-dimensional privacy profiles in which:

  • Privacy Attitudes, Prosocialness Scale for Adults (PSA), and Acceptance of Data Sharing (AoDS) are jointly elicited to form a model's profile (Chen et al., 7 Jan 2026).
  • Multi-group SEM identifies the directional effects: Privacy Concern typically predicts less data sharing; Prosocialness predicts more.
  • The Value-Action Alignment Rate (VAAR) quantitatively measures adherence to human-referenced value-action expectations, with lower VAAR indicating closer alignment.
  • Empirical evaluations show heterogeneity across LLMs: some maintain strong value-action coherence (e.g., GPT-4o, Llama3-70B), while others are misaligned, highlighting model-specificity and the necessity for explicit auditing and tuning.
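
One plausible operationalization of VAAR — an assumption of this sketch, not necessarily the paper's exact definition — scores the fraction of scenarios where a model's observed sharing decision deviates from the decision its elicited values imply, so that lower values mean closer value-action alignment:

```python
def vaar(expected_actions, observed_actions):
    """Toy value-action alignment rate: fraction of scenarios where
    the observed action deviates from the value-implied expectation.
    Lower values indicate closer alignment (assumed operationalization)."""
    assert len(expected_actions) == len(observed_actions)
    mismatches = sum(e != o for e, o in zip(expected_actions, observed_actions))
    return mismatches / len(expected_actions)

# Elicited values (privacy concern, prosocialness, AoDS) imply an
# expected share/refuse decision per scenario; the model's observed
# decisions are scored against those expectations.
expected = ["refuse", "share", "refuse", "share"]
observed = ["refuse", "share", "share", "share"]
score = vaar(expected, observed)
```

A perfectly value-coherent model would score 0.0 on every scenario battery; heterogeneity across models shows up as a spread in this rate.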

7. Synthesis: Key Theorems, Trade-offs, and Utility-Privacy Balance

Across variants of Privacy-PSA-AoDS profiles, the following principles emerge:

  • Differential privacy is often implemented locally by users (geometric, Laplace, or Skellam noise), then secured by cryptographic protocols (homomorphic encryption, secret sharing, or OT-based aggregation).
  • Privacy profiles and $f$-DP/hockey-stick divergences enable tight tracking of overall loss, outperforming standard RDP-to-DP conversions in composed and randomized-selection settings.
  • Aggregation protocols, including post-quantum (LWE-based) variants, provide end-to-end privacy against computationally bounded adversaries, including resistance to side-channel attacks through non-adaptive and adaptive security definitions.
  • Domain-specific mechanisms (Census PSA, location LCPs, profile "gist" sharing) can formalize which invariants are preserved, tuning the privacy-utility tradeoff for practical high-dimensional analytics.
  • Application to joint analytics and multi-party computation leverages PSA/PSI/OT techniques to enable data sharing, intersection, and conversion-rate estimation on large-scale real-world datasets with minimal leakage.
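
The "local noise plus cryptographic aggregation" pattern relies on noise distributions closed under addition. The symmetric Skellam distribution used in the Skellam-LWE instantiation has this property (a sum of independent Skellams is Skellam), so each of $n$ users can add Skellam noise with parameter $\mu/n$ and the decrypted aggregate carries Skellam($\mu$) noise with no trusted curator. A minimal sketch, with the sampler and all names being illustrative:

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson sampler via Knuth's method (adequate for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def skellam_noise(mu, rng):
    """Symmetric Skellam(mu): difference of two Poisson(mu) draws."""
    return sample_poisson(mu, rng) - sample_poisson(mu, rng)

def distributed_noisy_sum(values, total_mu, rng):
    """Each user perturbs locally with Skellam(total_mu / n); since
    independent Skellams add to a Skellam, the aggregate carries
    Skellam(total_mu) noise overall."""
    n = len(values)
    return sum(x + skellam_noise(total_mu / n, rng) for x in values)

rng = random.Random(7)
xs = [1] * 100
noisy = distributed_noisy_sum(xs, total_mu=4.0, rng=rng)
```

In the full protocol each perturbed value would additionally be encrypted under the PSA scheme, so the aggregator sees only the noisy sum; here the ciphertext layer is omitted to isolate the noise-addition logic.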

In summary, the Privacy-PSA-AoDS framework encapsulates a comprehensive, technically rigorous architecture for privacy-preserving profiling and data analytics, unified across statistical, cryptographic, and compositional domains. It defines both the mathematical objects (privacy profiles, invariants, and divergences) and the concrete protocols (PSA, DP-perturbed cryptoaggregation, PIR-based retrieval) that collectively support the privacy-utility frontier in distributed, federated, and application-specific settings (Valovich et al., 2017, Koskela et al., 2024, Bailie et al., 14 Jan 2025, Bilogrevic et al., 2014, Carbunar et al., 2013, Ullah et al., 16 Jun 2025, Wang et al., 2024, Du et al., 26 Jun 2025, Chen et al., 7 Jan 2026).
