
Learning-Augmented PSMM Protocol

Updated 27 January 2026
  • The paper presents the Learning-Augmented PSMM protocol that integrates information-theoretic privacy via polynomial masking with computational efficiency gained from low-rank tensor decompositions.
  • It guarantees perfect privacy under a semi-honest model with up to t-1 colluding agents and achieves optimal recovery thresholds through careful algebraic encoding and polynomial interpolation.
  • By leveraging learned tensor decompositions, the protocol significantly reduces local computation costs—demonstrating up to an 80% speedup for large matrices—while maintaining rigorous security guarantees.

Learning-Augmented PSMM (Perfectly Secure Collaborative Matrix Multiplication) is a protocol designed for secure multiparty computation (MPC) of matrix products, specifically $A^\top B$ over a finite field $\F$, under strict storage and privacy constraints. The core innovation integrates information-theoretic secrecy based on polynomial-masking techniques with computational speedup through learning-based, low-rank tensor decompositions, yielding substantial improvements in local computation while maintaining perfect security guarantees (He et al., 14 Jan 2026).

1. Problem Framework and Security Model

Given $N$ semi-honest agents, connected to a controller and able to store at most a $1/k$ fraction of each input matrix (i.e., one $m \times (m/k)$ block of $A$ and $B$), the protocol considers the scenario where at most $t-1$ agents may collude. The objective is for the controller to compute $A^\top B$ exactly, ensuring:

  • Information-theoretic privacy: Any coalition of up to $t-1$ agents obtains no information about the inputs.
  • Local storage constraint: Each agent holds exactly one block of each matrix plus masking randomness.
  • Optimal recovery threshold: The number of agents $N$ achieves matching lower bounds for polynomial-sharing-based secure matrix multiplication.

This setting adheres to the standard security definitions in MPC and coded computing, with explicit attention to storage and collusion bounds, and assumes a trusted source and private authenticated channels (He et al., 14 Jan 2026).

2. Algebraic Structure: Polynomial Masking and Coefficient Alignment

Input matrices $A, B \in \F^{m \times m}$ are partitioned into $k$ column blocks:

$A = [A_1~\cdots~A_k],~~B = [B_1~\cdots~B_k],~~A_i, B_j \in \F^{m \times (m/k)}.$

Each block is encoded as a sparse masking polynomial:

\begin{align*} g_A(x) &= \sum_{i=1}^{k} A_i x^{i-1} + \sum_{\ell=1}^{t-1} R^{(A)}_\ell x^{k^2+\ell-1}, \\ g_B(x) &= \sum_{j=1}^{k} B_j x^{k(j-1)} + \sum_{\ell=1}^{t-1} R^{(B)}_\ell x^{k^2+\ell-1}. \end{align*}

The “signal support” terms contain the input blocks; the “masking tail” terms ($R^{(A)}_\ell$, $R^{(B)}_\ell$, drawn uniformly and independently over $\F^{m \times (m/k)}$) ensure information-theoretic security.
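As a concrete illustration, the encoding above can be sketched in Python over a small prime field. The modulus, toy sizes, and helper names below are illustrative choices, not taken from the paper:

```python
import numpy as np

p = 97                 # small prime modulus for F_p (illustrative)
m, k, t = 4, 2, 2      # toy sizes: m x m inputs, k column blocks, threshold t

rng = np.random.default_rng(0)
A = rng.integers(0, p, (m, m))
B = rng.integers(0, p, (m, m))

# Partition into k column blocks of shape m x (m/k).
A_blocks = np.split(A, k, axis=1)
B_blocks = np.split(B, k, axis=1)

# Masking tail: t-1 uniform random blocks per input matrix.
R_A = [rng.integers(0, p, (m, m // k)) for _ in range(t - 1)]
R_B = [rng.integers(0, p, (m, m // k)) for _ in range(t - 1)]

def g_A(x):
    """g_A(x) = sum_i A_i x^(i-1) + sum_l R^(A)_l x^(k^2 + l - 1) mod p."""
    sig = sum(Ai * pow(x, i, p) for i, Ai in enumerate(A_blocks))
    tail = sum(Rl * pow(x, k * k + l, p) for l, Rl in enumerate(R_A))
    return (sig + tail) % p

def g_B(x):
    """g_B(x) = sum_j B_j x^(k(j-1)) + sum_l R^(B)_l x^(k^2 + l - 1) mod p."""
    sig = sum(Bj * pow(x, k * j, p) for j, Bj in enumerate(B_blocks))
    tail = sum(Rl * pow(x, k * k + l, p) for l, Rl in enumerate(R_B))
    return (sig + tail) % p

# Agent n's share is the pair (g_A(alpha_n), g_B(alpha_n)).
share_A, share_B = g_A(5), g_B(5)
```

Note that each share has the same $m \times (m/k)$ shape as a single block, which is what enforces the $1/k$ storage constraint per agent.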

For each publicly chosen $\alpha_n \in \F$, agent $n$ receives $g_A(\alpha_n), g_B(\alpha_n)$. Local computation yields

$M(\alpha_n) = g_A(\alpha_n)^\top g_B(\alpha_n),$

which, as a polynomial in $x$, decomposes into coefficients $M_\nu$ such that the indices $\{i-1 + k(j-1)\}$ directly recover the $k^2$ products $A_i^\top B_j$, while all other coefficients are linear combinations involving the random masks.
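This coefficient alignment can be checked directly by expanding $M(x)$ symbolically: convolve the coefficient tables of $g_A$ and $g_B$ and confirm that the coefficient at exponent $(i-1)+k(j-1)$ is exactly $A_i^\top B_j$. A toy sketch with illustrative parameters (not the paper's code):

```python
import numpy as np

p = 97
m, k, t = 4, 2, 2
rng = np.random.default_rng(1)

A = rng.integers(0, p, (m, m))
B = rng.integers(0, p, (m, m))
A_blk = np.split(A, k, axis=1)
B_blk = np.split(B, k, axis=1)

# Coefficient tables: exponent -> m x (m/k) block.
gA = {i: A_blk[i] for i in range(k)}          # signal support of g_A
gB = {k * j: B_blk[j] for j in range(k)}      # signal support of g_B
for l in range(t - 1):                        # masking tails start at x^(k^2)
    gA[k * k + l] = rng.integers(0, p, (m, m // k))
    gB[k * k + l] = rng.integers(0, p, (m, m // k))

# Coefficients of M(x) = g_A(x)^T g_B(x): a convolution of the two tables.
deg = 2 * (k * k + t - 2)
M = {nu: np.zeros((m // k, m // k), dtype=np.int64) for nu in range(deg + 1)}
for a, Ca in gA.items():
    for b, Cb in gB.items():
        M[a + b] = (M[a + b] + Ca.T @ Cb) % p

# Signal exponents (i-1) + k(j-1) carry exactly A_i^T B_j: every product
# touching a mask lands at exponent >= k^2, above all signal exponents.
for i in range(k):
    for j in range(k):
        assert np.array_equal(M[i + k * j], (A_blk[i].T @ B_blk[j]) % p)
```

The final loop verifies the separation argument: signal-times-signal terms occupy exponents $0, \ldots, k^2-1$ bijectively, while any term involving a mask has exponent at least $k^2$.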

The scheme is reminiscent of Beaver-style MPC but is realized via algebraic encoding: the masking tail effectively plays the role of classical Beaver triples, ensuring that any set of $t-1$ evaluations (i.e., views of colluding agents) reveals zero information about the signals, as formalized by the polynomial masking lemma (He et al., 14 Jan 2026).

3. Recovery Thresholds and Information-Theoretic Privacy

The number of nonzero coefficients in $M(x)$ determines the minimal number of agents required for recovery:

$N^\star(k,t) = \min\{2k^2+2t-3,~k^2+kt+t-2\}.$

By assigning $N \ge N^\star(k,t)$ agents and choosing the $\alpha_n$ randomly, the protocol constructs a block-Vandermonde interpolation system of full rank with high probability. This approach guarantees both perfect privacy against up to $t-1$ colluders and optimal recovery, matching known information-theoretic lower bounds for polynomial-sharing protocols, under the given storage and privacy parameters (He et al., 14 Jan 2026).
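The threshold formula translates directly into code. The helper below (name illustrative) evaluates the minimum and notes which branch dominates:

```python
def n_star(k: int, t: int) -> int:
    """Optimal recovery threshold N*(k, t) = min{2k^2 + 2t - 3, k^2 + kt + t - 2}."""
    return min(2 * k * k + 2 * t - 3, k * k + k * t + t - 2)

# The second branch is smaller whenever t(k-1) < (k-1)(k+1), i.e. t <= k.
# For example, the paper's experimental setting k = 8, t = 4 gives
# min{133, 98} = 98 agents.
print(n_star(8, 4))   # -> 98
print(n_star(2, 2))   # -> 8
```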

4. Learning-Augmented Protocol: Tensorization and Low-Rank Methods

The learning-augmented extension, LA-PSMM, replaces each agent's dense multiplication by a lower-rank, learned tensor decomposition. Conventional local operations require $\mathcal{O}((m/k)^3)$ finite-field operations, which becomes prohibitive for large $m$. Instead, matrices are multiplied in bilinear tensorized form:

$\operatorname{vec}(C) = \sum_{r=1}^T \langle u_r,\operatorname{vec}(A) \rangle \langle v_r, \operatorname{vec}(B) \rangle w_r,$

where $T$ is the rank of the decomposition; Strassen's method achieves $T=7$ for $2\times2$ matrix multiplication, while learned decompositions (e.g., via AlphaTensor) achieve ranks $T_l \ll (m/k)^3$ for larger blocks, enabling scalable reductions in local computation.
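The bilinear form is easy to make concrete with Strassen's classical rank-7 decomposition for $2\times2$ blocks. The factor matrices below are the standard Strassen factors, written for ordinary $C = AB$; they stand in for the larger learned factors the protocol would use:

```python
import numpy as np

# Strassen's rank-7 decomposition of 2x2 matrix multiplication C = A B,
# in the bilinear form vec(C) = sum_r <u_r, vec(A)> <v_r, vec(B)> w_r.
# vec() flattens row-major: [X11, X12, X21, X22]; row r holds (u_r, v_r, w_r).
U = np.array([[1,0,0,1],[0,0,1,1],[1,0,0,0],[0,0,0,1],[1,1,0,0],[-1,0,1,0],[0,1,0,-1]])
V = np.array([[1,0,0,1],[1,0,0,0],[0,1,0,-1],[-1,0,1,0],[0,0,0,1],[1,1,0,0],[0,0,1,1]])
W = np.array([[1,0,0,1],[0,0,1,-1],[0,1,0,1],[1,0,1,0],[-1,1,0,0],[0,0,0,1],[1,0,0,0]])

def bilinear_matmul(A, B, U, V, W):
    """Multiply via the rank-T bilinear expansion (T = 7 products here)."""
    a, b = A.reshape(-1), B.reshape(-1)
    products = (U @ a) * (V @ b)       # the T scalar products <u_r,a><v_r,b>
    return (W.T @ products).reshape(A.shape)

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert np.array_equal(bilinear_matmul(A, B, U, V, W), A @ B)
```

Only 7 scalar products are taken instead of 8, which is exactly the saving the rank $T$ quantifies; a learned decomposition plays the same role at much larger block sizes.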

The local step thus becomes:

$\hat{M}(\alpha_n) = \sum_{r=1}^{T_l} \langle u_r, \operatorname{vec}(g_A(\alpha_n)) \rangle \langle v_r, \operatorname{vec}(g_B(\alpha_n)) \rangle \operatorname{mat}(w_r),$

with learned $(u_r, v_r, w_r)$ from tensor decomposition, and $T_l$ the learned rank (He et al., 14 Jan 2026).

Operator-invariance: If the local bilinear mapping is exactly equivalent to $g_A(\alpha_n)^\top g_B(\alpha_n)$ for all $\alpha_n$, it preserves the distribution of signal and masking coefficients in $M(x)$, and thus compromises neither privacy nor the recovery threshold.

5. Protocol Workflow and Computational Complexity

Protocol Workflow

The LA-PSMM protocol consists of the following steps:

| Step | Actor | Operation |
|---|---|---|
| Partition | Trusted source | Split $A, B$ into $k$ blocks |
| Masking | Trusted source | Sample $R^{(A)}_\ell, R^{(B)}_\ell$ |
| Polynomial encoding | Trusted source | Form $g_A(x), g_B(x)$ |
| Point selection | Trusted source | Choose $\alpha_1,\ldots,\alpha_N$ |
| Share distribution | Trusted source | Send $g_A(\alpha_n), g_B(\alpha_n)$ |
| Local multiplication | Agent $n$ | Compute $\hat M(\alpha_n)$ via learned expansion |
| Upload results | Agent $n$ | Send $\hat M(\alpha_n)$ to controller |
| Interpolation | Controller | Reconstruct $A_i^\top B_j$ from block coefficients |
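Putting the workflow together, here is a minimal end-to-end sketch. All parameters and names are illustrative; for simplicity the controller interpolates all $2k^2+2t-3$ coefficients with that many agents rather than the optimized $N^\star(k,t)$ threshold, and the local step uses a plain multiply rather than a learned expansion:

```python
import numpy as np

p = 10007                           # prime modulus (illustrative)
m, k, t = 4, 2, 2                   # toy parameters
n_coeffs = 2 * (k * k + t - 2) + 1  # 2k^2 + 2t - 3 coefficients in M(x)
N = n_coeffs                        # plain interpolation needs one agent per coeff

rng = np.random.default_rng(2)

def randblk():
    return rng.integers(0, p, (m, m // k)).astype(object)  # exact Python ints

A = rng.integers(0, p, (m, m)).astype(object)
B = rng.integers(0, p, (m, m)).astype(object)
A_blk, B_blk = np.split(A, k, axis=1), np.split(B, k, axis=1)

# Source: coefficient tables of g_A(x), g_B(x) (signal blocks + masking tail).
gA = {i: A_blk[i] for i in range(k)}
gB = {k * j: B_blk[j] for j in range(k)}
for l in range(t - 1):
    gA[k * k + l], gB[k * k + l] = randblk(), randblk()

def evaluate(poly, x):
    return sum(c * pow(x, e, p) for e, c in poly.items()) % p

# Agents: each receives one evaluation pair and multiplies locally.
alphas = list(range(1, N + 1))
uploads = [evaluate(gA, a).T.dot(evaluate(gB, a)) % p for a in alphas]

# Controller: solve the Vandermonde system mod p (Gauss-Jordan elimination).
V = [[pow(a, nu, p) for nu in range(n_coeffs)] for a in alphas]
rhs = [u.copy() for u in uploads]
for col in range(n_coeffs):
    piv = next(r for r in range(col, N) if V[r][col] != 0)
    V[col], V[piv] = V[piv], V[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    inv = pow(V[col][col], p - 2, p)            # Fermat inverse in F_p
    V[col] = [v * inv % p for v in V[col]]
    rhs[col] = rhs[col] * inv % p
    for r in range(N):
        if r != col and V[r][col] != 0:
            f = V[r][col]
            V[r] = [(vr - f * vc) % p for vr, vc in zip(V[r], V[col])]
            rhs[r] = (rhs[r] - f * rhs[col]) % p

# Coefficient at exponent (i-1) + k(j-1) is exactly A_i^T B_j.
for i in range(k):
    for j in range(k):
        assert np.array_equal(rhs[i + k * j], A_blk[i].T.dot(B_blk[j]) % p)
```

Object-dtype arrays keep all arithmetic in exact Python integers, avoiding overflow; a real implementation would use dedicated finite-field kernels and the fast near-linear decoding mentioned below.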

Complexity

  • PSMM: Encoding involves $N\cdot\mathcal{O}((k + t)m^2/k)$ operations; local multiplication totals $N\cdot\mathcal{O}(m^3/k^2)$ across agents; decoding uses $\mathcal{O}((m/k)^2 N^2)$ (naive) or near-linear (fast) methods.
  • LA-PSMM: Encoding/decoding overheads are unchanged. Local multiplication reduces to $N\cdot\mathcal{O}(T_l m^2 / k)$, provided $T_l \ll (m/k)^3$.

Speedup is observed when $T_l \ll m/k$, with empirical results reaching up to $80\%$ local speedup for $m=4096$, and speedup scaling approximately linearly with $m$ (e.g., a $5\times$ reduction for $m=8192$). This suggests LA-PSMM achieves substantial gains for large matrix dimensions and moderate partition factors (He et al., 14 Jan 2026).

6. Security Analysis and Theoretical Guarantees

The privacy of LA-PSMM is founded on:

  • The masking lemma, ensuring that any $t-1$ polynomial evaluations are statistically independent of $A, B$; thus any subset of $t-1$ colluding agents observes fully random, independent shares.
  • Operator-invariance, as learned bilinear expansions in LA-PSMM are constructed to be exactly equivalent to the standard multiplication for all field inputs, ensuring that the polynomial masking structure and critical “signal” exponents (those from which $A_i^\top B_j$ are recovered) remain unchanged.
  • The recovery threshold remains $N \geq N^\star(k, t)$, with the Vandermonde interpolation problem remaining full-rank due to the polynomial structure and random block selection.
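In the simplest scalar case ($k = 1$, $t = 2$: one signal coefficient, one uniform mask), the masking lemma reduces to a one-time-pad argument that can be verified exhaustively over a tiny field:

```python
p = 7  # tiny prime field, small enough to enumerate exhaustively

# g(x) = A1 + R * x with R uniform over F_p; a single agent sees g(alpha).
# For every nonzero alpha, each input A1 induces the *same* uniform share
# distribution, so one share carries zero information about A1.
for alpha in range(1, p):
    for A1 in range(p):
        shares = sorted((A1 + R * alpha) % p for R in range(p))
        assert shares == list(range(p))   # exactly uniform over F_p
print("one evaluation reveals nothing about the input")
```

The general lemma extends this bijection argument to $t-1$ evaluations masked by $t-1$ independent uniform blocks.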

The LA-PSMM protocol thus inherits the perfect privacy, optimal recovery, and correctness guarantees of the original PSMM, while introducing no new vulnerabilities (He et al., 14 Jan 2026).

7. Empirical Evaluation and Implications

Experimental benchmarks, conducted over $\F_p$ for large prime $p$ and square matrices of size $512 \le m \le 8192$, demonstrate:

  • For $k = 8$, $t = 4$, and learned rank $T_l \approx 0.1(m/k)$ (using AlphaTensor-style reinforcement learning), LA-PSMM local computation times are reduced to $\sim20\%$ of conventional PSMM for $m = 4096$.
  • Speedup, defined as PSMM time divided by LA-PSMM time, increases almost linearly with $m$, reaching approximately $5\times$ for $m=8192$.
  • Wall-clock times (excluding network latency) confirm the scalability of LA-PSMM's local computation cost advantage for increasing matrix dimensions.

A plausible implication is that the practical cost of perfect secrecy can now be substantially reduced in large-scale collaborative or distributed settings, provided suitable low-rank bilinear decompositions are available and exact (He et al., 14 Jan 2026).


Learning-augmented PSMM synthesizes block-masked, information-theoretically secure MPC protocols with advances in learning-based tensor decompositions. This union enables scalable, perfectly secure collaborative matrix multiplication under strong adversarial models, with computational efficiency improvements that scale with problem size, offering a highly practical primitive for coded computing with robust privacy-preserving guarantees (He et al., 14 Jan 2026).
