Two-Bit Mutual Information in AIT

Updated 22 November 2025
  • Two-bit mutual information is defined as $I(x:y) = C(x) + C(y) - C(x,y)$, capturing the shared information between finite binary strings as the maximal secret key length.
  • The approach leverages interactive protocols with shared randomness and a Kolmogorov-Slepian–Wolf compressor to enable nearly optimal secret key agreement.
  • Tight lower and upper bounds illustrate that achieving optimal key rates necessitates communication near conditional complexity thresholds, with results precise up to logarithmic terms.

The two-bit operational characterization of mutual information in algorithmic information theory (AIT) assigns a concrete, cryptographically meaningful interpretation to Kolmogorov's plain mutual information $I(x:y)$ for finite binary strings $x$ and $y$. Up to logarithmic precision, the mutual information $I(x:y) = C(x) + C(y) - C(x,y)$ corresponds exactly to the maximal length of a shared secret key that two parties, each holding one string and the complexity profile of the pair, can agree upon when interacting over a public channel using probabilistic protocols. This result provides a quantitative bridge between AIT and the notion of privacy amplification as studied in cryptography, demonstrating that mutual information determines protocol limits for secret key agreement in a one-shot, non-stochastic setting (Romashchenko et al., 2017).

1. Algorithmic Mutual Information and Plain Kolmogorov Complexity

Let $C(x)$ denote the plain Kolmogorov complexity of a binary string $x$ and $C(x,y)$ the joint complexity of the pair $(x, y)$. The algorithmic mutual information is then given by

$$I(x:y) = C(x) + C(y) - C(x, y).$$

This definition satisfies $I(x:y) \ge 0$ and, by the Kolmogorov chain rule,

$$C(x, y) = C(x) + C(y \mid x) \pm O(\log C(x, y)),$$

which ensures all precision errors are within $O(\log n)$ for $|x|, |y| \le n$. The plain and prefix complexity ($K(\cdot)$) formulations are interchangeable up to $O(\log n)$ additive slack. This operational view of $I(x:y)$ applies regardless of probability distributions; it characterizes the information shared by individual strings (Romashchenko et al., 2017).
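
Since plain Kolmogorov complexity $C(\cdot)$ is uncomputable, any executable illustration must substitute a real compressor as an upper-bound proxy. The following sketch, using Python's `zlib` purely for illustration (the strings and constants are invented, not from the paper), shows how the formula $I(x:y) = C(x) + C(y) - C(x,y)$ behaves under such a proxy:

```python
import zlib

def c(data: bytes) -> int:
    """Upper-bound proxy for the plain complexity C(data): compressed length."""
    return len(zlib.compress(data, 9))

def mutual_information(x: bytes, y: bytes) -> int:
    """Compression-based proxy for I(x:y) = C(x) + C(y) - C(x,y)."""
    return c(x) + c(y) - c(x + y)

x = b"the quick brown fox jumps over the lazy dog" * 20
y = x[:-100] + b"some divergent suffix content here padding"  # correlated with x
z = bytes(range(256)) * 4                                     # unrelated string

# Correlated strings share far more (proxy) information than unrelated ones.
assert mutual_information(x, y) > mutual_information(x, z)
```

With a practical compressor the estimate is only heuristic, but correlated strings still score markedly higher than unrelated ones, mirroring the intuition behind the definition.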

2. Communication Model and Complexity Profile

In the proposed model, two parties engage in interactive communication to agree upon a secret key:

  • Inputs: Alice is given $(x, h)$, Bob is given $(y, h)$, where the complexity profile $h = (C(x), C(y), C(x, y))$ is known to both.
  • Protocol: The protocol proceeds in $k$ public rounds using a shared random seed $r$. In each round, Alice and Bob exchange messages $(x_i, y_i)$, where each message depends on the party's input string, the shared randomness, and the previous transcript.
  • Secret Key Requirement: Both compute an output $z$ using their input, $r$, and the full transcript $t = (x_1, y_1, \ldots, x_k, y_k)$. The protocol is correct if:

    • (i) With probability $1-\epsilon$, both outputs match: $\Pr[A(x, r, t) = B(y, r, t)] \ge 1-\epsilon$.
    • (ii) The agreed key $z$ is almost incompressible given $(t, r)$:

    $$C(z \mid t, r) \ge |z| - \delta,$$

    with deficiency $\delta = O(\log n)$. Thus, $z$ is nearly uniformly random conditioned on the public transcript (Romashchenko et al., 2017).
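
The round structure above can be sketched as a small driver. This is a structural illustration only; the `Profile` fields, callback signatures, and the trivial instance at the end are assumptions for the sketch, not the paper's formalism:

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Profile:
    """Complexity profile h = (C(x), C(y), C(x,y)), known to both parties."""
    cx: int
    cy: int
    cxy: int

    def mutual_information(self) -> int:
        return self.cx + self.cy - self.cxy

def run_protocol(x: str, y: str, h: Profile, rounds: int, seed: int,
                 alice_msg: Callable, bob_msg: Callable,
                 derive_key: Callable) -> Tuple[str, str, List[str]]:
    """Drive k public rounds over a shared seed r, then let each party
    derive its output z from (own input, r, full transcript t)."""
    r = random.Random(seed)                    # shared randomness r
    transcript: List[str] = []
    for _ in range(rounds):
        transcript.append(alice_msg(x, h, r, transcript))
        transcript.append(bob_msg(y, h, r, transcript))
    z_alice = derive_key(x, seed, transcript)
    z_bob = derive_key(y, seed, transcript)
    return z_alice, z_bob, transcript

# Trivial (non-secret) instance, just to exercise the interface:
# Alice publishes x, and both sides read the "key" off the transcript.
za, zb, t = run_protocol(
    "1011", "1001", Profile(cx=4, cy=4, cxy=6), rounds=1, seed=7,
    alice_msg=lambda x, h, r, t: x,
    bob_msg=lambda y, h, r, t: "ack",
    derive_key=lambda inp, s, t: t[0],
)
assert za == zb == "1011"
```

The point of the skeleton is that the derived key may depend only on a party's own input, the shared seed, and the public transcript; the secrecy requirement (ii) is what the trivial instance deliberately fails to meet.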

3. Optimal Secret Key Length: Achievability and Impossibility

The main result consists of tight lower and upper bounds for achievable key length under the above model.

Lower Bound (Achievability)

For every $\epsilon > 0$, there exists a one-round randomized protocol with public randomness such that, for any inputs $x, y$ of length $n$, with probability $1-\epsilon$, Alice and Bob agree on a key $z$ with

$$|z| \ge I(x:y) - O\left(\log \frac{n}{\epsilon}\right),$$

and secrecy deficiency $O(\log(1/\epsilon))$. The necessary public communication is

$$\min\{C(x \mid y), C(y \mid x)\} + O\left(\log \frac{n}{\epsilon}\right).$$

The protocol relies on a Kolmogorov–Slepian–Wolf compressor: Alice sends a “fingerprint” of her input, compressible to $C(x \mid y)$ bits, enabling Bob to reconstruct $x$. Both parties then apply an extractor-based procedure to derive $z$ of length near $I(x:y)$. The output $z$ is, up to logarithmic slack, random conditioned on the public transcript and random seed (Romashchenko et al., 2017).
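
A minimal runnable analogue of this construction, restricted to the bounded-Hamming-distance special case discussed in Section 6, uses random GF(2)-linear hashes as stand-ins for both the Slepian–Wolf fingerprint and the extractor. All names and parameter choices below are illustrative assumptions, not the paper's construction:

```python
import random
from itertools import combinations

def gf2_hash(bits: str, m: int, seed: int) -> tuple:
    """m random GF(2) inner products of `bits`; same seed -> same hash rows."""
    rng = random.Random(seed)
    rows = [[rng.randrange(2) for _ in bits] for _ in range(m)]
    return tuple(sum(r * int(b) for r, b in zip(row, bits)) % 2 for row in rows)

def agree_on_key(x: str, y: str, d: int, seed: int, key_len: int):
    """Toy one-round agreement: Bob holds y within Hamming distance d of x."""
    n = len(x)
    m = 2 * d * n.bit_length() + 8        # fingerprint length ~ C(x|y) + slack

    fingerprint = gf2_hash(x, m, seed)    # Alice's single public message

    # Bob enumerates his uncertainty set and matches the fingerprint.
    candidates = []
    for k in range(d + 1):
        for pos in combinations(range(n), k):
            bits = list(y)
            for p in pos:
                bits[p] = "1" if bits[p] == "0" else "0"
            candidates.append("".join(bits))
    x_hat = next(c for c in candidates if gf2_hash(c, m, seed) == fingerprint)

    # Both sides "extract" the key by hashing (the reconstruction of) x.
    key_alice = gf2_hash(x, key_len, seed + 1)
    key_bob = gf2_hash(x_hat, key_len, seed + 1)
    return key_alice, key_bob

ka, kb = agree_on_key("101101001110", "101100001110", d=1, seed=42, key_len=6)
assert ka == kb
```

Here Bob's brute-force enumeration plays the role of the (generally inefficient) Kolmogorov decoder, and the second hash mimics the extractor step; a genuine instantiation would tie `m` to $C(x \mid y)$ and `key_len` to $I(x:y)$.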

Upper Bound (Impossibility)

No $k$-round protocol with public randomness and error $\epsilon$, for any fixed $k$, can succeed with $|z| > I(x:y) + \delta(n) + O\left(\log \frac{n}{\epsilon}\right)$, where $\delta(n)$ is the secrecy deficiency. This follows from an “information non-increase” argument: combinatorial-rectangle lemmas show that

$$C(z \mid t) \le I(x, r : y, r \mid t) \le I(x:y) + O(\log n).$$

Thus, $I(x:y)$ is a sharp threshold for the length of the shared secret (Romashchenko et al., 2017).

4. Communication Complexity Thresholds

If the protocol communicates fewer than $(1-\delta_1)\min\{C(x \mid y), C(y \mid x)\}$ bits, then for any constants $\delta_1, \delta_2 > 0$ and any nontrivial success probability $\epsilon > 0$, no secret key satisfying

$$C(z \mid t, r) > \delta_2\, I(x:y)$$

can be achieved. This statement is established using deep AIT lower bounds (Muchnik, Razenshteyn) on extractable common information. Therefore, communication below the Slepian–Wolf threshold forces the resulting key to be vanishingly short (Romashchenko et al., 2017).

5. Role of the Complexity Profile

The complexity profile of a pair $(x, y)$ is

$$(C(x), C(y), C(x, y)),$$

or, equivalently, the tuple $(C(x_V))_{\emptyset \neq V \subseteq \{1,2\}}$, where $x_V$ denotes the subtuple indexed by $V$. Knowledge of the profile is essential for protocol optimality: it permits each party to determine suitable compression/fingerprint lengths, ensuring that minimal communication suffices for input reconstruction and maximal key agreement. Without the profile, protocol parameters cannot be chosen optimally (Romashchenko et al., 2017).
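
As an arithmetic illustration of how the profile fixes the protocol parameters (ignoring all $O(\log n)$ slack terms; the concrete numbers are invented):

```python
def protocol_parameters(cx: int, cy: int, cxy: int) -> dict:
    """Derive target lengths from the profile h = (C(x), C(y), C(x,y)).
    Logarithmic slack terms are omitted in this illustration."""
    c_x_given_y = cxy - cy      # chain rule: C(x|y) ~ C(x,y) - C(y)
    c_y_given_x = cxy - cx
    return {
        "communication": min(c_x_given_y, c_y_given_x),
        "key_length": cx + cy - cxy,   # I(x:y)
    }

# Example: two 1000-bit strings whose joint complexity is 1600 bits.
p = protocol_parameters(cx=1000, cy=1000, cxy=1600)
assert p["key_length"] == 400      # I(x:y) = 1000 + 1000 - 1600
assert p["communication"] == 600   # min{C(x|y), C(y|x)}
```

This is exactly the computation neither party could perform without knowing $h$: the fingerprint length and the key length are both read off the profile.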

6. Precision, Limitations, and Operational Significance

All foregoing bounds hold up to additive $O(\log n)$ terms, or $O(\log(n/\epsilon))$ when tracking the error probability $\epsilon$. The achievable protocol is computable but not generally time-efficient, requiring brute-force decoding; in special cases (e.g., strings within bounded Hamming distance), polynomial time is attainable by using error-correcting codes in place of Kolmogorov compressors. The upper bound on key length applies only to protocols with public randomness; the private-randomness scenario remains open. In conclusion, the result provides an operational meaning to algorithmic mutual information: up to logarithmic precision,

$$I(x:y) = C(x) + C(y) - C(x, y)$$

characterizes the secret key rate for public-channel agreement protocols. This operational and cryptographic perspective on AIT unifies two fundamental notions of information and privacy in a protocol-independent, non-stochastic framework (Romashchenko et al., 2017).

