
Unbiased Coin-Flipping Estimators

Updated 25 December 2025
  • The paper leverages first-passage properties in biased random walks to construct unbiased coin-only estimators for π with dramatically improved variance and sample complexity.
  • It employs closed-form integral representations and combinatorial analysis to derive exact expectation and variance formulas under various coin biases.
  • The methodology significantly outperforms classical approaches by achieving exponential convergence rates and reducing computational cost through coin-flip processes.

Unbiased coin-flipping estimators are algorithmic constructions in which random coin flips, often of biased coins, are used to build unbiased statistical estimators for quantities of analytic, geometric, or probabilistic significance. Recent advances exploit first-passage properties of biased random walks to yield highly efficient, coin-only estimators for transcendental constants such as π, leveraging explicit probabilistic and combinatorial analysis of the stopping times and win frequencies of such walks. This approach offers dramatic improvements in variance and sample complexity over classical methods, and admits exact closed-form integral and finite-sum expressions for both the expectation and the variance of the underlying estimators (Bruss et al., 24 Dec 2025).

1. Probabilistic Model: First-Passage Time in Biased Simple Random Walks

Let $\{X_n : n \geq 1\}$ be i.i.d. random variables with $P[X_n = +1] = p$ and $P[X_n = -1] = q = 1 - p$, where $p \in [1/2, 1)$. The simple random walk $S_n$ is given by

$$S_0 = 0, \quad S_n = \sum_{k=1}^n X_k, \quad n \geq 1.$$

For an integer $d \geq 1$, the first-passage time to level $d$ is

$$N_d = \inf\{n \geq 0 : S_n = d\}.$$

At stopping, the number of $+1$ steps (“wins”) is $W_d = (N_d + d)/2$, since wins minus losses must equal $d$ while wins plus losses equal $N_d$; the normalized win rate is $W_d / N_d$.
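As a concrete illustration, the walk and its stopping statistics can be simulated directly. The helper below is a minimal sketch (function names are ours, not from the paper); it checks the bookkeeping identity that the number of wins at stopping equals $(N_d + d)/2$.

```python
import random

def first_passage(d, p, rng):
    """Run the biased walk S_n until it first hits level d.
    Returns (N_d, wins): the stopping time and the number of +1 steps."""
    s, n, wins = 0, 0, 0
    while s < d:
        step = 1 if rng.random() < p else -1
        s += step
        n += 1
        wins += step == 1
    return n, wins

rng = random.Random(0)
n, wins = first_passage(d=3, p=0.6, rng=rng)
# wins - losses = d and wins + losses = N_d, so wins = (N_d + d)/2 exactly
# (N_d + d is always even).
assert wins == (n + 3) // 2
```

For $p > 1/2$ the walk reaches level $d$ almost surely and quickly; for $p = 1/2$ it still reaches $d$ almost surely, but with heavy-tailed stopping times.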

2. Exact Integral Formulas for Expectation and Variance

The core relation

$$\frac{W_d}{N_d} = \frac{1}{2} + \frac{d}{2 N_d}$$

yields that

$$E\left[\frac{W_d}{N_d}\right] = \frac{1}{2} + \frac{d}{2}\, E\left[\frac{1}{N_d}\right].$$

Closed-form integral representations are available via the identity $\frac{1}{n} = \int_0^1 s^{n-1}\, ds$ and leveraging the generating function for the first-passage time. For $p \in [1/2, 1)$ and $d \geq 1$,

$$E\left[s^{N_d}\right] = \phi(s)^d, \qquad \phi(s) = \frac{1 - \sqrt{1 - 4pq s^2}}{2qs},$$

$$E\left[\frac{1}{N_d}\right] = \int_0^1 E\left[s^{N_d - 1}\right] ds = \int_0^1 \frac{\phi(s)^d}{s}\, ds.$$

Thus,

$$E\left[\frac{W_d}{N_d}\right] = \frac{1}{2} + \frac{d}{2} \int_0^1 \frac{\phi(s)^d}{s}\, ds,$$

and, using $\frac{1}{n^2} = \int_0^1\!\!\int_0^1 (st)^{n-1}\, ds\, dt$, the second moment needed for the variance is

$$E\left[\frac{1}{N_d^2}\right] = \int_0^1\!\!\int_0^1 \frac{\phi(st)^d}{st}\, ds\, dt.$$

These formulas (Theorem 1) are valid for all $p \in [1/2, 1)$ and $d \geq 1$ (Bruss et al., 24 Dec 2025).
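The integral representation can be checked numerically. The sketch below (names ours) uses the standard first-passage generating function $\phi(s) = (1 - \sqrt{1 - 4pqs^2})/(2qs)$ and a midpoint rule to approximate $E[1/N_d]$; for the fair coin and $d = 1$ the integral evaluates to $\pi/2 - 1 \approx 0.5708$.

```python
import math

def phi(s, p):
    """First-passage generating function E[s^{N_1}] for a p-biased step."""
    q = 1.0 - p
    return (1.0 - math.sqrt(1.0 - 4.0 * p * q * s * s)) / (2.0 * q * s)

def inv_moment(d, p, m=200_000):
    """Midpoint-rule approximation of E[1/N_d] = ∫_0^1 φ(s)^d / s ds."""
    h = 1.0 / m
    return h * sum(phi((k + 0.5) * h, p) ** d / ((k + 0.5) * h) for k in range(m))

print(inv_moment(1, 0.5))              # ≈ 0.5708 = π/2 − 1
print(0.5 + 0.5 * inv_moment(1, 0.5))  # ≈ 0.7854 = π/4
```

The second print line instantiates the expectation formula at $p = 1/2$, $d = 1$: the mean win rate of a fair walk stopped at level 1 is $\pi/4$.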

3. Construction of Unbiased Estimators for $\pi$

For $p = 1/2$ (fair coin), the expectation reduces to

$$E\left[\frac{W_d}{N_d}\right] = \frac{1}{2} + \frac{d}{2} \int_0^1 \frac{\left(1 - \sqrt{1 - s^2}\right)^d}{s^{d+1}}\, ds,$$

with

  • $d = 1$: $E[W_1/N_1] = \pi/4$
  • $d = 2$: $E[W_2/N_2] = \ln 2$
  • $d = 3$: $E[W_3/N_3] = 3 - \tfrac{3\pi}{4}$, etc. For all odd $d$, closed forms employing arctangent are available:

$$E\left[\frac{W_d}{N_d}\right] = a_d + b_d\, \pi, \qquad a_d, b_d \in \mathbb{Q},\ b_d \neq 0,$$

with the $\pi$ contribution entering through $\arctan(1) = \pi/4$.

Setting $d = 1$ and rearranging yields an unbiased estimator of $\pi$:

$$\hat{\pi} = \frac{4 W_1}{N_1} = 2 + \frac{2}{N_1},$$

with $E[\hat{\pi}] = \pi$.

Variance follows as

$$\operatorname{Var}(\hat{\pi}) = 4\, E\left[\frac{1}{N_1^2}\right] - (\pi - 2)^2.$$

For odd $d \geq 3$, similar formulas apply. A key optimization is to use a bias $p > 1/2$ (i.e., $p > q$), so that

$$E[N_d] = \frac{d}{p - q} = \frac{d}{2p - 1} < \infty,$$

leading to a corresponding family of estimators (Theorem 3 of (Bruss et al., 24 Dec 2025))

$$\hat{\pi}_{d,p} = \alpha_{d,p} + \beta_{d,p}\, \frac{W_d}{N_d},$$

with inversion constants $\alpha_{d,p}, \beta_{d,p}$ chosen so that $E[\hat{\pi}_{d,p}] = \pi$.
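A minimal Monte Carlo sketch of the fair-coin estimator $\hat{\pi} = 2 + 2/N_1$ (the function name and the truncation cap are ours, not from the paper): since $E[N_1] = \infty$ for the fair coin, runs are capped and the rare truncated runs discarded, which introduces a small bias, on the order of $10^{-2}$ for the cap used here, while leaving the averaging behaviour visible.

```python
import math, random

def pi_hat(rng, cap=10_000):
    """One draw of the estimator π̂ = 2 + 2/N_1 (fair coin, level d = 1).
    Runs exceeding `cap` steps are discarded: the untruncated estimator is
    exactly unbiased, but E[N_1] = ∞, so a cap keeps the simulation finite."""
    s, n = 0, 0
    while s < 1:
        if n >= cap:
            return None
        s += 1 if rng.getrandbits(1) else -1
        n += 1
    return 2.0 + 2.0 / n

rng = random.Random(42)
draws = [x for x in (pi_hat(rng) for _ in range(10_000)) if x is not None]
print(sum(draws) / len(draws))  # close to π ≈ 3.1416
```

Note that each draw uses only fair coin flips; no real-valued randomness enters the estimator itself.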

4. Monotonicity, Parameter Regimes, and Variance-Cost Trade-off

Expectation $E[W_d/N_d]$ is strictly decreasing in $d$ for fixed $p$, and strictly increasing in $p$ for fixed $d$; as $d \to \infty$, $E[W_d/N_d] \to p$ and $\operatorname{Var}(W_d/N_d) \to 0$. Higher $d$ yields smaller variance at the cost of increased sample complexity.

For $p > 1/2$, the expected number of coin flips is

$$E[N_d] = \frac{d}{2p - 1},$$

significantly less than for $p = 1/2$, for which $E[N_d] = \infty$ due to heavy tails. The variance of the resulting estimator decays exponentially in $d$, implying that the sample size for achieving MSE $\varepsilon^2$ grows as $\log(1/\varepsilon)$, in contrast to the $\Theta(1/\varepsilon^2)$ requirement for i.i.d. averaging (Bruss et al., 24 Dec 2025).
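The flip-cost formula is easy to check by simulation; this sketch (names ours) estimates the mean number of flips for a biased walk and compares it with $d/(2p - 1)$.

```python
import random

def flips_to_level(d, p, rng):
    """Number of coin flips for a p-biased walk to first reach level d."""
    s, n = 0, 0
    while s < d:
        s += 1 if rng.random() < p else -1
        n += 1
    return n

rng = random.Random(7)
d, p = 3, 0.6
mean_flips = sum(flips_to_level(d, p, rng) for _ in range(20_000)) / 20_000
print(mean_flips)  # close to d/(2p - 1) = 15
```

With $p = 0.6$ each extra unit of bias sharply reduces the expected cost, consistent with the $1/(2p-1)$ scaling.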

To minimize computational cost for a selected variance, the bias $p$ should be pushed as far above $1/2$ as possible, subject to the arctangent constants in the resulting closed form remaining rationally related to $\pi$, so that an exact inversion for unbiasedness is maintained.

5. Comparison to Classical Coin-Based Estimation

Classical unbiased estimators of d1d \geq 17, such as Buffon's needle and mechanical devices, require real-valued function evaluations and do not exploit the strong algebraic structure of random walk first-passage times, resulting in much slower convergence. The coin-only estimators derived from first-passage win rates achieve exponentially accelerated convergence and reduced mean sample complexity, offering a significant improvement in purely combinatorial and coin-flip-based estimation of transcendental numbers (Bruss et al., 24 Dec 2025).
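For contrast, here is a minimal sketch of a classical real-valued Monte Carlo estimator (a quarter-circle indicator, standing in for Buffon-type geometric methods; it is our illustration, not from the paper): it is unbiased with per-sample variance $\pi(4 - \pi) \approx 2.70$, so its error shrinks only at the $1/\sqrt{m}$ i.i.d.-averaging rate.

```python
import math, random

rng = random.Random(3)
m = 100_000

# Classical geometric Monte Carlo: 4 · 1{U² + V² ≤ 1} with real-valued uniforms.
# Unbiased for π, but Var = π(4 − π) ≈ 2.70 per sample.
darts = [4.0 * (rng.random() ** 2 + rng.random() ** 2 <= 1.0) for _ in range(m)]
print(sum(darts) / m)  # close to π, at the slow 1/√m rate
```

Unlike the coin-only first-passage estimators, this scheme requires real-valued uniform randomness and has no tunable parameter that drives the per-sample variance down.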

6. Multi-Party Unbiased Coin-Flipping Protocols

In cryptographic and distributed computing contexts, fair multi-party coin-flipping protocols aim to output unbiased bits even under adversarial conditions. The protocol family constructed in (Buchbinder et al., 2021) achieves an $o(1/\sqrt{r})$ bias for $k$ parties and $r$ rounds, generalizing previous two- and three-party fair coin-flipping results. Protocols use secret sharing, defense shares, and resilient "defense-quality" functions to amplify fairness under adversarial aborts and deviations, exploiting online binomial game reductions and linear programming duality to tightly bound the achievable bias.

A plausible implication is that the algorithmic techniques for multi-party coin-flipping bias analysis, especially those using binomial processes and robust handling of adaptive adversarial leakage, bear methodological similarities to the probabilistic analysis of first-passage estimators—although the direct construction of analytic estimators for transcendental constants is a distinct problem area. Both domains illustrate advanced exploitation of underlying coin-flip process structure for statistical efficiency or cryptographic fairness (Buchbinder et al., 2021).

7. Summary Table: Key Estimator Features

| Estimator | Coin Bias | Expectation Formula | Variance Rate | Mean Flips |
| --- | --- | --- | --- | --- |
| Fair coin | $p = 1/2$ | $\pi/4$ for $d = 1$; general closed form for odd $d$ | exact integral form | $\infty$ |
| Biased coin | $p > 1/2$ | explicit sum and arctan | exponentially decaying in $d$ | $d/(2p - 1)$ |

The key advantages of unbiased coin-flipping estimators derived from first-passage random walk statistics are exact analytical tractability, dramatic variance minimization through bias selection, and highly efficient convergence to $\pi$ or related constants, setting a new standard for combinatorial random estimation (Bruss et al., 24 Dec 2025).
