
Near-Optimal Averaging Samplers and Matrix Samplers

Published 16 Nov 2024 in cs.CC and cs.DS (arXiv:2411.10870v1)

Abstract: We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta < \varepsilon$ and any constant $\alpha > 0$, our sampler uses $m + O(\log(1/\delta))$ random bits to output $t = O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$, \[ \Pr\left[\left|\frac{1}{t}\sum_{i=1}^{t} f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta. \] The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor. Our technique generalizes to matrix samplers, which are defined similarly except that $f: \{0, 1\}^m \to \mathbb{C}^{d \times d}$ and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity $m + \tilde{O}(\log(d/\delta))$ and sample complexity $O((\frac{1}{\varepsilon^2} \log \frac{d}{\delta})^{1 + \alpha})$ for any constant $\alpha > 0$; both are near-optimal, with only a logarithmic factor lost in randomness complexity and an additional $\alpha$ in the exponent of the sample complexity. We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction whose seed length is optimal up to an arbitrarily small constant factor above 1, when the min-entropy is $k = \beta n$ for a large enough constant $\beta < 1$.
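To make the $(\varepsilon, \delta)$ guarantee concrete, here is a minimal sketch of a *naive* averaging sampler that simply draws $t$ independent uniform samples. This is not the paper's construction: it spends $t \cdot m$ random bits rather than the near-optimal $m + O(\log(1/\delta))$, but by a Chernoff bound it satisfies the same kind of concentration guarantee, so it illustrates the object being optimized. The function `f` below (fraction of set bits) is an arbitrary illustrative choice.

```python
import random

def naive_averaging_sampler(m, t, f, rng=random):
    """Estimate E[f] over uniform {0,1}^m by averaging f on t samples.

    Drawing t independent samples costs t*m random bits -- far more than
    the m + O(log(1/delta)) bits of a near-optimal sampler, but the
    estimate concentrates around E[f] by standard Chernoff bounds.
    """
    samples = (rng.getrandbits(m) for _ in range(t))
    return sum(f(z) for z in samples) / t

# Illustrative f: {0,1}^m -> [0,1], the fraction of set bits; E[f] = 1/2.
m = 16
f = lambda z: bin(z).count("1") / m

# With t = 10_000 the empirical mean is within ~0.01 of 1/2 w.h.p.
estimate = naive_averaging_sampler(m, t=10_000, f=f, rng=random.Random(0))
```

The point of the paper is that the dependence on fresh randomness can be driven down to $m + O(\log(1/\delta))$ bits total, while keeping the sample count within an $O((\frac{1}{\varepsilon^2}\log\frac{1}{\delta})^{\alpha})$ factor of optimal.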
