Efficient Random Sampling -- Parallel, Vectorized, Cache-Efficient, and Online
Abstract: We consider the problem of sampling $n$ numbers from the range $\{1,\ldots,N\}$ without replacement on modern architectures. The main result is a simple divide-and-conquer scheme that makes sequential algorithms more cache efficient and leads to a parallel algorithm running in expected time $\mathcal{O}(n/p+\log p)$ on $p$ processors, i.e., it scales to massively parallel machines even for moderate values of $n$. The amount of communication between the processors is very small (at most $\mathcal{O}(\log p)$) and independent of the sample size. We also discuss modifications needed for load balancing, online sampling, sampling with replacement, Bernoulli sampling, and vectorization on SIMD units or GPUs.
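The divide-and-conquer idea from the abstract can be illustrated with a minimal sketch: split the range in half, draw from a hypergeometric distribution how many of the $n$ samples fall into the left half, and recurse on each half independently (which is what makes the scheme parallelizable and cache friendly). This is not the paper's implementation; the base-case threshold and the use of `numpy` are assumptions for illustration.

```python
import random
import numpy as np

rng = np.random.default_rng()

def sample_no_replacement(n, lo, hi, base=64):
    """Sample n distinct integers from [lo, hi) by recursive splitting.

    The number of samples landing in the left half of the range follows
    a hypergeometric distribution; once that count is drawn, the two
    halves can be processed independently (e.g., by different processors).
    """
    N = hi - lo
    assert 0 <= n <= N
    if N <= base:  # small range: fall back to a simple sequential method
        return random.sample(range(lo, hi), n)
    mid = lo + N // 2
    # How many of the n samples fall into [lo, mid)?
    left = int(rng.hypergeometric(mid - lo, hi - mid, n))
    return (sample_no_replacement(left, lo, mid, base)
            + sample_no_replacement(n - left, mid, hi, base))
```

In a parallel setting, each processor would follow only the recursion path leading to its own subrange, so the split counts are the only values that need to be shared, matching the small communication volume claimed in the abstract.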