Probability Redistribution Pruning Method
- The paper introduces the probability redistribution pruning method to optimize lattice enumeration by systematically allocating pruning radii to maintain a target success probability.
- It employs O(n²) algorithms to compute success probability and enumeration cost using truncated simplex volume integrations and cylinder-intersection estimates.
- The approach uses spline and modifying-constant interpolation to generate near-optimal pruning curves for practical lattice reduction and BKZ applications.
Probability redistribution pruning, as introduced in the context of lattice enumeration, denotes a family of techniques for optimizing the sequence of pruning coefficients in lattice vector enumeration algorithms. Its motivation is to minimize the expected computational cost while maintaining a specified probability of success in recovering the shortest lattice vector. The central framework is Gama–Nguyen–Regev's Extreme Pruning, which systematically adjusts the bounds on projected lengths at each stage of the search tree, probabilistically allocating the "pruning budget" across enumeration levels to achieve near-optimal performance (Aono, 2014).
1. Lattice Enumeration and Pruning Coefficients
Lattice enumeration for the Shortest Vector Problem (SVP) explores a search tree whose nodes correspond to partial coefficient vectors $(x_{n-k+1}, \dots, x_n)$. Each partial sum at depth $k$ is pruned if its projected Euclidean length exceeds a bound. Probability redistribution pruning utilizes a sequence of non-decreasing pruning coefficients $R_1 \le R_2 \le \dots \le R_n$ with $R_n = 1$. A node is pruned at level $k$ if $\|\pi_{n-k+1}(v)\| > R_k \cdot R$, where $R$ is the search radius, typically set to the Gaussian heuristic estimate for the shortest vector length. Choices of $R_k < 1$ reduce search cost but introduce a failure probability.
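The per-level pruning test described above can be sketched in Python; the function and variable names here are illustrative, not taken from the paper:

```python
def prune(partial_sq_norms, R, coeffs):
    """Return the first depth at which a branch is cut, or None if it survives.

    partial_sq_norms[k-1] holds the squared projected length of the partial
    assignment at depth k; coeffs is the non-decreasing pruning sequence
    R_1 <= ... <= R_n with R_n = 1; R is the overall search radius.
    (Illustrative sketch, not the paper's implementation.)
    """
    for k, sq in enumerate(partial_sq_norms, start=1):
        # Prune at level k when the squared projection exceeds (R_k * R)^2.
        if sq > (coeffs[k - 1] ** 2) * R ** 2:
            return k
    return None
```

With a loose first coefficient the branch survives; tightening it cuts the branch at depth 1.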
2. Probabilistic Analysis of Pruning: Success Probability and Cost
The design of pruning coefficients is grounded in two analytic quantities:
- Success Probability: Denoted $p_{\mathrm{succ}}$, it is the probability that the shortest vector survives all pruning tests,
$$p_{\mathrm{succ}} = \Pr\left[\textstyle\sum_{i=1}^{k} y_i^2 \le R_k^2 \ \text{ for all } 1 \le k \le n\right].$$
Under the heuristic that the shortest vector's direction is uniformly random on the $(n-1)$-sphere, this probability equals the relative measure of the "cylinder-intersection" $C_{R_1,\dots,R_n} = \{y \in \mathbb{R}^n : \sum_{i=1}^{k} y_i^2 \le R_k^2 \text{ for all } k\}$ within the unit ball.
- Enumeration Cost: The (expected) number of nodes visited is estimated as
$$N \approx \frac{1}{2} \sum_{k=1}^{n} \frac{V_k \cdot R^k}{\prod_{i=n-k+1}^{n} \|b_i^*\|},$$
where $V_k$ are $k$-dimensional cylinder-intersection volumes (at radius $1$) and $b_1^*, \dots, b_n^*$ is the Gram–Schmidt orthogonalization of the basis.
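Under the uniform-direction heuristic, $p_{\mathrm{succ}}$ can also be estimated directly by Monte Carlo sampling on the unit sphere. The sketch below (illustrative names; this is a sanity-check tool, not the paper's $O(n^2)$ algorithm) samples Gaussian vectors, normalizes them, and checks every prefix constraint:

```python
import random

def mc_success_probability(coeffs, trials=100_000, seed=0):
    """Monte Carlo estimate of p_succ for a pruning curve R_1 <= ... <= R_n.

    A uniform unit vector y survives iff sum_{i<=k} y_i^2 <= R_k^2 for all k;
    p_succ is the surviving fraction.  (Illustrative sketch.)
    """
    rng = random.Random(seed)
    n = len(coeffs)
    hits = 0
    for _ in range(trials):
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]     # Gaussian => uniform direction
        norm_sq = sum(c * c for c in y)
        acc, ok = 0.0, True
        for k in range(n):
            acc += y[k] * y[k] / norm_sq                # squared prefix of unit vector
            if acc > coeffs[k] ** 2 + 1e-12:
                ok = False
                break
        hits += ok
    return hits / trials
```

With no pruning ($R_k = 1$ for all $k$) every sample survives and the estimate is exactly 1.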
3. Fast Success-Probability and Enumeration-Cost Computation
Section 3.3 of (Aono, 2014) introduces $O(n^2)$-time algorithms for computing both $p_{\mathrm{succ}}$ and the enumeration cost $N$.
- Success Probability Computation: The exact probability is reduced to volume computations of truncated simplices. Inductive integration defines a family of polynomials through a recurrence in which each polynomial is obtained by integrating its predecessor over the admissible interval determined by the truncation bounds. From this recurrence, the table of polynomial coefficients is computed in $O(n^2)$ operations, and the truncated-simplex volume yields $p_{\mathrm{succ}}$.
- Enumeration Cost Computation: For even $k$, the cylinder-intersection volume $V_k$ is computed exactly from a truncated-simplex volume scaled by the volume $\pi^{k/2}/(k/2)!\,r^k$ of the $k$-ball of radius $r$. For odd $k$, $V_k$ is bounded by linear interpolation between the neighboring even slices. The cost is assembled by summing these terms, terminating early if a partial sum already exceeds the current best cost.
Pseudocode for the overall cost computation routine is provided in the source and is directly implemented as described (Aono, 2014).
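The ball-volume factor and the cost summation can be sketched as follows. The relative cylinder-intersection volumes `vrel` are assumed as hypothetical precomputed inputs (in the paper they come from the truncated-simplex computation); the helper names are illustrative:

```python
import math

def ball_volume(k, r=1.0):
    """Volume of the k-dimensional Euclidean ball of radius r:
    pi^(k/2) / Gamma(k/2 + 1) * r^k."""
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1) * r ** k

def enumeration_cost(vrel, R, gs_norms):
    """Assemble the cost estimate N ~ (1/2) * sum_k V_k R^k / prod ||b_i^*||.

    vrel[k-1] is the fraction of the unit k-ball surviving the pruning
    bounds (hypothetical input for illustration); gs_norms are the
    Gram-Schmidt lengths ||b_1^*||, ..., ||b_n^*||.
    """
    n = len(gs_norms)
    total, denom = 0.0, 1.0
    for k in range(1, n + 1):
        denom *= gs_norms[n - k]          # product of the last k GS lengths
        total += 0.5 * vrel[k - 1] * ball_volume(k, R) / denom
    return total
```

For `vrel` identically 1 (no pruning), radius 1, and unit Gram–Schmidt lengths in dimension 2, the sum reduces to $\frac{1}{2}(2 + \pi)$.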
4. Optimization of Pruning Coefficients
A core contribution is a practical method for finding near-optimal coefficients $R_1, \dots, R_n$ for any relevant choice of dimension $n$, block-size $\beta$, and target success probability $p_0$. This is achieved as follows:
- For each target success probability $p_0$ in a grid, a randomized "perturb-and-modify" search optimizes 16 defining points. These anchor points are spline-interpolated to obtain the full sequence $R_1, \dots, R_n$, and the result is constrained so that $R_n = 1$.
- The table below illustrates sample optimized defining points for dimensions $n = 60$, $80$, $100$, $120$, and $140$:

| $i$ | $n=60$ | $n=80$ | $n=100$ | $n=120$ | $n=140$ |
|---|---|---|---|---|---|
| 0 | 0.0214 | 0.01641 | 0.0324 | 0.0098 | 0.1318 |
| 1 | 0.1208 | 0.1385 | 0.1270 | 0.1437 | 0.1859 |
| ... | ... | ... | ... | ... | ... |
| 16 | 1.0000 | 1.0007 | 1.0000 | 1.0000 | 1.0000 |
- Direct use of the interpolated coefficients may yield a success probability differing from the target $p_0$ by up to 10%. A “modifying constant” is introduced to blend lower and upper probability bounds so that the achieved probability matches $p_0$. At runtime, the pruning curve is linearly blended between the nearest precomputed tables according to the interpolated modifying constant.
After this procedure, the empirical error is about 1% in computational steps for the tested parameter ranges.
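The anchor-point expansion step can be sketched as below. The paper uses spline interpolation; this simplified stand-in substitutes numpy's piecewise-linear `np.interp` and then enforces the monotonicity and endpoint constraints:

```python
import numpy as np

def expand_defining_points(points, n):
    """Expand optimized defining points (indices 0..16) to a full
    pruning curve R_1, ..., R_n.

    Simplified sketch: the source spline-interpolates the anchors;
    here piecewise-linear np.interp is used instead.  The result is
    made non-decreasing and pinned to R_n = 1.
    """
    anchors = np.linspace(0.0, 1.0, len(points))   # anchor positions in [0, 1]
    grid = np.linspace(0.0, 1.0, n)                # one position per level
    curve = np.interp(grid, anchors, points)
    curve = np.maximum.accumulate(curve)           # enforce R_1 <= ... <= R_n
    curve[-1] = 1.0                                # enforce R_n = 1
    return curve
```

The returned curve has one coefficient per enumeration level and satisfies the constraints required by the pruning test.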
5. Structure and Behavior of Optimized Pruning Curves
Optimized pruning curves $R_k$ as a function of the level index $k$ exhibit characteristic features:
- For all practical dimensions and target probabilities, $R_k$ rises slowly up to a "knee" partway through the index range, then increases sharply to near its maximum of $1$.
- A lower $p_0$ allows tighter pruning (smaller radii), while a higher $p_0$ necessitates less aggressive pruning to guarantee the target probability of success.
6. Practical Implementation of Probability Redistribution Pruning
The procedure for implementing probability redistribution pruning follows directly from the algorithmic description:
- Precompute or download the coefficient table for the block-size $\beta$ and target probability $p_0$.
- For the given lattice dimension $n$, employ spline and modifying-constant interpolation to compute defining points and corresponding pruning radii, enforcing $R_n = 1$.
- Compute the search radius $R$ as the Gaussian heuristic estimate from the Gram–Schmidt lengths $\|b_i^*\|$.
- At each node in enumeration, prune if $\|\pi_{n-k+1}(v)\| > R_k \cdot R$ for depth $k$.
- In dynamic BKZ routines, update the pruning coefficients whenever the basis or block-size changes.
- Optionally, validate the empirical survival probability against $p_0$ using random directions on the unit sphere $S^{n-1}$, and make minor adjustments.
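The radius step of the workflow can be sketched as follows, using the standard Gaussian-heuristic formula $\mathrm{GH}(L) = \Gamma(n/2+1)^{1/n} \det(L)^{1/n} / \sqrt{\pi}$ with $\det(L) = \prod_i \|b_i^*\|$. The helper names are illustrative, and log-domain arithmetic avoids overflow in high dimension:

```python
import math

def gaussian_heuristic_radius(gs_norms):
    """Gaussian-heuristic estimate of the shortest vector length from the
    Gram-Schmidt lengths: Gamma(n/2 + 1)^(1/n) / sqrt(pi) * det(L)^(1/n)."""
    n = len(gs_norms)
    log_det = sum(math.log(x) for x in gs_norms)
    log_gh = (math.lgamma(n / 2 + 1) + log_det) / n - 0.5 * math.log(math.pi)
    return math.exp(log_gh)

def pruning_bounds(coeffs, gs_norms):
    """Per-depth pruning bounds R_k * R for a coefficient curve.
    (Illustrative helper combining the radius and the pruning curve.)"""
    R = gaussian_heuristic_radius(gs_norms)
    return [c * R for c in coeffs]
```

For a unit-determinant basis in dimension 2 this gives $R = 1/\sqrt{\pi}$, the radius of the area-1 disk.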
The entire workflow leverages the cost-and-probability subroutines and interpolation to generate near-optimal pruning schemes efficiently (Aono, 2014).
7. Significance and Broader Context
Probability redistribution pruning, grounded in the Gama–Nguyen–Regev framework, provides a principled methodology for balancing enumeration cost with success probability in high-dimensional lattice problems. The algorithmic contributions in efficient probability and cost evaluation, as well as interpolation-based coefficient synthesis, enable practical deployment in lattice reduction and SVP solvers, especially in blockwise reduction frameworks like BKZ. The empirically-validated error bounds and rapid runtime underline its relevance for cryptanalytic applications and research on the hardness of lattice problems.