
Foundations of Vector Retrieval

Published 17 Jan 2024 in cs.DS and cs.IR (arXiv:2401.09350v1)

Abstract: Vectors are universal mathematical objects that can represent text, images, speech, or a mix of these data modalities. That happens regardless of whether data is represented by hand-crafted features or learnt embeddings. Collect a large enough quantity of such vectors and the question of retrieval becomes urgently relevant: Finding vectors that are more similar to a query vector. This monograph is concerned with the question above and covers fundamental concepts along with advanced data structures and algorithms for vector retrieval. In doing so, it recaps this fascinating topic and lowers barriers of entry into this rich area of research.


Summary

  • The paper introduces novel quantization and sketching methods to reduce memory footprint while accelerating high-dimensional vector retrieval.
  • It details structured approaches like product, optimized, and additive quantization, balancing accuracy with computational cost.
  • It analyzes sketching strategies, including JL transforms and importance sampling, to maintain efficient approximations in large-scale systems.

Vector Compression and Sketching for Efficient Vector Retrieval

Introduction

The "Foundations of Vector Retrieval" monograph dedicates significant attention to the storage and computational efficiency challenges associated with large-scale vector databases, especially in high dimensions. The compression and sketching chapters present a comprehensive theoretical and algorithmic treatment of data-oblivious and data-driven methods for reducing memory footprint and computational cost, with a focus on quantization and sketching as primary paradigms for vector representation compression. This essay presents an in-depth synthesis of the design, properties, and analysis of such techniques as described in the monograph, with attention to consequences for retrieval systems, scalability, and new research opportunities.


Quantization as Structured Vector Compression

Quantization is introduced as an extension of clustering-based retrieval, formalizing the assignment of vectors to compact codebooks that capture data geometry in a lossy fashion. Letting $\zeta: \mathbb{R}^d \rightarrow [C]$ denote a quantizer, all data vectors assigned to cluster $i$ are approximated by codeword $\mu_i$, leading to a compressed storage model with asymptotic size $O(Cd + m \log_2 C)$ for a collection of $m$ vectors. This enables computation of point-to-query distances via inexpensive table lookups, rather than brute-force evaluation.

The reconstruction error for a codebook is the mean squared error $\mathbb{E}[\| \mu_{\zeta(U)} - U \|_2^2]$, which directly connects quantization to Lloyd-optimal KMeans clustering, guaranteeing asymptotic minimization of the quantization distortion under standard conditions. This establishes vector quantization as a theoretically sound approach for isotropic $L_2$ or $L_1$ spaces.
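To make the clustering view concrete, here is a minimal NumPy sketch of Lloyd-style codebook training and the resulting table-lookup distance evaluation. The function names are illustrative, not from the monograph:

```python
import numpy as np

def train_codebook(X, C, iters=20, seed=0):
    """Lloyd's KMeans: learn C codewords minimizing the mean squared
    reconstruction error E[||mu_zeta(U) - U||^2] over the data X."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), C, replace=False)]
    for _ in range(iters):
        # Assignment step: zeta(u) = argmin_i ||u - mu_i||^2.
        assign = np.argmin(((X[:, None, :] - mu[None]) ** 2).sum(-1), axis=1)
        # Update step: each codeword becomes its cluster's centroid.
        for i in range(C):
            if (assign == i).any():
                mu[i] = X[assign == i].mean(axis=0)
    return mu, assign

def lookup_distances(q, mu, assign):
    """Approximate point-to-query distances with a C-entry table
    instead of m full d-dimensional distance computations."""
    table = ((mu - q) ** 2).sum(axis=1)  # distance from q to each codeword
    return table[assign]                 # one lookup per database vector
```

The table costs $O(Cd)$ to build per query, after which each of the $m$ database vectors is scored with a single lookup, matching the compressed storage model above.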

Product Quantization (PQ) further decomposes the ambient space into $L$ orthogonal subspaces via selector matrices $S_i \in \{0,1\}^{d_\circ \times d}$ (with $d = L d_\circ$). Per-subspace quantizers $\zeta_i$ independently assign the block $S_i u$ of a vector $u$ to one of $C$ centroids $\mu_{i,j}$. The PQ-reconstructed vector is

$$\tilde{u} = \bigoplus_{i=1}^{L} \mu_{i, \zeta_i(S_i u)}$$

and the overall quantization error is the sum of per-block cluster distortions due to orthogonality. For a query $q$, distances to codewords can be precomputed for each subspace and summed per database vector, yielding a quantized lookup cost of $O(LC d_\circ + mL)$ for $m$ database vectors and codebook size $L \times C \times d_\circ$.
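This two-stage pipeline, known as asymmetric distance computation (ADC), can be sketched as follows. This is an illustrative NumPy implementation with hypothetical helper names, using contiguous blocks as the selector matrices:

```python
import numpy as np

def pq_encode(X, codebooks):
    """Assign each of the L blocks of every vector to its nearest
    per-subspace centroid; codes are (m, L) small integers."""
    L, C, d_sub = codebooks.shape
    blocks = X.reshape(len(X), L, d_sub)
    codes = np.empty((len(X), L), dtype=np.uint8)
    for i in range(L):
        dist = ((blocks[:, i, None, :] - codebooks[i][None]) ** 2).sum(-1)
        codes[:, i] = np.argmin(dist, axis=1)
    return codes

def pq_adc(q, codebooks, codes):
    """Asymmetric distance computation: build an (L, C) table of
    query-block-to-centroid distances in O(L C d_sub), then score
    each database vector with L table lookups, O(m L) in total."""
    L, C, d_sub = codebooks.shape
    qb = q.reshape(L, d_sub)
    table = ((qb[:, None, :] - codebooks) ** 2).sum(-1)  # (L, C)
    return table[np.arange(L), codes].sum(axis=1)        # (m,)
```

Because the blocks are orthogonal, the per-block distances sum exactly to the distance between $q$ and the reconstructed vector $\tilde{u}$.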

This structure enables storage of massive databases with modest accuracy loss (e.g., PQ code sizes as small as 8–16 bytes per vector for $d = 128$), and allows asymptotic scaling far beyond what explicit or coarse quantization enables. PQ can be augmented with coarse quantization (clustering), residual quantization, or memory/disk-aware optimizations to further enhance efficiency.

Figure 1: Illustration of the clustering-based retrieval method, highlighting cluster assignment and table lookup.

PQ's extension, Optimized Product Quantization (OPQ), relaxes the axis-aligned decomposition by learning an orthogonal rotation matrix $R$ to maximize the entropy of each subspace, via alternating minimization over the codebook assignment and the rotation. Subsequent advances, such as Locally Optimized Product Quantization and Composite Quantization, offer further reductions in distortion at increased computational cost.


Additive Quantization: Increased Representational Power

Additive Quantization (AQ) generalizes PQ by dispensing with subspace constraints, designing $L$ codebooks of full-dimensional ($\mathbb{R}^d$) codewords. Each vector is represented as a sum of $L$ codewords, one per codebook: $\tilde{u} = \sum_{i=1}^{L} \mu_{i, \zeta_i(u)}$. AQ contains PQ as a special case, as block-diagonal codebooks induce independent subspaces. In practice, AQ can achieve lower distortion per bit at the expense of a more expensive encoding process, typically solved via beam search or heuristic methods.

Distance computations for AQ require precomputing all inner products between the query vector and the codewords, as well as between codeword–codeword pairs, because of the non-orthogonal structure. This increases cache requirements but still allows efficient lookup-based evaluation.
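A minimal sketch of this lookup-based AQ distance, assuming codes have already been assigned (the expensive beam-search encoding step is omitted; names are illustrative):

```python
import numpy as np

def aq_tables(q, codebooks):
    """Precompute query-codeword inner products (L, C) and, because AQ
    codebooks are not orthogonal, all cross-codebook codeword-codeword
    inner products (L, L, C, C)."""
    qc = np.einsum('d,lcd->lc', q, codebooks)
    cc = np.einsum('lcd,kjd->lkcj', codebooks, codebooks)
    return qc, cc

def aq_distance(q, codebooks, codes, qc, cc):
    """||q - sum_i mu_i||^2 from lookups only:
    ||q||^2 - 2 sum_i <q, mu_i> + sum_{i,j} <mu_i, mu_j>."""
    L = codebooks.shape[0]
    qq = q @ q
    dists = np.empty(len(codes))
    for n, code in enumerate(codes):
        cross = sum(cc[i, j, code[i], code[j]]
                    for i in range(L) for j in range(L))
        dists[n] = qq - 2 * qc[np.arange(L), code].sum() + cross
    return dists
```

The $L \times L \times C \times C$ codeword–codeword table is the price of dropping orthogonality: under PQ, all cross-codebook terms vanish and only the $L \times C$ query table is needed.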


Score-Aware Quantization for Maximum Inner Product Search

Classical quantization is fundamentally isotropic and thus best suited for $L_2$ nearest neighbor retrieval. However, for maximum inner product search (MIPS) and applications with highly anisotropic query distributions, $\mathbb{E}_q[qq^T]$ may deviate drastically from a multiple of the identity. Directly minimizing $\mathbb{E}_q[(\langle q, u \rangle - \langle q, \tilde{u} \rangle)^2]$ reduces to a Mahalanobis distance minimization problem, as highlighted by [guo2016Quip], but in practice, precise estimation of this objective is costly without many training queries.

A score-aware quantization approach is proposed by [scann], replacing the uniform weighting of all vectors in $\mathcal{X}$ with a weighting function $\omega$ that upweights database points likely to maximize $\langle q, u \rangle$, ideally putting dominant mass on expected maximizers. The loss decomposes as

$$\ell(u, \tilde{u}, \omega) = \mathbb{E}_q\left[\omega(\langle q, u \rangle)\, \langle q, u - \tilde{u} \rangle^2\right]$$

and, for spherically symmetric query distributions, admits separation into parallel and orthogonal components with respect to $u$. For large $d$, the parallel (norm) distortion dominates, implying that accurate preservation of codeword norms is more critical for high-probability MIPS error minimization than angular deviation, unless the data distribution is nearly magnitude-homogeneous.

Figure 2: Decomposition of the residual error $r(u, \tilde{u})$ into parallel and orthogonal components, relevant to different error contributions under MIPS.

This insight leads to codebooks optimized for weighted reconstruction errors or minimum parallel (norm) variance, and can be extended with learned query-dependent weights or codebooks [queryAwareQuantization], producing superior code utilization and retrieval accuracy.
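The parallel/orthogonal split underlying this analysis is a simple projection. The following sketch (illustrative, not the [scann] implementation) computes the two residual components:

```python
import numpy as np

def residual_components(u, u_tilde):
    """Split the reconstruction residual r = u - u_tilde into a
    component parallel to u (norm distortion) and one orthogonal
    to it (angular distortion); a score-aware loss weights these
    two parts differently."""
    r = u - u_tilde
    u_hat = u / np.linalg.norm(u)
    r_par = (r @ u_hat) * u_hat   # projection of the residual onto u
    r_orth = r - r_par            # remainder, orthogonal to u
    return r_par, r_orth
```

A score-aware objective would then penalize $\|r_\parallel\|^2$ more heavily than $\|r_\perp\|^2$, rather than weighting them equally as plain MSE does.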


Sketching: Oblivious Dimensionality Reduction

Linear Sketches via JL Transforms

Oblivious linear sketching, as typified by Johnson–Lindenstrauss (JL) transforms, projects vectors $u \in \mathbb{R}^d$ to $\phi(u) = Ru$ for random $R \in \mathbb{R}^{d_\circ \times d}$, with $d_\circ = O(\varepsilon^{-2} \log m)$. When $R$ is appropriately designed (e.g., Gaussian or Rademacher), inner products and $L_2$ distances are preserved with additive error $O(\varepsilon)$ with high probability. The variance of the estimator for $\langle u, v \rangle$, as analyzed in the monograph, scales as $O(1/d_\circ)\left(\|u\|_2^2 \|v\|_2^2 + \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2\right)$. For highly sparse vectors, the error is tolerable, but for dense high-dimensional data, the norm terms dominate and a higher $d_\circ$ (hence larger sketches) is required.
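A Gaussian JL sketch is a few lines of NumPy. This illustrative version uses a dense $R$; structured transforms (e.g., subsampled Hadamard matrices) are faster in practice:

```python
import numpy as np

def jl_sketch(X, d_out, seed=0):
    """Oblivious Gaussian JL transform phi(u) = R u with entries
    R_ij ~ N(0, 1/d_out), so that E[<Ru, Rv>] = <u, v>.
    The same seed (hence the same R) must be used for every vector
    that will be compared."""
    rng = np.random.default_rng(seed)
    d = X.shape[-1]
    R = rng.normal(scale=1.0 / np.sqrt(d_out), size=(d_out, d))
    return X @ R.T
```

Because the transform is data-oblivious, sketches can be computed independently for each vector, in a stream or in parallel, with no training data or codebook.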

Asymmetric Sketches

For sparse or streaming settings, [bruch2023sinnamon] proposes asymmetric sketches, storing, for each data vector, the set of indices of its nonzero coordinates along with per-hash upper/lower bounds of the nonzero values. For a query $q$, sparse or not, the asymmetric computation yields an upper bound on $\langle q, u \rangle$ by selecting the minimal upper-bounded value for each hashed nonzero coordinate, enabling aggressive filtering of data points in retrieval while preserving recall. The overestimation error is theoretically characterized via the collision statistics of the hash maps and the distribution tails of active vector entries.


Additionally, when applied in streaming environments, such sketches remain robust and incrementally updatable without requiring retraining or codebook relearning.

Importance-Sampled Sketches

[daliri2023sampling] further presents importance/geometric sampling sketches tailored to preserve inner products. Each coordinate $i$ of $u$ is included in the sketch with probability proportional to $u_i^2 / \|u\|_2^2$, ensuring that the coordinates with the highest energy are favored, which aligns with inner product concentration. The sketch comprises the indices and values of the selected coordinates (plus the squared norm), and an unbiased estimator for $\langle u, v \rangle$ is computed by scaling by the selection probabilities. The variance of the estimator decays as $O(1/d_\circ)$ and depends on the intersection sparsity of $u$ and $v$, favoring data distributions with significant overlap structure.
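An illustrative sketch of such an importance-sampled representation and its unbiased inner-product estimator follows; the exact scheme in [daliri2023sampling] differs in details such as how sampling thresholds are coordinated across vectors:

```python
import numpy as np

def importance_sketch(u, k, seed=0):
    """Keep coordinate i with probability p_i = min(1, k * u_i^2 / ||u||^2),
    storing (index, u_i / p_i) so the inner-product estimator is unbiased;
    k controls the expected sketch size."""
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, k * u ** 2 / (u @ u))
    keep = rng.random(len(u)) < p
    idx = np.flatnonzero(keep)
    return idx, u[idx] / p[idx]

def estimate_ip(sketch, v):
    """Unbiased estimate of <u, v> from the sketch of u and the full
    query v: E[sum_{i in S} (u_i / p_i) v_i] = <u, v>."""
    idx, vals = sketch
    return float(vals @ v[idx])
```

Note the asymmetry: only the database vector is compressed, while the query is used in full, mirroring the asymmetric evaluation pattern of the preceding section.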


Performance, Tradeoffs, and Implementation Considerations

Complexity and Resource Footprint

Product quantization and its derivatives offer practically minimal database sizes, with entire code collections fitting into cache or main memory for large $m$ at a few bytes per vector. Lookup-based distance evaluation with precomputed codeword tables permits effective SIMD acceleration and vectorization, as exploited by [pqWithGPU, Andre_2021]. Both quantization and sketching can be batched, pipelined, and combined with hierarchical index structures (inverted files, multi-indexes) for further speedups.

Sketching techniques, particularly for sparse data, are tractable for streaming and distributed environments, as sketches can be computed online or in parallel with minimal coordination overhead and no codebook storage. Threshold sampling and asymmetric sketches can be implemented using fixed-size hash maps or Bloom-filtered arrays, providing robust error controls and near-constant-time updates.

Approximation and Error Analysis

Theoretical bounds on recall, error, and convergence rates for quantization and sketching are mathematically characterized, with key dependencies on sketch/quantization size ($d_\circ$, $C$, $L$), data sparsity, codebook orthogonality, and alignment of query–database distributions. For MIPS, failure to preserve norm information can yield catastrophic errors (selecting high-norm but misaligned vectors). In such cases, score-aware or residual-enhanced quantizers are required.

Quantization-based approaches may require retraining when the underlying data distribution drifts or if query statistics change, whereas oblivious sketches retain their approximation guarantees.


Conclusion

The comprehensive analysis and unification of vector compression and sketching in the "Foundations of Vector Retrieval" monograph provide a critical theoretical and algorithmic toolkit for realizing scalable, resource-efficient, and robust vector retrieval systems. Product quantization and its extensions balance memory and accuracy in static high-dimensional settings, while sketching affords adaptive, streaming-compatible approximations with quantifiable error and resilience. The formalization of MIPS-specific loss and score-aware quantization objectives addresses prior limitations and better aligns with practical workloads in modern retrieval systems (e.g., large-scale ANN, embedding-based search, neural ranking).

Future developments will likely emphasize learned quantization tailored to structured and dynamic data distributions, tighter theoretical analysis of streaming sketches in adversarial settings, and the integration of sketching–quantization hybrids to leverage the strengths of both paradigms for emerging modalities and retrieval demands.


Author: Sebastian Bruch
