
MPS-Encoded Functions: Theory and Applications

Updated 5 December 2025
  • MPS-encoded functions are tensor network representations of classical functions, encoding function values as quantum-state amplitudes with controlled entanglement.
  • Algorithms like IMPS, MPD, and TCI enable efficient state preparation by reducing circuit depth and gate counts through adaptive SVD and disentangler methods.
  • Applications span quantum simulation, finance, and image encoding, where hardware-adaptive MPS techniques ensure high fidelity and scalable resource usage.

MPS-Encoded Functions

Matrix Product State (MPS)-encoded functions are function representations structured as quantum states whose amplitudes (equivalently, the underlying data) are organized via MPS tensor network decompositions. This framework is central to efficiently mapping classical functions, probability distributions, and even structured datasets onto quantum states, an essential subroutine in many practical quantum algorithms. The development, algorithmic refinement, and performance of MPS-encoded function methodologies have established this approach as a leading paradigm for state preparation with rigorous control over resource scaling, circuit depth, and entanglement entropy.

1. Formalism of MPS-Encoded Functions

Given a classical function $f(x)$ defined on a discrete $2^n$-point grid, its amplitude-encoded quantum state is

$$|\psi_f\rangle = \frac{1}{\sqrt{Z}} \sum_{x=0}^{2^n-1} f(x)\,|x\rangle,$$

with $Z$ the normalization constant. The MPS ansatz recasts $|\psi_f\rangle$ as

$$|\psi\rangle = \sum_{i_1,\ldots,i_n\in\{0,1\}} \mathrm{Tr}\!\left[A^{(1)}_{i_1}A^{(2)}_{i_2}\cdots A^{(n)}_{i_n}\right]|i_1\cdots i_n\rangle,$$

where the $A^{(k)}_{i_k}$ are matrices of size $\chi_{k-1} \times \chi_k$, and $\chi_k$ is the bond dimension across the $k$-th cut ($\chi_0 = \chi_n = 1$). The expressivity and manipulability of the encoding are governed by the maximum bond dimension $\chi = \max_k \chi_k$ (Wang et al., 18 Aug 2025; Green et al., 23 Feb 2025).
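As a concrete illustration, the decomposition above can be computed classically by sweeping SVDs across the amplitude vector. The sketch below (not from the cited papers; the test function, `chi_max`, and tolerance are illustrative choices) builds the cores $A^{(k)}$ and contracts them back into a dense state:

```python
import numpy as np

def mps_from_function(f, n, chi_max=16, tol=1e-12):
    """Decompose the normalized amplitudes of f on a 2**n grid into
    MPS cores of shape (chi_{k-1}, 2, chi_k) by sequential SVDs."""
    x = np.arange(2**n) / 2**n              # uniform grid on [0, 1)
    v = f(x)
    psi = v / np.linalg.norm(v)             # the 1/sqrt(Z) normalization
    cores, chi = [], 1
    for _ in range(n - 1):
        U, S, Vt = np.linalg.svd(psi.reshape(chi * 2, -1),
                                 full_matrices=False)
        keep = min(chi_max, int(np.sum(S > tol)))
        cores.append(U[:, :keep].reshape(chi, 2, keep))
        psi = (S[:keep, None] * Vt[:keep]).ravel()
        chi = keep
    cores.append(psi.reshape(chi, 2, 1))
    return cores

def mps_to_vector(cores):
    """Contract the cores back into a dense amplitude vector."""
    v = cores[0]
    for A in cores[1:]:
        v = np.tensordot(v, A, axes=([-1], [0]))
    return v.reshape(-1)

# example: a smooth Gaussian bump on 2**8 grid points
n = 8
g = lambda t: np.exp(-(t - 0.5) ** 2 / 0.02)
cores = mps_from_function(g, n)
bond_dims = [A.shape[2] for A in cores[:-1]]
x = np.arange(2**n) / 2**n
target = g(x) / np.linalg.norm(g(x))
err = np.linalg.norm(mps_to_vector(cores) - target)
```

The reconstruction error is limited only by the truncation tolerance, and for a smooth function the bond dimensions stay far below the worst-case $2^{n/2}$.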

2. Entanglement Scaling and Bond Dimension Control

For $f \in C^{\infty}([0,1])$, the decay of Schmidt coefficients across an MPS bond is rigorously given by

$$p_k = 1 - \frac{g_1(f)}{6\cdot 4^k} + O(8^{-k}),$$

where $g_1(f)$ depends on the $L_2$-norm of $f'$ and its cross-terms (Bohun et al., 2024). The subleading singular values decay as $\Lambda_{k,1} \sim 2^{-k}\sqrt{g_1(f)/12}$, yielding entropy $S_k = O\!\left(k/4^k\right)$ at large $k$. Thus, for smooth $f$, only a small $\chi_k$ is required for high-fidelity approximation: $\chi_k = 2$ suffices asymptotically, independently of grid size. For functions with $r$ derivatives, $\chi_k \leq r + 2$ on $O(1)$ initial bonds, and $\chi_k = 1$ elsewhere.
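The $\Lambda_{k,1} \sim 2^{-k}$ scaling can be checked directly. The sketch below (illustrative; the smooth test function is an arbitrary choice, not from the cited work) computes the second Schmidt coefficient at a range of cuts and verifies that it roughly halves per bond:

```python
import numpy as np

n = 16
x = np.arange(2**n) / 2**n
f = np.sin(np.pi * x) + 2.0              # an arbitrary smooth test function
psi = f / np.linalg.norm(f)

# second-largest Schmidt coefficient Lambda_{k,1} across each cut k
lam1 = []
for k in range(3, 13):
    S = np.linalg.svd(psi.reshape(2**k, -1), compute_uv=False)
    lam1.append(S[1])

# successive ratios cluster near 1/2, i.e. Lambda_{k,1} ~ 2^{-k}
ratios = [b / a for a, b in zip(lam1, lam1[1:])]
```

The geometric halving of $\Lambda_{k,1}$ is exactly what makes $\chi_k = 2$ asymptotically sufficient for smooth $f$.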

For non-smooth, localized, or heavy-tailed functions, the universal decay sets in only beyond a problem-dependent scale. For instance, exponentially localized $f$ exhibits super-exponential decay of entanglement, while power-law tails cause slower, polynomial decay, requiring higher ranks before the universal regime is reached (Bohun et al., 2024).

3. MPS-Based State Preparation Algorithms

Efficient preparation of MPS-encoded states has advanced via the following leading algorithms:

Improved MPS (IMPS) and Disentangler Methods

The improved MPS protocol extracts shallow quantum circuits by recursively applying two-qubit disentangler gates that leverage SVD decompositions over pairs of qubits. Given a function class and its MPS representation, the algorithm proceeds by:

  • Forming $4 \times 2^{n-2}$ matrices over disjoint pairs of qubits and applying the SVD.
  • Utilizing a parallel contraction strategy, e.g., on a tree or hypercube, to exponentially reduce circuit depth to $O(\log n)$ on all-to-all topologies or $O(\sqrt{n})$ on planar grids.
  • Exploiting a structural reduction to two CNOTs per two-qubit disentangler, achieving a 33% CNOT-count saving (e.g., $2(n-1)$ two-qubit gates for $n$ qubits) (Wang et al., 18 Aug 2025).
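A single disentangling step can be demonstrated in a few lines of numpy. This sketch (an illustration of the SVD-disentangler principle, not the full IMPS protocol) uses a linear function, which has $\chi = 2$ across every cut, so the SVD of the $4 \times 2^{n-2}$ pair matrix has rank 2; applying $U^\dagger$ concentrates support on the pair states $|00\rangle, |01\rangle$ and rotates the first qubit to $|0\rangle$:

```python
import numpy as np

n = 8
x = np.arange(2**n) / 2**n
f = x + 0.1                               # linear f: chi = 2 across every cut
psi = f / np.linalg.norm(f)

M = psi.reshape(4, -1)                    # 4 x 2**(n-2) matrix over the first pair
U, S, Vt = np.linalg.svd(M, full_matrices=True)
psi2 = (U.conj().T @ M).ravel()           # apply the disentangler U^dagger

def first_qubit_entropy(state):
    """Entanglement entropy of qubit 1 with the rest of the register."""
    s = np.linalg.svd(state.reshape(2, -1), compute_uv=False)
    p = s**2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

before = first_qubit_entropy(psi)         # nonzero entanglement
after = first_qubit_entropy(psi2)         # ~0: qubit 1 rotated to |0>
```

Recursing this step over pairs of qubits, scheduled in parallel on a tree or hypercube, is what yields the shallow circuits described above.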

Matrix Product Disentangler (MPD) and Tensor Network Optimization (TNO)

The MPD algorithm constructs an $O(n)$-depth circuit with no ancilla overhead via:

  • Truncated SVD on each cut of the $n$-qubit state, projecting to a $\chi=2$ MPS and identifying a sequence of $n-1$ two-qubit gates per layer.
  • Layered application and inversion of these circuits, iterating $L = O(1)$–$O(\log_2 \chi)$ times.
  • Optionally adding TNO (e.g., via L-BFGS-B) to further boost fidelity (Green et al., 23 Feb 2025).

For piecewise polynomials of low degree $d$ on $I$ intervals, the exact MPS construction requires only $\chi \leq I(d+1)$, allowing $>99.99\%$ fidelity at $n \sim 10$–$20$, with gate counts scaling as $O(nL)$.
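For an exactly-$\chi=2$ state, one such layer can be built and inverted end to end: sweep two-qubit disentanglers left to right to map the state onto $|0\cdots0\rangle$, then apply the inverse gates as a staircase circuit. A numpy sketch (illustrative only; completion of the gates for $\chi > 2$ and the $L$-fold iteration are omitted, and the linear test function is an assumption):

```python
import numpy as np

def disentangle_layer(psi, n):
    """Sweep SVD disentanglers over qubit pairs (k, k+1), left to right.
    For a chi = 2 state this maps psi exactly to +/-|0...0>."""
    gates, v = [], psi.copy()
    for k in range(n - 1):
        # qubits 0..k-1 are already |0>: act on the populated block
        block = v.reshape(2**k, -1)[0].reshape(4, -1)
        U = np.linalg.svd(block, full_matrices=True)[0]
        gates.append(U)
        w = np.zeros_like(v).reshape(2**k, -1)
        w[0] = (U.conj().T @ block).ravel()   # rows 2,3 vanish if rank <= 2
        v = w.ravel()
    return v, gates

def staircase_circuit(gates, n):
    """Apply the inverse layer to |0...0>: gates right to left."""
    v = np.zeros(2**n)
    v[0] = 1.0
    for k in reversed(range(n - 1)):
        v = np.einsum('ab,ibj->iaj', gates[k],
                      v.reshape(2**k, 4, -1)).ravel()
    return v

n = 8
x = np.arange(2**n) / 2**n
psi = (x + 0.1) / np.linalg.norm(x + 0.1)   # chi = 2 across every cut
residual, gates = disentangle_layer(psi, n)
recon = staircase_circuit(gates, n)
fidelity = abs(np.dot(recon, psi))          # ~1 for an exact chi = 2 state
```

For $\chi > 2$ states, the residual after one layer is no longer exactly $|0\cdots0\rangle$, which is where the iterated ($L$-layer) application and optional TNO refinement come in.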

Tensor Cross Interpolation (TCI)

TCI provides an oracle-based method for building MPS representations by adaptive sampling, obviating the need to store the full $2^n$-component vector. The core steps involve constructing interpolation matrices, applying the max-volume rows/columns principle, and extracting TT-cores. TCI achieves $O(n\chi^2)$ complexity in both queries and storage, with uniform error $\delta$ by construction (Bohun et al., 2024).
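TCI's basic primitive is cross (skeleton) interpolation: reconstructing a low-rank matrix from a few adaptively chosen rows and columns. The sketch below shows the matrix analogue, with greedy full-pivot selection standing in for the max-volume principle (an illustrative simplification; real TCI samples entries on demand rather than forming the full matrix):

```python
import numpy as np

def cross_approx(A, rank):
    """Skeleton approximation A ~ A[:, J] @ inv(A[I, J]) @ A[I, :],
    with pivots (I, J) chosen greedily on the residual."""
    I, J, R = [], [], A.copy()
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < 1e-14:
            break                          # numerically exact already
        I.append(i)
        J.append(j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]   # rank-1 pivot update
    return A[:, J] @ np.linalg.inv(A[np.ix_(I, J)]) @ A[I, :]

# numerically low-rank matrix sampled from a smooth bivariate function
t = np.linspace(0.0, 1.0, 100)
A = 1.0 / (1.0 + t[:, None] + t[None, :])
Ahat = cross_approx(A, 8)
rel_err = np.linalg.norm(A - Ahat) / np.linalg.norm(A)
```

TCI applies this idea cut by cut along the tensor train, which is how it reaches $O(n\chi^2)$ queries instead of $2^n$ evaluations.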

4. Function Class Examples, Explicit Constructions, and Circuit Depth

Classes of $f(x)$ with bounded and/or small MPS rank:

  • Gaussian $g_1(x) = \exp(-x^2/2)$: $\chi=1$, circuit depth $=1$, $n$ $R_y$ rotations in parallel.
  • Low-degree polynomial $p(x) = ax^d + \ldots$: $\chi \leq d+1$; linear $f(x)=x$ has $\chi=2$, requiring depth $O(\log n)$ and $2(n-1)$ two-qubit gates.
  • Log-normal distributions and financial payoffs: often factorizable, achieving $\chi=1$ or products thereof.
  • Heavy-tailed, Lévy-stable distributions: a larger $k_0 \sim \log_2(L/c)$ precedes the universal regime, but $\max_k \chi_k \leq 4$ still suffices for high-dimensional cases (Bohun et al., 2024; Wang et al., 18 Aug 2025).
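The polynomial bound $\chi \leq d+1$ is easy to confirm numerically: compute the Schmidt rank at every cut of the amplitude vector and take the maximum. A short sketch (the offset and tolerance are illustrative choices):

```python
import numpy as np

def max_schmidt_rank(f, n, rtol=1e-12):
    """Largest numerical Schmidt rank over all n-1 cuts of the amplitudes."""
    x = np.arange(2**n) / 2**n
    v = f(x)
    psi = v / np.linalg.norm(v)
    r = 1
    for k in range(1, n):
        S = np.linalg.svd(psi.reshape(2**k, -1), compute_uv=False)
        r = max(r, int(np.sum(S > rtol * S[0])))
    return r

# degree-d polynomial -> Schmidt rank exactly d + 1
n = 10
ranks = {d: max_schmidt_rank(lambda t, d=d: (t + 0.05) ** d, n)
         for d in (1, 2, 3)}
```

The rank is exactly $d+1$ because a degree-$d$ polynomial in $x_{\mathrm{left}} + x_{\mathrm{right}}$ expands into $d+1$ products of left and right monomials at every cut.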

A table summarizing typical bond dimensions and circuit resources:

| Function Class | MPS Bond Dimension ($\chi$) | Circuit Depth / Gates |
| --- | --- | --- |
| Gaussian | 1 | 1 (all $R_y$ in parallel) |
| Linear ($p(x)=x$) | 2 | $O(\log n)$, $2(n-1)$ 2QG |
| Quadratic ($d=2$) or degree-$d$ polynomial | $d+1$ | $O(\log n)$, $O(n)$ |
| Heavy-tailed (Lévy) | 3–4 | $O(n)$ |

2QG: two-qubit gates

5. Numerical Performance, Scaling, and Hardware Considerations

Rigorous numerical benchmarks validate that IMPS/MPD circuits routinely achieve infidelities $<10^{-4}$ using linear (or better) depth and sublinear gate counts for practical function classes:

  • At $n=12$, IMPS hypercube scheduling reduces the $U$-depth from $\sim 11$ (chain) to $3$, with infidelity improvements of 1–2 orders of magnitude at equal depth.
  • The optimized 2-CNOT decomposition matches 3-CNOT variants in fidelity, at a 33% reduction in two-qubit gate count (Wang et al., 18 Aug 2025).
  • On 2D grids (e.g., $3\times 4$), depth contracts from 11 to 5, yielding higher fidelity at lower hardware overhead.
  • Large-scale experiments up to $n=64$ qubits (on IBM Q devices) confirmed $F>0.97$ with $L=1$–$2$ layers, demonstrating viability even under device noise for practical $n$ (Bohun et al., 2024).

For piecewise polynomials ($I\leq 10$, $d\leq 4$), exact or truncated MPS models achieve fidelities $>99.99\%$ for $n$ up to 20 without ancillary qubits (Green et al., 23 Feb 2025).

6. Applications and Impact in Quantum and Classical Computation

MPS-encoded functions underpin numerous quantum algorithms that require efficiently loaded classical data, especially in quantum finance, simulation, and linear systems. Notably:

  • PDE solution via quantum-inspired MPS representations surpasses full-vector methods in both time and memory, especially with DMRG and Arnoldi global solvers, achieving exponential resource savings (García-Molina et al., 2023).
  • In image encoding, MPS approximations of discrete wavelet transforms allow preparation of high-resolution ($128\times 128$) images (e.g., ChestMNIST, $n=14$) with circuit depth $<500$ and fidelity exceeding $99.1\%$ (Green et al., 23 Feb 2025).
  • Universal, smooth, and localized function classes can be mapped to amplitude-encoded quantum states with systematically controllable error.

7. Practical Guidelines and Theoretical Implications

The key principles for practice and design are:

  • Small bond dimension is guaranteed by the entanglement area law for smooth and localized $f(x)$, allowing shallow circuits for the relevant classes.
  • Hardware adaptivity: IMPS and variants can be scheduled to match device connectivity, achieving optimal unitary depth and parallelism.
  • Error control is achieved by direct manipulation of the MPS bond dimension and Schmidt-spectrum truncation, with variational bounds ensuring target fidelity.
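The error-control principle in the last bullet is directly computable: the best rank-$\chi$ approximation across a cut retains fidelity $\sum_{i<\chi} \lambda_i^2$, the sum of the kept Schmidt weights. A short illustration (the structured test function is an arbitrary choice, not from the cited papers):

```python
import numpy as np

n = 12
x = np.arange(2**n) / 2**n
f = np.exp(-5 * x) * (1.0 + np.cos(16 * np.pi * x))   # structured test function
psi = f / np.linalg.norm(f)

# Schmidt spectrum across the middle cut
S = np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)

# fidelity of the best rank-chi truncation at this cut
fid = {chi: float(np.sum(S[:chi] ** 2)) for chi in (1, 2, 3, 4)}
```

Because this particular $f$ has exact Schmidt rank 3 (a product of a rank-1 exponential with a rank-2 cosine term), the fidelity saturates at $\chi = 3$; in general, inspecting the Schmidt spectrum tells you the smallest $\chi$ meeting a target fidelity before any circuit is compiled.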

A plausible implication is that the MPS encoding framework, when combined with hardware-adaptive scheduling and adaptive optimization (TNO, TCI), represents the most scalable known approach to quantum state preparation with prescribed fidelity for smooth and structured classical data.
