MPS-Encoded Functions: Theory and Applications
- MPS-encoded functions are tensor network representations of classical functions whose values become quantum-state amplitudes with controlled entanglement.
- Algorithms like IMPS, MPD, and TCI enable efficient state preparation by reducing circuit depth and gate counts through adaptive SVD and disentangler methods.
- Applications span quantum simulation, finance, and image encoding, where hardware-adaptive MPS techniques ensure high fidelity and scalable resource usage.
MPS-Encoded Functions
Matrix Product State (MPS)-encoded functions are function representations structured as quantum states whose amplitudes—or, equivalently, data—are organized via MPS tensor network decompositions. This framework is central to efficiently mapping classical functions, probability distributions, and even structured datasets onto quantum states, an essential subroutine in many practical quantum algorithms. The development, algorithmic refinement, and performance of MPS-encoded function methodologies have established this approach as the leading paradigm for state preparation with rigorous control over resource scaling, circuit depth, and entanglement entropy.
1. Formalism of MPS-Encoded Functions
Given a classical function $f$ defined on a discrete $2^n$-point grid, its amplitude-encoded quantum state is

$$|f\rangle = \frac{1}{\mathcal{N}} \sum_{x=0}^{2^n-1} f(x)\,|x\rangle,$$

with the normalization $\mathcal{N} = \sqrt{\sum_x |f(x)|^2}$. The MPS ansatz recasts the amplitude tensor as

$$f(x_1,\dots,x_n) = A^{[1]}_{x_1} A^{[2]}_{x_2} \cdots A^{[n]}_{x_n},$$

where the $A^{[k]}_{x_k}$ are matrices of size $\chi_{k-1}\times\chi_k$ and $\chi_k$ is the bond dimension across the $k$-th cut ($\chi_0=\chi_n=1$). The expressivity and manipulability of the encoding are governed by the maximum bond dimension $\chi = \max_k \chi_k$ (Wang et al., 18 Aug 2025, Green et al., 23 Feb 2025).
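As a concrete illustration (a sketch, not code from the cited papers; `function_to_mps` and the helper names are ours), the decomposition above can be computed numerically by sweeping an SVD across each qubit cut:

```python
import numpy as np

def function_to_mps(f_values, chi_max=None, tol=1e-12):
    """Decompose a length-2^n amplitude vector into MPS cores A^[k] by
    sweeping an SVD across each qubit cut, truncating at chi_max / tol."""
    n = int(np.log2(len(f_values)))
    psi = np.asarray(f_values, dtype=float)
    psi = psi / np.linalg.norm(psi)            # amplitude encoding is normalized
    cores, chi = [], 1
    rest = psi.reshape(1, -1)
    for k in range(n - 1):
        m = rest.reshape(chi * 2, -1)          # split off the k-th qubit
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        keep = s > tol * s[0]
        if chi_max is not None:
            keep[chi_max:] = False
        u, s, vt = u[:, keep], s[keep], vt[keep]
        cores.append(u.reshape(chi, 2, -1))    # core of shape (chi_{k-1}, 2, chi_k)
        chi = len(s)
        rest = np.diag(s) @ vt                 # push remaining weight to the right
    cores.append(rest.reshape(chi, 2, 1))
    return cores

def mps_to_vector(cores):
    """Contract the cores back into the full 2^n amplitude vector."""
    v = cores[0]
    for c in cores[1:]:
        v = np.tensordot(v, c, axes=([v.ndim - 1], [0]))
    return v.reshape(-1)
```

For a linear ramp such as $f(x)=x+1$, every interior bond comes out with dimension 2, consistent with the low polynomial ranks discussed in Section 4.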
2. Entanglement Scaling and Bond Dimension Control
For smooth $f$, the Schmidt coefficients across an MPS bond decay exponentially, at a rate set by the derivative norms of $f$ and its cross-terms (Bohun et al., 2024). The subleading singular values therefore fall off rapidly with the Schmidt index, yielding bounded entanglement entropy at large $n$. Thus, for smooth $f$, only a small $\chi$ is required for high-fidelity approximation: a constant $\chi$ suffices asymptotically, independently of grid size. For functions with only finitely many derivatives, larger ranks are needed on the initial bonds, with the rapid decay setting in elsewhere.
For non-smooth, localized, or heavy-tailed functions, the universal decay transitions at a problem-dependent scale. For instance, an exponentially localized $f$ exhibits super-exponential decay of entanglement; power-law tails cause slower, polynomial decay, requiring higher ranks before the universal regime is reached (Bohun et al., 2024).
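A quick numerical check of the smooth-function claim (an illustrative sketch, not code from the cited papers): the Schmidt coefficients of a Gaussian across the middle cut decay rapidly, and the decay profile is essentially independent of grid size:

```python
import numpy as np

def mid_cut_singvals(n, f):
    """Schmidt (singular) values across the middle bond of f on a 2^n grid."""
    x = np.linspace(-5, 5, 2**n)
    psi = f(x)
    psi = psi / np.linalg.norm(psi)
    return np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)

for n in (10, 14):
    s = mid_cut_singvals(n, lambda x: np.exp(-x**2 / 2))
    # rapid decay: several orders of magnitude down by the 11th coefficient,
    # at both grid sizes
    print(n, s[10] / s[0])
```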
3. MPS-Based State Preparation Algorithms
Efficient preparation of MPS-encoded states has advanced via the following leading algorithms:
Improved MPS (IMPS) and Disentangler Methods
The improved MPS protocol extracts shallow quantum circuits by recursively applying two-qubit disentangler gates that leverage SVD decompositions over pairs of qubits. Given a function class and its MPS representation, the algorithm proceeds by:
- Forming matrices over disjoint pairs of qubits and applying SVD.
- Utilizing a parallel contraction strategy, e.g., on a tree or hypercube, to exponentially reduce circuit depth from linear in $n$ to logarithmic on all-to-all topologies, with intermediate depth on planar grids.
- Exploiting a structural reduction to 2-CNOT two-qubit unitaries per disentangler, achieving a 33% CNOT count savings (e.g., $2(n-1)$ two-qubit gates for $n$ qubits) (Wang et al., 18 Aug 2025).
Matrix Product Disentangler (MPD) and Tensor Network Optimization (TNO)
The MPD algorithm constructs a shallow, linear-depth circuit with no ancilla overhead via:
- Truncated SVD on each single-qubit cut, projecting to a $\chi=2$ MPS and identifying a staircase of $n-1$ two-qubit gates per layer.
- Layered application and inversion of these circuits, iterated until the target fidelity is reached.
- Optionally adding TNO (e.g., via L-BFGS-B) to further boost fidelity (Green et al., 23 Feb 2025).
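The per-layer truncation step can be sketched with plain SVDs (an illustrative sketch, not code from the cited papers; the rank-2-per-layer convention is the common matrix product disentangler construction): even a rank-2 projection across the middle cut is already close to a smooth target state:

```python
import numpy as np

def truncate_mid_cut(psi, chi):
    """Best rank-chi approximation of psi across the middle cut;
    returns the fidelity |<approx|psi>| with the original state."""
    n = int(np.log2(len(psi)))
    m = psi.reshape(2**(n // 2), -1)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    approx = ((u[:, :chi] * s[:chi]) @ vt[:chi]).reshape(-1)
    approx = approx / np.linalg.norm(approx)
    return float(abs(approx @ psi))

x = np.linspace(0, 1, 2**10)
psi = np.exp(-(x - 0.5)**2 / 0.08)        # smooth target amplitude profile
psi = psi / np.linalg.norm(psi)
print(truncate_mid_cut(psi, 2), truncate_mid_cut(psi, 4))
```

Increasing `chi` (i.e., stacking more layers) only improves the overlap, which is why a modest number of layers suffices for smooth targets.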
For low-degree piecewise polynomials, the exact MPS construction requires only a small bond dimension set by the polynomial degree, allowing high fidelity at roughly 10–$20$ layers, with gate counts scaling linearly in $n$.
Tensor Cross Interpolation (TCI)
TCI provides an oracle-based method for building MPS representations by adaptive sampling, obviating the need to store the full vector. The core steps involve constructing interpolation matrices, applying the max-volume rows/columns principle, and extracting TT-cores. TCI achieves complexity linear in $n$ (and polynomial in $\chi$) in both queries and storage, with uniform error control by construction (Bohun et al., 2024).
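A toy 2D analogue of the cross-interpolation idea (a sketch under simplifying assumptions, not the TCI algorithm of the cited work): adaptive cross approximation recovers a low-rank matrix from roughly rank·(m+n) oracle queries instead of all m·n entries.

```python
import numpy as np

def cross_approx(get_entry, m, n, rank):
    """Greedy adaptive cross approximation of an m x n matrix given only an
    entry oracle; pivots are chosen by a max-residual heuristic (a simple
    stand-in for TCI's max-volume principle)."""
    approx = np.zeros((m, n))
    j = 0
    for _ in range(rank):
        col = np.array([get_entry(i, j) for i in range(m)]) - approx[:, j]
        i = int(np.argmax(np.abs(col)))
        if abs(col[i]) < 1e-14:
            break                                   # residual exhausted
        row = np.array([get_entry(i, jj) for jj in range(n)]) - approx[i, :]
        approx = approx + np.outer(col, row) / col[i]
        row[j] = 0.0
        j = int(np.argmax(np.abs(row)))             # next pivot column
    return approx

# rank-2 test matrix A[i, j] = i + j, reconstructed from sampled crosses
A = cross_approx(lambda i, j: float(i + j), 8, 8, rank=3)
print(np.max(np.abs(A - (np.arange(8)[:, None] + np.arange(8)[None, :]))))
```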
4. Function Class Examples, Explicit Constructions, and Circuit Depth
Classes of functions $f$ with bounded and/or small MPS rank:
- Gaussian: $\chi = 1$, circuit depth 1, single-qubit rotations applied in parallel.
- Low-degree polynomial of degree $d$: $\chi = d+1$; a linear function has $\chi = 2$, requiring $O(n)$ depth and $2(n-1)$ two-qubit gates.
- Log-normal, financial payoffs: often factorizable, achieving small $\chi$ or products of such factors.
- Heavy-tailed, Lévy-stable distributions: a larger rank precedes the universal decay regime, but $\chi = 3$–$4$ is still sufficient for high-dimensional cases (Bohun et al., 2024, Wang et al., 18 Aug 2025).
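These ranks are easy to verify numerically (an illustrative check, not from the cited papers): scanning every bipartite cut gives the maximal MPS bond dimension directly.

```python
import numpy as np

def max_mps_rank(psi, tol=1e-10):
    """Maximum numerical Schmidt rank over all cuts = max MPS bond dimension."""
    n = int(np.log2(len(psi)))
    psi = psi / np.linalg.norm(psi)
    chi = 1
    for k in range(1, n):
        s = np.linalg.svd(psi.reshape(2**k, -1), compute_uv=False)
        chi = max(chi, int(np.sum(s > tol * s[0])))
    return chi

x = np.linspace(-5, 5, 2**10)
print(max_mps_rank(x + 7.0))                      # linear: chi = 2
print(max_mps_rank((x + 7.0)**2))                 # quadratic: chi = 3 = d + 1
print(max_mps_rank(np.exp(-x**2 / 2), tol=1e-3))  # Gaussian: small chi
```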
A table summarizing typical bond dimensions and circuit resources:
| Function Class | MPS Bond Dimension ($\chi$) | Circuit Depth / Gates |
|---|---|---|
| Gaussian | 1 | 1 (all rotations in parallel) |
| Linear | 2 | $O(n)$, $2(n-1)$ 2QG |
| Quadratic ($d=2$) or degree-$d$ poly | 3, $d+1$ | $O(n)$ |
| Heavy-tailed (Lévy) | 3–4 | $O(n)$ |
2QG: two-qubit gates
5. Numerical Performance, Scaling, and Hardware Considerations
Rigorous numerical benchmarks validate that IMPS/MPD circuits routinely achieve low infidelities using linear (or better) depth and sublinear gate counts for practical function classes:
- IMPS hypercube scheduling reduces two-qubit-gate depth from linear in $n$ (chain scheduling) to $3$, with infidelity improvements of 1–2 orders of magnitude at equal depth.
- The optimized 2-CNOT decomposition matches 3-CNOT variants in fidelity at a 33% reduction in two-qubit gate count (Wang et al., 18 Aug 2025).
- On 2D grids, circuit depth contracted from 11 to 5, yielding higher fidelity at lower hardware overhead.
- Large-scale experiments on IBM Q devices confirmed high fidelity with 1–2 layers, demonstrating viability even under device noise for practical function classes (Bohun et al., 2024).
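The 33% figure follows directly from the per-bond gate counts (simple arithmetic, shown for concreteness):

```python
# Two-qubit gate counts for an n-qubit chain: one disentangler per bond.
n = 50
three_cnot_total = 3 * (n - 1)   # generic 3-CNOT two-qubit unitaries
two_cnot_total = 2 * (n - 1)     # structurally reduced 2-CNOT variant
savings = 1 - two_cnot_total / three_cnot_total
print(f"{savings:.1%}")          # 33.3%
```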
For piecewise polynomials, exact MPS or truncated models achieve high fidelities with circuits of up to 20 layers, without ancillary qubits (Green et al., 23 Feb 2025).
6. Applications and Impact in Quantum and Classical Computation
MPS-encoded functions underpin numerous quantum algorithms that require efficiently loaded classical data, especially in quantum finance, simulation, and linear systems. Notably:
- PDE solution via quantum-inspired MPS representations surpasses full-vector methods in both time and memory, especially with DMRG and Arnoldi global solvers, achieving exponential resource savings (García-Molina et al., 2023).
- In image encoding, MPS approximations of discrete wavelet transforms enable preparation of high-resolution images (e.g., ChestMNIST) with shallow circuit depth and high fidelity (Green et al., 23 Feb 2025).
- Universal, smooth, and localized function classes are mapped to amplitude-encoded quantum states with systematically controllable error.
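For the image-encoding case, compressibility can be illustrated with a synthetic smooth image (an illustrative sketch; the image, sizes, and rank here are arbitrary choices, not those of the cited work). For a qubit ordering that places all row qubits before all column qubits, the SVD rank of the image matrix is exactly the MPS bond dimension across that cut:

```python
import numpy as np

# Synthetic smooth 64x64 "image"; the SVD of the image matrix gives the
# Schmidt values across the row/column split of the encoded state.
N = 64
g = np.linspace(-2, 2, N)
yy, xx = np.meshgrid(g, g, indexing="ij")
img = 1.0 / (1.0 + xx**2 + yy**2)

s = np.linalg.svd(img, compute_uv=False)
fidelity_chi8 = np.sqrt(np.sum(s[:8]**2) / np.sum(s**2))
print(fidelity_chi8)   # close to 1: rank 8 already captures the image
```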
7. Practical Guidelines and Theoretical Implications
The key principles for practice and design are:
- Small bond dimension $\chi$ is guaranteed by the entanglement area law for smooth and localized $f$, allowing shallow circuits for the relevant function classes.
- Hardware adaptivity: IMPS and variants can be scheduled to match device connectivity, achieving near-optimal circuit depth and parallelism.
- Error control is achieved by direct manipulation of the MPS bond dimension and Schmidt spectrum truncation, with variational bounds ensuring target fidelity.
A plausible implication is that the MPS encoding framework, when combined with hardware-adaptive scheduling and adaptive optimization (TNO, TCI), represents the most scalable method for quantum state preparation with prescribed fidelity for smooth and structured classical data.
References
- "Quantum State Preparation by Improved MPS Method" (Wang et al., 18 Aug 2025)
- "Quantum Encoding of Structured Data with Matrix Product States" (Green et al., 23 Feb 2025)
- "Entanglement scaling in matrix product state representation of smooth functions and their shallow quantum circuit approximations" (Bohun et al., 2024)
- "Global optimization of MPS in quantum-inspired numerical analysis" (García-Molina et al., 2023)