Scaling of optimal convex-approximation distance with multiple channel uses

Determine an explicit expression for how the minimal diamond-norm distance between the N-fold tensor-product channel Φ^{⊗ N} and its optimal convex approximation depends on the number of uses N, where the approximation is drawn from the convex hull of tensor-product channels ⊗_{j=1}^N Ψ_{i_j} generated by a fixed set of single-system channels {Ψ_i}. The goal is to characterize how D_{ {⊗_{j=1}^N Ψ_{i_j}} }(Φ^{⊗ N}) scales with N, taking into account that the diamond norm is neither additive nor multiplicative across copies and that correlations in the approximating convex mixture may improve the approximation.

Background

The paper studies how to optimally approximate an unavailable target quantum channel Φ by convex mixing of a given set of available channels {Ψ_i}, quantifying approximation quality via the diamond norm. The single-use optimal convex approximation is defined by minimizing ||Φ − ∑_i p_i Ψ_i||⋄ over probability weights {p_i}.
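The single-use minimization can be illustrated numerically. The sketch below is a toy setting not taken from the paper: the target is a qubit phase-rotation channel and the available set is {identity, Z}. Since evaluating the diamond norm exactly requires an SDP, the trace distance between normalized Choi states is used here as a tractable proxy (it only lower-bounds the diamond norm, up to dimensional factors), and the single mixing weight is found by grid search.

```python
import numpy as np

def choi(kraus_ops, d=2):
    """Normalized Choi state J(Phi) = (Phi ⊗ I)(|Ω⟩⟨Ω|) of a channel
    given by its Kraus operators, with |Ω⟩ the maximally entangled state."""
    omega = np.zeros((d * d, 1), dtype=complex)
    for i in range(d):
        omega[i * d + i] = 1.0
    omega /= np.sqrt(d)
    J = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = np.kron(K, np.eye(d)) @ omega
        J += v @ v.conj().T
    return J

def trace_norm(A):
    """Trace norm of a Hermitian matrix via its eigenvalues."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

# Toy target: unitary phase rotation by theta; available channels: identity and Z.
theta = np.pi / 3
U = np.diag([1.0, np.exp(1j * theta)])
J_target = choi([U])
J_id = choi([np.eye(2)])
J_z = choi([np.diag([1.0, -1.0])])

# Grid search over the mixing weight p for the mixture p*Id + (1-p)*Z.
ps = np.linspace(0.0, 1.0, 2001)
dists = [trace_norm(J_target - (p * J_id + (1 - p) * J_z)) for p in ps]
best = ps[int(np.argmin(dists))]
print(best, min(dists))
```

For this example the proxy distance works out analytically to |e^{iθ} − (2p − 1)|, minimized at p = (1 + cos θ)/2 with value |sin θ|, which the grid search reproduces.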

In the Conclusions, the authors extend the discussion to N parallel uses of the target channel, i.e., Φ^{⊗ N}, with available operations that act independently on each subsystem. They observe that allowing correlated convex mixtures over tensor-product channels ⊗_{j=1}^N Ψ_{i_j} can improve approximation quality and that the diamond norm is not additive/multiplicative across copies. This leads to an unresolved question: there is no direct expression for how the optimal convex-approximation distance scales with N. They suggest that techniques related to quantum Chernoff bounds for channels might be relevant for resolving this scaling behavior.
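For concreteness, the N-copy problem with correlated weights can be written as follows (this display and the bound below it are a sketch added here, not taken from the paper):

```latex
D_{\{\otimes_{j=1}^{N}\Psi_{i_j}\}}\bigl(\Phi^{\otimes N}\bigr)
  \;=\; \min_{\{p_{i_1\dots i_N}\}} \Bigl\|\, \Phi^{\otimes N}
  \;-\; \sum_{i_1,\dots,i_N} p_{i_1\dots i_N}\,
        \Psi_{i_1}\otimes\cdots\otimes\Psi_{i_N} \,\Bigr\|_{\diamond}.
```

Since product weights p_{i_1}⋯p_{i_N} are a special case of correlated ones, a standard telescoping argument, using the triangle inequality and the stability of the diamond norm under tensoring with channels, yields the linear upper bound

```latex
D_{\{\otimes_{j=1}^{N}\Psi_{i_j}\}}\bigl(\Phi^{\otimes N}\bigr)
  \;\le\; \Bigl\| \Phi^{\otimes N} - \Bigl(\sum_i p_i \Psi_i\Bigr)^{\otimes N} \Bigr\|_{\diamond}
  \;\le\; N \,\Bigl\| \Phi - \sum_i p_i \Psi_i \Bigr\|_{\diamond},
```

so the N-copy distance grows at most linearly in N; the open question is whether correlations allow strictly better scaling.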

References

This also implies that we do not have a direct expression for the scaling with N of the distance between a quantum channel and its convex approximations.

Convex approximations of quantum channels  (1709.03805 - Sacchi et al., 2017) in Conclusions