
Optimizing Tensor Network Partitioning using Simulated Annealing

Published 28 Jul 2025 in quant-ph | (2507.20667v1)

Abstract: Tensor networks have proven to be a valuable tool, for instance, in the classical simulation of (strongly correlated) quantum systems. As system sizes increase, contracting the corresponding tensor networks becomes computationally demanding. In this work, we study distributed-memory architectures intended for high-performance computing implementations of this task. Efficiently distributing the contraction across multiple nodes is critical, as both computational and memory costs are highly sensitive to the chosen partitioning strategy. While prior work has employed general-purpose hypergraph partitioning algorithms, these approaches often overlook the specific structure and cost characteristics of tensor network contractions. We introduce a simulated-annealing-based method that iteratively refines the partitioning to minimize the total operation count, thereby reducing time-to-solution. Evaluated on MQT Bench circuits, the algorithm achieves an average 8$\times$ reduction in computational cost and an 8$\times$ reduction in memory cost compared to a naive partitioning.
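The refinement loop the abstract describes can be sketched as a standard simulated-annealing search over tensor-to-node assignments. The sketch below is illustrative only: the `cost` callback, the round-robin initial partitioning, and the geometric cooling schedule are assumptions, standing in for the paper's actual contraction cost model and annealing parameters.

```python
import math
import random

def anneal_partition(num_tensors, num_parts, cost, *,
                     t_start=1.0, t_end=1e-3, steps=5000, seed=0):
    """Refine a tensor->node assignment via simulated annealing.

    `cost(assignment)` is a user-supplied stand-in for the paper's
    cost model (e.g. the operation count of the partitioned contraction).
    """
    rng = random.Random(seed)
    # Start from a naive round-robin partitioning (an assumption,
    # mirroring the "naive partitioning" baseline in the abstract).
    assign = [i % num_parts for i in range(num_tensors)]
    best = list(assign)
    cur_cost = best_cost = cost(assign)
    for step in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / max(steps - 1, 1))
        # Propose moving one tensor to a different node.
        i = rng.randrange(num_tensors)
        old, new = assign[i], rng.randrange(num_parts)
        if new == old:
            continue
        assign[i] = new
        new_cost = cost(assign)
        # Metropolis acceptance: always take improvements; accept
        # uphill moves with probability exp(-delta / t).
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if cur_cost < best_cost:
                best_cost, best = cur_cost, list(assign)
        else:
            assign[i] = old  # reject: undo the move
    return best, best_cost
```

As a toy check, one can minimize the edge cut of a chain of tensors split across two nodes; the round-robin start cuts every edge, and annealing converges toward contiguous blocks.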
