Generative model benchmarks for superconducting qubits
Published 24 Nov 2018 in quant-ph | (1811.09905v2)
Abstract: In this work we experimentally demonstrate how generative model training can be used as a benchmark for small ($<5$ qubits) quantum devices. Performance is quantified using three data analytic metrics: the Kullback-Leibler divergence and two adaptations of the F1 score. Using the $2\times2$ Bars and Stripes dataset, we determine optimal circuit constructions for generative model training on superconducting qubits by incorporating hardware connectivity constraints into circuit design. We show that on noisy hardware, sparsely connected, shallow circuits outperform denser counterparts.
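The abstract's benchmark compares the distribution sampled from a trained circuit against the target $2\times2$ Bars and Stripes (BAS) distribution via the Kullback-Leibler divergence. The sketch below (an illustration, not the paper's code) enumerates the six valid $2\times2$ BAS patterns and evaluates $D_{\mathrm{KL}}$ of the uniform target against a hypothetical model distribution, standing in for measurement frequencies from a device.

```python
import math
from itertools import product

def bas_2x2():
    """Enumerate 2x2 Bars and Stripes patterns: every row uniform (bars)
    or every column uniform (stripes), flattened row-major to 4 bits."""
    patterns = set()
    for b0, b1 in product([0, 1], repeat=2):
        patterns.add((b0, b0, b1, b1))  # bars: rows b0 and b1
        patterns.add((b0, b1, b0, b1))  # stripes: columns b0 and b1
    return sorted(patterns)

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) over bitstrings; eps guards log(0) for states q never samples."""
    return sum(p[s] * math.log(p[s] / max(q.get(s, 0.0), eps))
               for s in p if p[s] > 0)

# Target: uniform over the 6 valid BAS patterns (6 of the 16 4-bit states).
valid = bas_2x2()
target = {s: 1.0 / len(valid) for s in valid}

# Hypothetical model: uniform over all 16 states, mimicking an untrained circuit.
model = {s: 1.0 / 16 for s in product([0, 1], repeat=4)}

print(len(valid))                                  # 6
print(round(kl_divergence(target, model), 4))      # 0.9808, i.e. ln(16/6)
```

As the generative model concentrates probability on the six valid patterns, the divergence approaches zero; a uniform sampler scores $\ln(16/6) \approx 0.98$.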