
Transformer Wave Function for two dimensional frustrated magnets: emergence of a Spin-Liquid Phase in the Shastry-Sutherland Model

Published 28 Nov 2023 in cond-mat.str-el and cond-mat.dis-nn (arXiv:2311.16889v4)

Abstract: Understanding quantum magnetism in two-dimensional systems is a lively branch of modern condensed-matter physics. In the presence of competing super-exchange couplings, magnetic order is frustrated and can be suppressed down to zero temperature. Still, capturing the correct nature of the exact ground state is a highly complicated task, since energy gaps in the spectrum may be very small and states with different physical properties may have competing energies. Here, we introduce a variational Ansatz for two-dimensional frustrated magnets that leverages the power of representation learning. The key idea is to use a deep neural network with real-valued parameters, a so-called Transformer, to map physical spin configurations into a high-dimensional feature space. Within this abstract space, determining the ground-state properties is simplified and requires only a shallow output layer with complex-valued parameters. We illustrate the efficacy of this variational Ansatz by studying the ground-state phase diagram of the Shastry-Sutherland model, which captures the low-temperature behavior of SrCu$_2$(BO$_3$)$_2$ and its intriguing properties. With highly accurate numerical simulations, we provide strong evidence for the stabilization of a spin-liquid phase between the plaquette and antiferromagnetic phases. In addition, a direct calculation of the triplet excitation at the $\Gamma$ point provides compelling evidence that this spin liquid is gapless. Our findings underscore the potential of Neural-Network Quantum States as a valuable tool for probing uncharted phases of matter and open up new possibilities for establishing the properties of many-body systems.
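The architecture described in the abstract — a real-valued Transformer-style encoder that maps spin configurations into a feature space, followed by a shallow complex-valued output layer returning the log-amplitude of the wave function — can be sketched minimally as follows. This is an illustrative toy, not the paper's actual implementation: the dimensions, the single attention block, and all weight names (`W_embed`, `W_q`, `W_k`, `W_v`, `w_out`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N spins treated as "tokens", d-dimensional features.
N, d = 16, 8
# Real-valued parameters of the encoder (one attention block for brevity).
W_embed = rng.normal(scale=0.1, size=(1, d))   # embed each spin value
W_q = rng.normal(scale=0.1, size=(d, d))
W_k = rng.normal(scale=0.1, size=(d, d))
W_v = rng.normal(scale=0.1, size=(d, d))
# Shallow output layer with complex-valued parameters (log-amplitude head).
w_out = rng.normal(size=(d,)) + 1j * rng.normal(size=(d,))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def log_psi(sigma):
    """Return log psi(sigma) for a configuration of +/-1 spins."""
    x = sigma[:, None] * W_embed                # (N, d) real embeddings
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    attn = softmax(q @ k.T / np.sqrt(d))        # (N, N) attention weights
    h = attn @ v                                # real-valued features
    return h.mean(axis=0) @ w_out               # complex scalar log-amplitude

sigma = rng.choice([-1.0, 1.0], size=N)
print(log_psi(sigma))
```

In a variational Monte Carlo setting, `log_psi` would be evaluated on sampled configurations and its parameters optimized to minimize the energy; splitting the network into a deep real-valued encoder and a shallow complex head is the structural point the abstract emphasizes.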

