
GPUTB-2: An efficient E(3) network method for learning high-precision orthogonal Hamiltonians

Published 20 Jan 2026 in cond-mat.mtrl-sci | (2601.13656v1)

Abstract: Although equivariant neural networks have become a cornerstone for learning electronic Hamiltonians, the intrinsic non-orthogonality of linear combinations of atomic orbitals (LCAO) basis sets poses a fundamental challenge. The computational cost of Hamiltonian orthogonalization scales as O(N³), which severely hinders electronic structure calculations for large-scale systems containing hundreds of thousands to millions of atoms. To address this issue, we develop GPUTB-2, a framework that learns implicitly orthogonality-preserving Hamiltonians by training directly on electronic band structures. Benefiting from an E(3)-equivariant network accelerated by Gaunt tensor product and SO(2) tensor product layers, GPUTB-2 achieves significantly higher accuracy than GPUTB across multiple benchmark systems. Moreover, GPUTB-2 accurately predicts large-scale electronic structures, including transport properties of temperature-perturbed SnSe and the band structures of magic-angle twisted bilayer graphene. By further integrating this framework with the linear-scaling quantum transport (LSQT) method, we investigate the electronic properties of million-atom amorphous graphene and uncover pressure-induced electronic structure transitions in more complex amorphous silicon. Together, these results establish GPUTB-2 as a high-accuracy and scalable approach for predicting orthogonal Hamiltonians.
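To illustrate the O(N³) bottleneck the abstract refers to: for a non-orthogonal LCAO basis with overlap matrix S, the standard route to an orthogonal Hamiltonian is Löwdin symmetric orthogonalization, H_orth = S^(-1/2) H S^(-1/2), whose eigendecomposition and matrix products all scale cubically with the number of basis functions. The sketch below (with small random stand-in matrices, not data from the paper) shows this conventional step that GPUTB-2 sidesteps by learning the orthogonal Hamiltonian directly from band structures:

```python
import numpy as np

def lowdin_orthogonalize(H, S):
    """Löwdin symmetric orthogonalization: H_orth = S^{-1/2} H S^{-1/2}.

    The eigendecomposition of S and the two matrix products each cost
    O(N^3) for an N x N basis -- the scaling bottleneck that GPUTB-2
    avoids by predicting the orthogonal Hamiltonian directly.
    """
    eigvals, eigvecs = np.linalg.eigh(S)          # O(N^3) diagonalization
    S_inv_sqrt = eigvecs @ np.diag(eigvals**-0.5) @ eigvecs.T
    return S_inv_sqrt @ H @ S_inv_sqrt            # O(N^3) matrix products

# Toy example: random symmetric H, well-conditioned positive-definite S
rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
H = (A + A.T) / 2
B = rng.standard_normal((N, N))
S = B @ B.T + N * np.eye(N)

H_orth = lowdin_orthogonalize(H, S)
```

The eigenvalues of H_orth coincide with the generalized eigenvalues of the pencil (H, S), so the band structure is preserved while the overlap matrix is eliminated from the eigenproblem.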
