Fine-Tuning Unifies Foundational Machine-learned Interatomic Potential Architectures at ab initio Accuracy

Published 7 Nov 2025 in physics.chem-ph and physics.comp-ph (arXiv:2511.05337v1)

Abstract: This work demonstrates that fine-tuning transforms foundational machine-learned interatomic potentials (MLIPs) to achieve consistent, near-ab initio accuracy across diverse architectures. Benchmarking five leading MLIP frameworks (MACE, GRACE, SevenNet, MatterSim, and ORB) across seven chemically diverse compounds reveals that fine-tuning universally enhances force predictions by factors of 5-15 and improves energy accuracy by 2-4 orders of magnitude. The investigated models span both equivariant and invariant, as well as conservative and non-conservative, architectures. While general-purpose foundation models are robust, they exhibit architecture-dependent deviations from ab initio reference data; fine-tuning eliminates these discrepancies, enabling quantitatively accurate predictions of atomistic and structural properties. Using datasets constructed from equidistantly sampled frames of short ab initio molecular dynamics trajectories, fine-tuning reduces force errors by an order of magnitude and harmonizes performance across all architectures. These findings establish fine-tuning as a universal route to achieving system-specific predictive accuracy while preserving the computational efficiency of MLIPs. To promote widespread adoption, we introduce the aMACEing Toolkit, which provides a unified and reproducible interface for fine-tuning workflows across multiple MLIP frameworks.
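The abstract describes two concrete ingredients of the fine-tuning workflow: datasets built from equidistantly sampled frames of short ab initio MD trajectories, and force-error metrics used to compare models before and after fine-tuning. A minimal sketch of both steps is given below; the function names and data layouts are illustrative assumptions, not the paper's or the aMACEing Toolkit's actual API.

```python
import math

def equidistant_frames(n_frames: int, n_samples: int) -> list[int]:
    """Indices of n_samples frames spaced evenly across a trajectory
    of n_frames steps (the sampling scheme named in the abstract)."""
    if n_samples >= n_frames:
        return list(range(n_frames))
    stride = n_frames / n_samples
    return [int(i * stride) for i in range(n_samples)]

def force_rmse(f_ref: list[list[float]], f_pred: list[list[float]]) -> float:
    """Root-mean-square error over all Cartesian force components
    (e.g. in eV/Angstrom), comparing predicted to ab initio forces."""
    sq = [(a - b) ** 2
          for ref_atom, pred_atom in zip(f_ref, f_pred)
          for a, b in zip(ref_atom, pred_atom)]
    return math.sqrt(sum(sq) / len(sq))

# Pick 10 evenly spaced frames from a 1000-step AIMD trajectory:
frames = equidistant_frames(1000, 10)   # [0, 100, 200, ..., 900]
```

With RMSEs computed this way for each architecture, the reported "factor of 5-15" improvement is simply the ratio of the foundation model's force RMSE to the fine-tuned model's on the same held-out frames.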
