FluidFormer: Transformer with Continuous Convolution for Particle-based Fluid Simulation

Published 3 Aug 2025 in cs.CE, cs.GR, cs.LG, and physics.flu-dyn (arXiv:2508.01537v1)

Abstract: Learning-based fluid simulation networks have proven to be viable alternatives to traditional numerical solvers for the Navier-Stokes equations. Existing neural methods follow Smoothed Particle Hydrodynamics (SPH) frameworks, which inherently rely only on local inter-particle interactions. However, we emphasize that global context integration is also essential for learning-based methods to stabilize complex fluid simulations. We propose the first Fluid Attention Block (FAB) with a local-global hierarchy, where continuous convolutions extract local features while self-attention captures global dependencies. This fusion suppresses error accumulation and models long-range physical phenomena. Furthermore, we pioneer the first Transformer architecture specifically designed for continuous fluid simulation, seamlessly integrated within a dual-pipeline architecture. Our method establishes a new paradigm for neural fluid simulation by unifying convolution-based local features with attention-based global context modeling. FluidFormer demonstrates state-of-the-art performance, with stronger stability in complex fluid scenarios.
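
The abstract outlines the core mechanism: a block that fuses a continuous-convolution branch over each particle's local neighborhood with a self-attention branch over all particles. Below is a minimal PyTorch sketch of such a local-global block. The k-nearest-neighbor gathering, the MLP kernel function, the concatenation-based fusion, and all layer sizes are illustrative assumptions, not the authors' implementation, which the abstract does not specify.

```python
# Minimal sketch of a local-global fluid attention block, assuming
# k-NN neighborhoods, an MLP kernel, and concatenation fusion.
import torch
import torch.nn as nn


class ContinuousConv(nn.Module):
    """Local branch: kernel weights are a continuous function (here an
    MLP) of the relative position between a particle and its neighbors."""

    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # Maps a 3-D relative offset to a per-channel kernel weight.
        self.kernel = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, dim)
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, pos: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # pos: (N, 3) particle positions, feat: (N, dim) particle features.
        dist = torch.cdist(pos, pos)                    # (N, N) pair distances
        idx = dist.topk(self.k, largest=False).indices  # (N, k) neighbors (self included)
        rel = pos[idx] - pos.unsqueeze(1)               # (N, k, 3) relative offsets
        w = self.kernel(rel)                            # (N, k, dim) kernel weights
        agg = (w * feat[idx]).mean(dim=1)               # (N, dim) weighted aggregation
        return self.proj(agg)


class FluidAttentionBlock(nn.Module):
    """Local-global hierarchy: continuous convolution for local features,
    self-attention for global dependencies (fusion rule is an assumption)."""

    def __init__(self, dim: int = 64, heads: int = 4, k: int = 16):
        super().__init__()
        self.local = ContinuousConv(dim, k)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, pos: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        local = self.local(pos, self.norm1(feat))       # (N, dim) local branch
        h = self.norm2(feat).unsqueeze(0)               # (1, N, dim) for attention
        glob, _ = self.attn(h, h, h)                    # global dependencies
        glob = glob.squeeze(0)                          # (N, dim)
        # Residual fusion of the two branches.
        return feat + self.fuse(torch.cat([local, glob], dim=-1))


if __name__ == "__main__":
    n, dim = 1024, 64
    block = FluidAttentionBlock(dim=dim)
    pos = torch.rand(n, 3)         # toy particle positions
    feat = torch.randn(n, dim)     # toy particle features
    print(block(pos, feat).shape)  # torch.Size([1024, 64])
```

Because the block keeps a residual stream in feat, several such blocks can be stacked into a Transformer-style pipeline; note that the attention branch scales quadratically with particle count, the usual cost of global context.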
