
Efficient Seismic Data Interpolation via Sparse Attention Transformer and Diffusion Model

Published 9 Jun 2025 in physics.geo-ph (arXiv:2506.07923v1)

Abstract: Seismic data interpolation is a critical pre-processing step for improving seismic imaging quality and remains a focus of academic innovation. To address the computational inefficiencies caused by extensive iterative resampling in current plug-and-play diffusion interpolation methods, we propose the diffusion-enhanced sparse attention transformer (Diff-spaformer), a novel deep learning framework. Our model integrates transformer architectures and diffusion models via a Seismic Prior Extraction Network (SPEN), which serves as a bridge module. Full-layer sparse multi-head attention and feed-forward propagation capture global information distributions, while the diffusion model provides robust prior guidance. To mitigate the computational burden of high-dimensional representations, self-attention is computed along the channel rather than the spatial dimension. We show that using negative squared Euclidean distance to compute sparse affinity matrices better suits seismic data modeling, enabling broader contribution from amplitude feature nodes. An adaptive ReLU function further discards low or irrelevant self-attention values. We conduct training within a single-stage optimization framework, requiring only a few reverse diffusion sampling steps during inference. Extensive experiments demonstrate improved interpolation fidelity and computational efficiency for both random and continuous missing data, offering a new paradigm for high-efficiency seismic data reconstruction under complex geological conditions.
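The abstract's core efficiency idea can be illustrated with a small sketch: attention is computed across channels (a C×C affinity matrix) rather than across spatial positions, the affinity is the negative squared Euclidean distance between channel features, and a ReLU with a data-dependent threshold zeroes out weak attention values. This is a minimal, hypothetical NumPy illustration of those three ingredients, not the paper's implementation; the learned query/key/value projections, multi-head splitting, and the exact adaptive threshold are omitted or simplified.

```python
import numpy as np

def channel_sparse_attention(x: np.ndarray) -> np.ndarray:
    """Illustrative channel-wise sparse attention (single head).

    x: feature map flattened to shape (C, N), where C is the number of
    channels and N = H * W. The affinity matrix is C x C, so its cost
    is independent of spatial resolution -- the efficiency argument
    made in the abstract.
    """
    # A real model would apply learned Q/K/V projections here.
    q = k = v = x

    # Negative squared Euclidean distance as the affinity measure:
    # -||q_i - k_j||^2 = -(||q_i||^2 + ||k_j||^2 - 2 q_i . k_j)
    sq_norms = np.sum(q ** 2, axis=1)
    affinity = -(sq_norms[:, None] + sq_norms[None, :] - 2.0 * q @ k.T)

    # Adaptive ReLU sparsification (simplified): shift each row by its
    # mean and clip, discarding low or irrelevant attention values.
    thresh = affinity.mean(axis=1, keepdims=True)
    sparse = np.maximum(affinity - thresh, 0.0)

    # Normalize surviving affinities into attention weights.
    weights = sparse / (sparse.sum(axis=1, keepdims=True) + 1e-8)
    return weights @ v
```

Because the C×C affinity replaces the usual N×N spatial attention map, the cost of the attention step scales with the channel count rather than with the number of traces and time samples, which is why this design suits high-dimensional seismic volumes.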
