
SlepNet: Spectral Subgraph Representation Learning for Neural Dynamics

Published 19 Jun 2025 in cs.LG (arXiv:2506.16602v1)

Abstract: Graph neural networks have been useful in machine learning on graph-structured data, particularly for node classification and some types of graph classification tasks. However, they have had limited use in representing patterning of signals over graphs. Patterning of signals over graphs and in subgraphs carries important information in many domains including neuroscience. Neural signals are spatiotemporally patterned, high dimensional and difficult to decode. Graph signal processing and associated GCN models utilize the graph Fourier transform and are unable to efficiently represent spatially or spectrally localized signal patterning on graphs. Wavelet transforms have shown promise here, but offer non-canonical representations and cannot be tightly confined to subgraphs. Here we propose SlepNet, a novel GCN architecture that uses Slepian bases rather than graph Fourier harmonics. In SlepNet, the Slepian harmonics optimally concentrate signal energy on specifically relevant subgraphs that are automatically learned with a mask. Thus, they can produce canonical and highly resolved representations of neural activity, focusing the energy of the harmonics on areas of the brain which are activated. We evaluated SlepNet across three fMRI datasets, spanning cognitive and visual tasks, and two traffic dynamics datasets, comparing its performance against conventional GNNs and graph signal processing constructs. SlepNet outperforms the baselines in all datasets. Moreover, the extracted representations of signal patterns from SlepNet offer more resolution in distinguishing between similar patterns, and thus represent brain signaling transients as informative trajectories. We have shown that these extracted trajectory representations can be used for other downstream untrained tasks. Thus we establish that SlepNet is useful both for prediction and representation learning in spatiotemporal data.

Summary

  • The paper introduces a novel framework replacing global Fourier bases with Slepian harmonics to capture localized neural dynamics on graphs.
  • It employs an attention-based subgraph mask and differentiable eigendecomposition to focus on active regions and ensure robust learning.
  • Empirical results demonstrate significant improvements in classification accuracy and interpretability over traditional spectral and spatial GNN methods.
SlepNet introduces a novel approach to modeling spatiotemporal signals on graphs by replacing traditional graph Fourier harmonics with Slepian harmonics as the foundation for spectral filtering in graph neural networks. This architecture addresses the longstanding limitations of classical GCNs and graph signal processing methods, which typically rely on global Fourier bases ill-suited to capturing signals that are localized or transient in specific subgraphs, a ubiquitous feature in applications such as neuroscience.

Motivation and Theoretical Grounding

Conventional spectral GNNs are limited by the non-localized nature of their Fourier harmonic bases, complicating the representation of neural signals localized to specific anatomical regions. While graph wavelets offer better spatial localization, they result in non-canonical, and often leaky, representations susceptible to information spillover at subgraph boundaries. Slepian harmonics, by contrast, are optimally concentrated within a specified subgraph and simultaneously bandlimited in the graph spectral domain, yielding robust and interpretable bases for localized signal representation.

In SlepNet, the learning framework is explicitly constructed to automatically discover relevant subgraphs (e.g., functionally active brain regions) and compute Slepian harmonics focused on those subgraphs. The resulting representation is both spatially and spectrally concentrated, enabling fine-grained modeling of local neural dynamics and facilitating downstream interpretability.
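The classical energy-concentration construction of a graph Slepian basis can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the path graph, binary mask, and function names are assumptions, and the paper additionally supports a modified embedded-distance criterion not shown here.

```python
import numpy as np

def slepian_basis(L, mask, bandwidth):
    """Graph Slepians via the classical energy-concentration criterion.

    L         : (N, N) graph Laplacian (symmetric)
    mask      : (N,) indicator (hard or soft) of the target subgraph
    bandwidth : number K of graph Fourier modes to retain
    """
    # Graph Fourier basis: Laplacian eigenvectors in ascending frequency.
    _, U = np.linalg.eigh(L)
    Uk = U[:, :bandwidth]                       # bandlimiting step
    # Concentration matrix C = Uk^T diag(mask) Uk; its top eigenvectors give
    # the spectral coefficients of the most subgraph-concentrated signals.
    C = Uk.T @ (mask[:, None] * Uk)
    mu, V = np.linalg.eigh(C)
    order = np.argsort(mu)[::-1]                # sort by concentration mu in [0, 1]
    return Uk @ V[:, order], mu[order]

# Toy example: an 8-node path graph, mask on the first 4 nodes.
N = 8
A = np.zeros((N, N)); idx = np.arange(N - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
L = np.diag(A.sum(1)) - A
mask = (np.arange(N) < 4).astype(float)
S, mu = slepian_basis(L, mask, bandwidth=4)
```

Each returned column is orthonormal, and its concentration value `mu` equals the fraction of its energy lying inside the masked subgraph, which is the property SlepNet exploits.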

Architectural Innovations

SlepNet comprises two principal modules:

  1. Attention-based Subgraph Mask Learning
    • An attention mechanism infers a soft mask over graph nodes, identifying candidate subgraphs most relevant to the task. This attention operates at the node-cluster level via spectral clustering, improving both interpretability and regularization. The mask is adaptive to input features, supporting dynamic focus in temporal (e.g., fMRI) datasets.
  2. Slepian-based Spectral Filtering
    • With the identified subgraph, SlepNet computes Slepian harmonics as an orthonormal basis for localized spectral filtering, using either the classical (energy concentration) or modified embedded distance criteria. The formation of these harmonics involves solving a constrained eigenproblem—this is executed with differentiable eigendecomposition during backpropagation, and for large graphs, efficiently approximated by neural eigenmapping.
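A minimal numpy sketch of the first module, a cluster-level soft mask, is shown below. The tanh scoring, mean pooling, and all names are illustrative assumptions; in SlepNet the attention weights are learned end-to-end rather than given.

```python
import numpy as np

def soft_subgraph_mask(X, a, clusters):
    """Soft node mask from a single attention score per spectral cluster.

    X        : (N, F) node features
    a        : (F,) attention weight vector (learned in practice; here given)
    clusters : (N,) integer cluster label per node, assumed to be 0..C-1
                (e.g. obtained from spectral clustering)
    """
    scores = np.tanh(X @ a)                     # per-node attention logits
    # Pool logits within each cluster so the mask is cluster-coherent.
    pooled = np.array([scores[clusters == c].mean()
                       for c in np.unique(clusters)])
    m = 1.0 / (1.0 + np.exp(-pooled))           # sigmoid -> values in (0, 1)
    return m[clusters]                          # broadcast back to the nodes

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
a = rng.normal(size=3)
clusters = np.array([0, 0, 1, 1])
m = soft_subgraph_mask(X, a, clusters)
```

Because the mask is soft and differentiable in `X` and `a`, it can feed directly into the Slepian concentration matrix of the second module and be trained by backpropagation.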

The network iteratively applies Slepian-filtered layers with nonlinearities, culminating in a task-specific prediction head. The architecture admits both binary and multi-class classification, as well as the extraction of intermediate, temporally-resolved graph embeddings.
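One such layer can be sketched as analysis in the Slepian domain, a diagonal spectral filter, synthesis, and a channel-mixing nonlinearity. The diagonal parameterization and names are assumptions for illustration, not the paper's exact layer.

```python
import numpy as np

def slepian_layer(X, S, theta, W):
    """One Slepian-filtered graph layer (illustrative sketch).

    X     : (N, F_in) node signals
    S     : (N, J) Slepian basis with orthonormal columns
    theta : (J,) learnable spectral filter coefficients
    W     : (F_in, F_out) learnable channel-mixing weights
    """
    coeffs = S.T @ X                           # analysis: project onto Slepians
    filtered = S @ (theta[:, None] * coeffs)   # scale each harmonic, synthesize
    return np.maximum(filtered @ W, 0.0)       # channel mix + ReLU nonlinearity

rng = np.random.default_rng(1)
N, F_in, F_out = 4, 3, 2
S, _ = np.linalg.qr(rng.normal(size=(N, N)))   # full orthonormal basis
X = rng.normal(size=(N, F_in))
W = rng.normal(size=(F_in, F_out))
H = slepian_layer(X, S, np.ones(N), W)
```

With a full basis and `theta` set to all ones the spectral filter is the identity, so the layer reduces to `relu(X @ W)`; restricting to a few concentrated Slepians is what confines the filtering to the learned subgraph.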

Implementation Considerations

  • Computational Efficiency: Exact Slepian construction via eigendecomposition scales poorly with graph size (O(N^3) for N nodes). SlepNet leverages neural eigenmapping, where a neural network is trained to map node identifiers to eigenvector coordinates, enabling fast out-of-sample extension and significant runtime savings (as demonstrated empirically).
  • Differentiable Eigendecomposition: To maintain end-to-end trainability, gradients of eigenvectors are computed using established methods for symmetric matrices. Perturbation regularization is applied to ensure numerical stability in the presence of closely spaced eigenvalues.
  • Interpretability: The learned subgraph masks are directly interpretable and can be projected onto anatomical structures (e.g., cortical atlases in fMRI studies); in synthetic data experiments, learned masks align exactly with ground-truth subgraphs.
  • Hyperparameter Sensitivity: Performance improves with increasing numbers of Slepian vectors up to several hundred, indicating that expanding spectral bandwidth enhances discriminative power. Clustering granularity and mask regularization also influence interpretability and classification performance.
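The differentiable eigendecomposition rests on the standard first-order perturbation formulas for a symmetric matrix with distinct eigenvalues. The sketch below (function name and setup are illustrative, not the paper's code) computes the directional derivatives of the eigenpairs and can be checked against the differentiated eigenvalue equation E u + A du = dλ u + λ du:

```python
import numpy as np

def eig_directional_derivs(A, E):
    """First-order derivatives of the eigenpairs of symmetric A along direction E.

    Assumes distinct eigenvalues (the paper regularizes near-degenerate spectra).
    Returns (dlam, dU) for A(t) = A + t * E evaluated at t = 0.
    """
    lam, U = np.linalg.eigh(A)
    G = U.T @ E @ U                     # perturbation in the eigenbasis
    dlam = np.diag(G).copy()            # dlam_i = u_i^T E u_i
    # du_i = sum_{j != i} (u_j^T E u_i) / (lam_i - lam_j) * u_j
    denom = lam[None, :] - lam[:, None]
    np.fill_diagonal(denom, np.inf)     # zero out the i = j term
    dU = U @ (G / denom)
    return dlam, dU

rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5)); A = (B + B.T) / 2
C = rng.normal(size=(5, 5)); E = (C + C.T) / 2
dlam, dU = eig_directional_derivs(A, E)
lam, U = np.linalg.eigh(A)
```

The 1/(lam_i - lam_j) factors make plain why closely spaced eigenvalues blow up the gradients, motivating the perturbation regularization mentioned above.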

Empirical Results

The architecture is evaluated across several domains:

  • Neuroimaging: On fMRI datasets for psychiatric classification (OCD, ASD), SlepNet achieves substantially higher accuracy than Spectral GCN, GCN, GAT, GIN, and GraphSAGE. For instance, on OCD datasets, SlepNet-I attains 84.7–90.7% accuracy, outperforming all baselines by wide margins.
  • Synthetic and Real-world Graphs: In synthetic datasets designed for subgraph signal localization, SlepNet recovers subgraph structure with 100% accuracy. On traffic sensor datasets, SlepNet-II is competitive with baselines, achieving best or second-best performance.
  • Trajectory-level Representation: SlepNet embeddings, when visualized with methods such as T-PHATE, yield temporally coherent, highly curved trajectories capturing latent neural state transitions, in contrast to the smoother, less informative representations from classical GCN embeddings or direct dimensionality reduction.
  • Downstream Utility: SlepNet embeddings enable informative downstream tasks (e.g., predicting subject sex from neural trajectories), demonstrating their general expressive power.

Claims and Ablations

  • SlepNet outperforms all tested spectral and spatial GNNs and graph wavelet-based methods in both primary and downstream classification tasks on temporal graph data.
  • The model produces representations of neural dynamics with higher curvature—indicative of richer, more detailed encoding of rapid state transitions—than alternatives.
  • In ablation studies, increasing the number of Slepian vectors monotonically improves primary task classification, underscoring the advantage of higher-resolution spectral representations.

Limitations and Future Work

  • Band Selectivity: Subgraph mask learning is fully adaptive, but the spectral bandwidth (number of Slepian vectors) is currently fixed; making this selection end-to-end learnable could further enhance adaptivity.
  • Dynamics Modeling: While SlepNet produces rich representations of temporal trajectories, explicit generative or predictive models of underlying dynamics are not incorporated.
  • Scalability: Although neural eigenmapping improves scalability, extremely large graphs (tens of thousands of nodes) may still challenge memory or computational constraints during training.

Broader Implications and Outlook

SlepNet represents a significant advance in localized signal representation and interpretability for graph-structured temporal data, with immediate applications in neuroscience (e.g., identifying and characterizing brain regions relevant to psychiatric conditions via fMRI time series) and other domains involving dynamic processes on networks (e.g., traffic, sensor networks). The subgraph-selective and spectrally precise representations could inform not only predictive pipelines but also scientific understanding of distributed neural computation.

Future research could integrate end-to-end bandwidth learning, parametric dynamical models using Slepian-encoded representations, and extensions to multi-modal or heterogeneous graphs. The interpretability of learned masks positions SlepNet as a candidate for explainable AI in clinical and scientific applications. Integrating causal discovery frameworks and generative modeling with Slepian-based architectures may yield further insights into neural dynamics and other complex systems.
