
Directional Tensor Propagation in GNNs

Updated 12 January 2026
  • Directional tensor propagation is a method that embeds directional vectors and tensorial features into GNNs to ensure equivariant, symmetry-aware message passing.
  • It integrates geometric descriptors directly into edge-aware attention mechanisms to improve feature aggregation in applications like atomic and protein modeling.
  • Empirical results show that incorporating directional tensors enhances prediction accuracy and interpretability across materials science, biomolecular mapping, and code analysis.

Directional tensor propagation refers to the explicit modeling and transmission of directional information—encoded as vectors, tensors, or geometric descriptors—within edge-aware graph neural networks (GNNs) employing attention mechanisms. Recent edge-aware attention frameworks systematically integrate directional or tensorial features into both attention scoring and feature aggregation steps, enabling equivariant propagation of features that respect rotation, translation, and local structural geometry. Directional tensor propagation thus augments scalar node and edge features with geometric context, enhancing representation power and symmetry compliance in applications ranging from atomic structure prediction (Mangalassery et al., 8 Dec 2025) and biomolecular interface mapping (Yang et al., 5 Jan 2026) to symbolic mathematical parsing and code graph analysis.

1. Theoretical Foundations

Modern edge-aware GNN architectures generalize classical message-passing networks by incorporating both scalar and tensorial edge descriptors into the propagation rule. Let $\mathbf{h}_i^{(l)}$ denote node features and $\mathbf{e}_{ij}$ edge features (potentially multidimensional, containing direction vectors or displacement tensors). Directional information is encoded using local displacement vectors $\mathbf{d}_{ij}=\mathbf{r}_j-\mathbf{r}_i$, normalized direction vectors $\hat{\mathbf{r}}_{ij}$, or higher-order tensor features as in (Mangalassery et al., 8 Dec 2025) (materials) and (Yang et al., 5 Jan 2026) (proteins).

Directional tensor propagation operates within the attention mechanism, where score coefficients and aggregated messages explicitly combine node, edge, and geometric features:

$$e_{ij} = \sigma\!\left(a^\top \left[W\mathbf{h}_i \,\|\, W\mathbf{h}_j \,\|\, \mathbf{e}_{ij}^{\,\text{new}}\right]\right)$$

The message-passing update then aggregates directional information:

$$\mathbf{h}_i^{\text{new}} = \text{Dropout}\!\left(\text{ReLU}\!\left(\text{BatchNorm}(\mathbf{u}_i)\right)\right)$$

where $\mathbf{u}_i$ typically sums attention-weighted neighbor contributions, including geometrically updated edge features.
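The scoring and update rule above can be sketched as a single-head layer in NumPy. This is a minimal illustration, not the papers' reference implementation: dropout and batch normalization are omitted (an inference-time view), the softmax normalization over incoming edges and the leaky-ReLU slope are standard GAT conventions assumed here, and all names are illustrative.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def edge_aware_attention_layer(h, edge_index, e, W, a):
    """One edge-aware attention layer (single head, no dropout/batchnorm).

    h          : (N, F)  node features
    edge_index : (E, 2)  directed edges (i, j), message j -> i
    e          : (E, Fe) edge features (may contain direction vectors)
    W          : (F, F') node projection
    a          : (2*F' + Fe,) attention vector
    """
    N = h.shape[0]
    Wh = h @ W                                    # project node features
    scores = np.empty(len(edge_index))
    for k, (i, j) in enumerate(edge_index):
        z = np.concatenate([Wh[i], Wh[j], e[k]])  # [W h_i || W h_j || e_ij]
        scores[k] = leaky_relu(a @ z)
    # softmax over the incoming edges of each node i
    alpha = np.zeros_like(scores)
    for i in range(N):
        mask = edge_index[:, 0] == i
        if mask.any():
            s = np.exp(scores[mask] - scores[mask].max())
            alpha[mask] = s / s.sum()
    # u_i: attention-weighted sum of neighbor contributions
    u = np.zeros_like(Wh)
    for k, (i, j) in enumerate(edge_index):
        u[i] += alpha[k] * Wh[j]
    return np.maximum(u, 0.0), alpha              # ReLU(u_i), weights
```

In a multi-head variant the same computation runs with independent `W` and `a` per head, and the per-head outputs are concatenated or averaged.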

This framework ensures that directional (vector or tensor) features propagate equivariantly under rotations, and scalar features remain invariant, guaranteeing physical and geometric symmetry compliance essential for scientific applications (Mangalassery et al., 8 Dec 2025).

2. Edge-Aware Attention and Tensorial Features

Edge-aware attention mechanisms extend GAT-style layers by incorporating multidimensional edge features—often including directional vectors, local gradients, bond angles, and displacement tensors—directly into the attention score computation and message aggregation (Mangalassery et al., 8 Dec 2025; Chen et al., 2021; Yang et al., 5 Jan 2026). For instance:

  • In (Mangalassery et al., 8 Dec 2025), directional vectors $\mathbf{d}_{ij}\in\mathbb{R}^{3}$ and normalized vectors $\hat{\mathbf{r}}_{ij}$ are concatenated with scalar edge and node descriptors.
  • In protein binding site prediction, directional propagation is realized by maintaining an auxiliary tensor per node, sequentially updated by attention-weighted sums over directional edge vectors (Yang et al., 5 Jan 2026): $$p_i^{(l+1)} = \sum_{j\in\mathcal{N}(i)} \alpha_{ij}\,d_{ij}$$ where $d_{ij}$ encodes both direction and magnitude.

Such propagation enables models to capture not only "who" interacts, but "how" (direction, orientation, geometry) those interactions occur.
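The auxiliary per-node update $p_i^{(l+1)} = \sum_{j\in\mathcal{N}(i)} \alpha_{ij}\,d_{ij}$ translates directly into code. The sketch below assumes the attention weights $\alpha$ are already computed (e.g., by an attention layer) and uses raw displacements $d_{ij} = r_j - r_i$, so both direction and magnitude are retained; the function name is illustrative.

```python
import numpy as np

def propagate_direction(pos, edge_index, alpha):
    """Attention-weighted directional update: p_i = sum_j alpha_ij * d_ij.

    pos        : (N, 3) node coordinates r_i
    edge_index : (E, 2) directed edges (i, j), message j -> i
    alpha      : (E,)   attention weights alpha_ij
    Returns (N, 3) per-node directional tensors p_i.
    """
    p = np.zeros_like(pos)
    for k, (i, j) in enumerate(edge_index):
        p[i] += alpha[k] * (pos[j] - pos[i])  # d_ij = r_j - r_i
    return p
```

Because each $d_{ij}$ rotates with the coordinate frame and the $\alpha_{ij}$ are scalars, the resulting $p_i$ is rotation-equivariant by construction.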

3. Symmetry, Invariance, and Equivariance

Directional tensor propagation is critical for enforcing invariance (under translation, rotation) or equivariance (directional information transforms appropriately under geometric operations). Edge descriptors such as $d_{ij}$ (distance), $c_ia_i-c_ja_j$ (elemental difference), and $\bar{\theta}_{ij}$ (bond angle) are invariant to rigid motion, while vectorial features $\mathbf{d}_{ij}$ and $\hat{\mathbf{r}}_{ij}$ rotate equivariantly (Mangalassery et al., 8 Dec 2025). In physics-informed GNNs, this distinction ensures predictions depend only on relative geometry, never on absolute coordinates, yielding models that generalize across orientations and configurations.
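The invariance/equivariance distinction can be verified numerically: rotating all coordinates leaves pairwise distances unchanged, while displacement vectors co-rotate with the frame. A small NumPy check (illustrative, not taken from the cited papers):

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1  # force a proper rotation (det = +1)
    return Q

rng = np.random.default_rng(1)
r = rng.normal(size=(5, 3))                  # positions r_i (rows)
R = random_rotation(rng)

d = r[None, :, :] - r[:, None, :]            # d_ij = r_j - r_i  (equivariant)
dist = np.linalg.norm(d, axis=-1)            # |d_ij|            (invariant)

r_rot = r @ R.T                              # rotated frame: r_i -> R r_i
d_rot = r_rot[None, :, :] - r_rot[:, None, :]

assert np.allclose(d_rot, d @ R.T)                        # vectors co-rotate
assert np.allclose(np.linalg.norm(d_rot, axis=-1), dist)  # distances unchanged
```

The same check applies to any invariant descriptor (angles, elemental differences): its value must be bitwise-stable up to floating-point error under any rigid motion of the input.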

Multi-head tensor propagation further enables the encoding of local spatial patterns and multi-scale directional phenomena, as required in complex domains such as atomic relaxations, biomolecular interfaces, and spatial graph modeling (Mangalassery et al., 8 Dec 2025; Yang et al., 5 Jan 2026).

4. Architectural Manifestations Across Domains

Directional tensor propagation modules appear in diverse edge-aware GNN architectures:

  • Edge-Aware GAT for Materials: Physicochemical and geometric node/edge features are projected and passed through multi-head attention; edge features are updated via MLPs to incorporate directional tensors (Mangalassery et al., 8 Dec 2025).
  • Protein Binding Site Prediction: Atomic embeddings include directional context, propagated layer-wise using tensor updates. Residue-level pooling attentively merges atomic tensors to inform downstream classification (Yang et al., 5 Jan 2026).
  • Handwritten Expression Recognition: Edge-weighted Graph Attention Mechanisms concatenate and pool directional (stroke-relative) vectors for symbol and relation prediction (Xie et al., 2024).
  • Code Analysis: Dual semantic/structural node embeddings and edge-type tensors drive attention in code property graphs, supporting explicit distinction among program relations (Haque et al., 22 Jul 2025).

Architectural choices for directional tensor propagation involve the dimensionality of edge tensors, normalization protocols preserving directional information, and parallel aggregation of scalar and equivariant channels.
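One common way to realize the parallel aggregation of scalar and equivariant channels is to update vector features only through multiplication by rotation-invariant gates, in the style of PaiNN-type networks. The sketch below is a hypothetical illustration of that design choice, not the specific update rule of the cited architectures; all names and shapes are assumptions.

```python
import numpy as np

def gated_update(s, v, W1, W2):
    """Parallel scalar/equivariant channel update (illustrative sketch).

    s : (N, C)     invariant scalar features
    v : (N, C, 3)  equivariant vector features
    W1: (2C, C)    mixes scalars and vector norms -> new scalars
    W2: (C, C)     produces per-channel gates from the new scalars
    """
    norms = np.linalg.norm(v, axis=-1)                 # (N, C), invariant
    s_new = np.tanh(np.concatenate([s, norms], -1) @ W1)
    gate = 1.0 / (1.0 + np.exp(-(s_new @ W2)))         # sigmoid, invariant
    v_new = v * gate[..., None]   # scaling by an invariant preserves equivariance
    return s_new, v_new
```

Because the gates depend on the input only through rotation-invariant quantities (scalars and vector norms), the scalar channel stays invariant and the vector channel transforms exactly as the input vectors do.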

5. Empirical Performance and Physical Interpretation

Integrating directional tensor propagation improves both statistical accuracy and physical interpretability:

  • Atomic Relaxation: Edge-aware GATs achieved an MAE of 0.09 Å and RMSE of 0.17–0.18 Å in position prediction, with near-isotropic directional errors (Mangalassery et al., 8 Dec 2025).
  • Protein Interfaces: ROC-AUC for protein–protein binding prediction reached 0.93, surpassing prior graph- and geometry-based methods, with interpretable visual heatmaps derived from tensor-weighted attention scores (Yang et al., 5 Jan 2026).
  • Handwritten Symbol Parsing: Node and edge accuracies exceed 94% and 97%, respectively, with global graph modeling further boosting expression-level recognition (Xie et al., 2024).
  • Code Vulnerability Detection: Dual-channel directional propagation and edge-type attentive pooling improved F1 by 6.98% and accuracy by 3.9% over prior GNNs (Haque et al., 22 Jul 2025).

By associating directional tensor propagation with physical phenomena (atomic displacement, inter-residue orientation, symbol relations), models achieve not only higher performance but also a closer correspondence between learned representations and the underlying scientific structures.

6. Limitations, Scalability, and Future Directions

Limitations of directional tensor propagation frameworks include restriction to domain-specific feature sets (e.g., only carbides in (Mangalassery et al., 8 Dec 2025)), omission of long-range (e.g., multipole, global pooling) interactions, and challenges in scaling to very large graphs. Most methods scale linearly in node and edge count, with edge-aware attention and tensor updates incurring parameter and memory overhead proportional to tensor dimensionality and attention head count.

Plausible directions for future work include:

  • Broadening chemistry domains (e.g., retraining on oxides, nitrides).
  • Integrating global graph pooling and hybrid convolutional–attentional filters for long-range interaction modeling.
  • Extending tensor propagation and equivariant normalization to arbitrary scientific graphs and high-dimensional relational structures.

Overall, directional tensor propagation in edge-aware GNNs provides a theoretically rigorous and empirically validated mechanism to transmit geometric and physical information along graph edges, respecting symmetry and enhancing representational capacity across computational science, biophysics, symbolic AI, and code analysis domains (Mangalassery et al., 8 Dec 2025; Yang et al., 5 Jan 2026; Xie et al., 2024; Haque et al., 22 Jul 2025).
