Directional Tensor Propagation in GNNs
- Directional tensor propagation is a method that embeds directional vectors and tensorial features into GNNs to ensure equivariant, symmetry-aware message passing.
- It integrates geometric descriptors directly into edge-aware attention mechanisms to improve feature aggregation in applications like atomic and protein modeling.
- Empirical results show that incorporating directional tensors enhances prediction accuracy and interpretability across materials science, biomolecular mapping, and code analysis.
Directional tensor propagation refers to the explicit modeling and transmission of directional information—encoded as vectors, tensors, or geometric descriptors—within edge-aware graph neural networks (GNNs) employing attention mechanisms. Recent edge-aware attention frameworks systematically integrate directional or tensorial features into both attention scoring and feature aggregation steps, enabling equivariant propagation of features that respect rotation, translation, and local structural geometry. Directional tensor propagation thus augments scalar node and edge features with geometric context, enhancing representation power and symmetry compliance in applications ranging from atomic structure prediction (Mangalassery et al., 8 Dec 2025) and biomolecular interface mapping (Yang et al., 5 Jan 2026) to symbolic mathematical parsing and code graph analysis.
1. Theoretical Foundations
Modern edge-aware GNN architectures generalize classical message-passing networks by incorporating both scalar and tensorial edge descriptors into the propagation rule. Let $h_i$ denote node features and $e_{ij}$ edge features (potentially multidimensional, containing direction vectors or displacement tensors). Directional information is encoded using local displacement vectors $\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i$, normalized direction vectors $\hat{\mathbf{r}}_{ij} = \mathbf{r}_{ij} / \lVert \mathbf{r}_{ij} \rVert$, or higher-order tensor features, as in (Mangalassery et al., 8 Dec 2025) (materials) and (Yang et al., 5 Jan 2026) (proteins).
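As a concrete sketch, these directional descriptors can be computed directly from node coordinates. The helper below is illustrative (not an API from the cited works), assuming 3D positions and a simple edge list:

```python
import numpy as np

def edge_direction_features(coords, edges):
    """Compute per-edge displacement vectors r_ij = r_j - r_i, their
    invariant lengths d_ij, and unit directions r_ij / d_ij."""
    src = np.array([i for i, _ in edges])
    dst = np.array([j for _, j in edges])
    disp = coords[dst] - coords[src]        # equivariant: rotates with the frame
    dist = np.linalg.norm(disp, axis=1)     # invariant scalar d_ij
    unit = disp / dist[:, None]             # normalized direction \hat{r}_ij
    return disp, unit, dist

# Three toy "atoms" and two edges out of node 0.
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])
disp, unit, dist = edge_direction_features(coords, [(0, 1), (0, 2)])
```

The invariant scalars (`dist`) can feed attention scoring, while the equivariant arrays (`disp`, `unit`) feed the directional channels.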
Directional tensor propagation operates within the attention mechanism, where score coefficients and aggregated messages explicitly combine node, edge, and geometric features: $\alpha_{ij} = \mathrm{softmax}_{j \in \mathcal{N}(i)}\big(a(h_i, h_j, e_{ij}, \hat{\mathbf{r}}_{ij})\big)$. The message-passing update then aggregates directional information: $h_i' = \phi\big(h_i, \sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, \psi(h_j, e_{ij}, \hat{\mathbf{r}}_{ij})\big)$, where the aggregation typically sums attention-weighted neighbor contributions, including geometrically updated edge features.
This framework ensures that directional (vector or tensor) features propagate equivariantly under rotations, and scalar features remain invariant, guaranteeing physical and geometric symmetry compliance essential for scientific applications (Mangalassery et al., 8 Dec 2025).
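This guarantee can be illustrated with a toy propagation step in which attention scores are built from invariant quantities only (node features and distances). The dot-product scoring function below is a hedged stand-in for a learned MLP, not the layers of the cited papers:

```python
import numpy as np

def directional_message_pass(h, coords, edges, w):
    """Toy directional propagation step: attention uses only invariant
    inputs, so the scalar channel is invariant and the vector channel
    is rotation-equivariant."""
    N = h.shape[0]
    h_out = h.copy()
    v_out = np.zeros((N, 3))                 # equivariant vector channel
    for i in range(N):
        nbrs = [j for a, j in edges if a == i]
        if not nbrs:
            continue
        # Unnormalized scores from invariant features only.
        s = np.array([np.exp(w @ np.concatenate(
                [h[i], h[j], [np.linalg.norm(coords[j] - coords[i])]]))
            for j in nbrs])
        alpha = s / s.sum()                  # softmax over neighbors
        for a_ij, j in zip(alpha, nbrs):
            h_out[i] += a_ij * h[j]          # invariant scalar update
            r = coords[j] - coords[i]
            v_out[i] += a_ij * r / np.linalg.norm(r)  # equivariant update
    return h_out, v_out

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
coords = rng.normal(size=(3, 3))
edges = [(0, 1), (0, 2), (1, 2)]
w = rng.normal(size=9)                       # 2 * 4 node dims + 1 distance
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
h1, v1 = directional_message_pass(h, coords, edges, w)
h2, v2 = directional_message_pass(h, coords @ Q.T, edges, w)
```

Because the attention weights depend only on rotation-invariant inputs, rotating the coordinates leaves `h_out` unchanged and rotates `v_out` by the same matrix (`h1 == h2` and `v1 @ Q.T == v2` up to floating-point error).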
2. Edge-Aware Attention and Tensorial Features
Edge-aware attention mechanisms extend GAT-style layers by incorporating multidimensional edge features—often including directional vectors, local gradients, bond angles, and displacement tensors—directly into the attention score computation and message aggregation (Mangalassery et al., 8 Dec 2025; Chen et al., 2021; Yang et al., 5 Jan 2026). For instance:
- In (Mangalassery et al., 8 Dec 2025), directional vectors $\mathbf{r}_{ij}$ and normalized vectors $\hat{\mathbf{r}}_{ij}$ are concatenated with scalar edge and node descriptors.
- In protein binding site prediction, directional propagation is realized by maintaining an auxiliary tensor $\mathbf{T}_i$ per node, sequentially updated by attention-weighted sums over directional edge vectors (Yang et al., 5 Jan 2026): $\mathbf{T}_i \leftarrow \mathbf{T}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, \mathbf{r}_{ij}$, where $\mathbf{r}_{ij}$ encodes both direction and magnitude.
Such propagation enables models to capture not only "who" interacts, but "how" (direction, orientation, geometry) those interactions occur.
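A minimal sketch of such a per-node auxiliary tensor update, assuming precomputed attention weights (the shapes and names here are hypothetical, not taken from the cited architecture):

```python
import numpy as np

def update_directional_tensor(T, coords, alpha):
    """Accumulate attention-weighted raw displacement vectors into each
    node's directional tensor; the raw r_ij carries both direction and
    magnitude.  alpha maps edge (i, j) -> attention weight."""
    T_new = T.copy()
    for (i, j), a in alpha.items():
        T_new[i] += a * (coords[j] - coords[i])
    return T_new

coords = np.array([[0.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
T0 = np.zeros((3, 3))
alpha = {(0, 1): 0.5, (0, 2): 0.5}
T1 = update_directional_tensor(T0, coords, alpha)
```

After one update, node 0's tensor points toward the attention-weighted mean displacement of its neighbors; nodes with no incoming messages are unchanged.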
3. Symmetry, Invariance, and Equivariance
Directional tensor propagation is critical for enforcing invariance (under translation, rotation) or equivariance (directional information transforms appropriately under geometric operations). Edge descriptors such as $d_{ij}$ (distance), $\Delta Z_{ij}$ (elemental difference), and $\theta_{ijk}$ (bond angle) are invariant to rigid motion, while vectorial features $\mathbf{r}_{ij}$ and $\hat{\mathbf{r}}_{ij}$ rotate equivariantly (Mangalassery et al., 8 Dec 2025). In physics-informed GNNs, this distinction ensures predictions depend only on relative geometry—never on absolute coordinates—yielding models that generalize across orientations and configurations.
Multi-head tensor propagation further enables the encoding of local spatial patterns and multi-scale directional phenomena, as required in complex domains such as atomic relaxations, biomolecular interfaces, and spatial graph modeling (Mangalassery et al., 8 Dec 2025; Yang et al., 5 Jan 2026).
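The invariant/equivariant distinction can be verified numerically: distances and bond angles are unchanged under a random rotation, while displacement vectors transform by that rotation. A self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.normal(size=(4, 3))

# Random rotation matrix via QR decomposition (fix handedness).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
rot = coords @ Q.T

# Invariant descriptors: distance d_01 and bond angle at node 0.
d = np.linalg.norm(coords[1] - coords[0])
d_rot = np.linalg.norm(rot[1] - rot[0])

u, v = coords[1] - coords[0], coords[2] - coords[0]
cos_theta = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
u_r, v_r = rot[1] - rot[0], rot[2] - rot[0]
cos_theta_rot = u_r @ v_r / (np.linalg.norm(u_r) * np.linalg.norm(v_r))

# Equivariant descriptor: displacement r_01 rotates with the frame.
r_01 = coords[1] - coords[0]
r_01_rot = rot[1] - rot[0]
```

Here `d == d_rot` and `cos_theta == cos_theta_rot` (invariance), while `Q @ r_01 == r_01_rot` (equivariance), up to floating-point error.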
4. Architectural Manifestations Across Domains
Directional tensor propagation modules appear in diverse edge-aware GNN architectures:
- Edge-Aware GAT for Materials: Physicochemical and geometric node/edge features are projected and passed through multi-head attention; edge features are updated via MLPs to incorporate directional tensors (Mangalassery et al., 8 Dec 2025).
- Protein Binding Site Prediction: Atomic embeddings include directional context, propagated layer-wise using tensor updates. Residue-level pooling attentively merges atomic tensors to inform downstream classification (Yang et al., 5 Jan 2026).
- Handwritten Expression Recognition: Edge-weighted Graph Attention Mechanisms concatenate and pool directional (stroke-relative) vectors for symbol and relation prediction (Xie et al., 2024).
- Code Analysis: Dual semantic/structural node embeddings and edge-type tensors drive attention in code property graphs, supporting explicit distinction among program relations (Haque et al., 22 Jul 2025).
Architectural choices for directional tensor propagation involve the dimensionality of edge tensors, normalization protocols preserving directional information, and parallel aggregation of scalar and equivariant channels.
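One such direction-preserving normalization can be sketched as an RMS-style rescaling computed from vector norms alone, so that each vector's direction is untouched and the operation commutes with rotations. This is an illustrative convention, not necessarily the normalization used in the cited models:

```python
import numpy as np

def equivariant_rms_norm(V, eps=1e-8):
    """Rescale an equivariant vector channel by a single scalar derived
    from the vector norms; directions are preserved, so the operation
    commutes with any rotation applied to V."""
    norms = np.linalg.norm(V, axis=-1)
    scale = np.sqrt(np.mean(norms**2) + eps)   # scalar from invariants only
    return V / scale

V = np.array([[3.0, 0.0, 0.0],
              [0.0, 4.0, 0.0]])
Vn = equivariant_rms_norm(V)
```

After normalization the per-node vectors have unit root-mean-square norm, but each still points in its original direction.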
5. Empirical Performance and Physical Interpretation
Integrating directional tensor propagation improves both statistical accuracy and physical interpretability:
- Atomic Relaxation: Edge-aware GATs achieved low MAE and RMSE (the latter around $0.17$) in position prediction, with near-isotropic directional errors (Mangalassery et al., 8 Dec 2025).
- Protein Interfaces: ROC-AUC for protein-protein binding prediction reached $0.93$, surpassing prior graph and geometry-based methods, with interpretable visual heatmaps deriving from tensor-weighted attention scores (Yang et al., 5 Jan 2026).
- Handwritten Symbol Parsing: Node and edge classification accuracies are both high, with global graph modeling further boosting expression-level recognition (Xie et al., 2024).
- Code Vulnerability Detection: Dual-channel directional propagation and edge-type attentive pooling improved F1 and accuracy over prior GNNs (Haque et al., 22 Jul 2025).
By associating directional tensor propagation with physical phenomena (atomic displacement, inter-residue orientation, symbol relation), models achieve not only higher performance but also a closer correspondence between learned representations and scientific structures.
6. Limitations, Scalability, and Future Directions
Limitations of directional tensor propagation frameworks include restriction to domain-specific feature sets (e.g., only carbides in (Mangalassery et al., 8 Dec 2025)), omission of long-range (e.g., multipole, global pooling) interactions, and challenges in scaling to very large graphs. Most methods scale linearly in node and edge count, with edge-aware attention and tensor updates incurring parameter and memory overhead proportional to tensor dimensionality and attention head count.
A plausible implication is the need for future work in:
- Broadening chemistry domains (e.g., retraining on oxides, nitrides).
- Integrating global graph pooling and hybrid convolutional–attentional filters for long-range interaction modeling.
- Extending tensor propagation and equivariant normalization to arbitrary scientific graphs and high-dimensional relational structures.
Overall, directional tensor propagation in edge-aware GNNs provides a theoretically rigorous and empirically validated mechanism to transmit geometric and physical information along graph edges, respecting symmetry and enhancing representational capacity across computational science, biophysics, symbolic AI, and code analysis domains (Mangalassery et al., 8 Dec 2025; Yang et al., 5 Jan 2026; Xie et al., 2024; Haque et al., 22 Jul 2025).