
Neighborhood Overlap-Aware Graph Neural Networks

Updated 5 January 2026
  • The paper introduces NOA-GNNs that integrate explicit neighborhood overlap modeling to overcome traditional GNN limitations in distinguishing local subgraph structures.
  • It details methodologies like NEAR, Neo-GNN, NO-GAT, and NO-HGNN that quantify 1-hop and multi-hop overlaps to enhance tasks such as graph classification and link prediction.
  • Empirical evaluations demonstrate significant performance gains, with improvements up to 1.59% in accuracy, validating the benefits of incorporating structural overlap features.

Neighborhood Overlap-aware Graph Neural Networks (NOA-GNNs) are a class of graph representation learning architectures that augment classical message-passing neural networks by explicitly modeling the patterns of overlap between node neighborhoods. These methods extract and utilize the structural information contained in shared, connected, or overlaid neighborhoods to improve expressivity and accuracy, particularly in tasks where latent local subgraph configuration is critical and classic feature aggregation is insufficient.

1. Motivation and Theoretical Foundations

Most traditional GNNs (e.g., Graph Convolutional Networks, Graph Isomorphism Networks) rely on permutation-invariant message passing that aggregates feature information from the 1-hop neighbors of each node. However, these methods treat the neighborhood as a multiset and neglect the edge structure among a node's neighbors, meaning they cannot distinguish between local structures such as cliques, stars, or cycles if their node feature distributions are identical. This limitation constrains the expressivity of the model: it makes GNNs like GIN provably at most as expressive as the 1-dimensional Weisfeiler-Lehman graph isomorphism test, and hence incapable of distinguishing families of non-isomorphic graphs that share the same node features and neighbor multisets but differ in their neighborhood overlap topology (Kim et al., 2019).

Neighborhood overlap-aware techniques address this by modeling either the number of shared neighbors (the classic overlap score), higher-order multi-hop overlaps, or explicit edge patterns among the 1-hop and multi-hop neighbors.

2. Definitions and Overlap Quantification

Neighborhood overlap can be defined at multiple granularities:

  • 1-hop neighborhood overlap: For a pair of nodes $i$ and $j$, the overlap score is $|\mathcal{N}(i) \cap \mathcal{N}(j)|$, where $\mathcal{N}(i) = \{ k \mid A_{ik} = 1 \}$ is the set of immediate neighbors. This is efficiently computed as $(A^2)_{ij}$, where $A$ is the adjacency matrix (Yun et al., 2022, Wei et al., 2024).
  • Multi-hop neighborhood overlap: Defines the overlap over $\ell$-hop neighborhoods. The $\ell$-hop overlap is $|\mathcal{N}^{(\ell)}(i) \cap \mathcal{N}^{(\ell)}(j)|$, obtained by summing $\mathbf{1}\{(A^{\ell})_{ik} > 0\} \cdot \mathbf{1}\{(A^{\ell})_{jk} > 0\}$ over all $k$.
  • Neighborhood edge patterns: For a node $v$, let $E_{N_v} = \{ (u, z) : u, z \in N_v, (u, z) \in E \}$ denote the set of edges among $v$'s neighbors. Aggregating over these patterns can encode motifs: triangles, chords, or more complex structures (Kim et al., 2019).
  • Normalization and weighting: Structural features are further normalized (e.g., via softmax over neighbors) or weighted (e.g., by decay parameters, as in Neo-GNNs) depending on architectural requirements (Wang, 7 Jun 2025, Wei et al., 2024).

These quantifications provide the foundation for the operator design in NOA-GNNs.

3. Representative Approaches

Several architectural instantiations of neighborhood overlap awareness have emerged:

3.1 NEAR: Neighborhood Edge AggregatoR

NEAR introduces a local aggregation operator that explicitly takes as input the features of connected neighbor pairs. For a node $v$, the NEAR operator aggregates $g(h_u, h_z)$ for each $(u, z) \in E_{N_v}$, where $g$ ranges over counting (NEAR-c), addition (NEAR-e), element-wise max (NEAR-m), or Hadamard product (NEAR-h). This output is concatenated with the conventional sum aggregator of GIN and processed through a two-layer MLP as the layer transition. NEAR thus encodes the induced subgraph on $N_v$ at each layer (Kim et al., 2019).
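A minimal sketch of this edge-level aggregation follows (plain NumPy; the function name is hypothetical, and the concatenation with the GIN sum aggregator and the subsequent MLP are omitted):

```python
import numpy as np

def near_aggregate(A, H, v, g="sum"):
    """Aggregate g(h_u, h_z) over the edges (u, z) inside v's neighborhood.

    A: (n, n) adjacency matrix; H: (n, d) node features; v: node index.
    g selects the pairwise combiner, mirroring a subset of NEAR variants:
    "count" (NEAR-c), "sum" (addition, NEAR-e), "max" (NEAR-m),
    "prod" (Hadamard product, NEAR-h). Nonnegative features are assumed
    for the "max" variant, since the accumulator starts at zero.
    """
    nbrs = np.flatnonzero(A[v])
    out = np.zeros(H.shape[1])
    count = 0
    for i, u in enumerate(nbrs):
        for z in nbrs[i + 1:]:            # visit each neighbor pair once
            if A[u, z]:                   # (u, z) is an edge inside N_v
                count += 1
                if g == "sum":
                    out += H[u] + H[z]
                elif g == "max":
                    out = np.maximum(out, np.maximum(H[u], H[z]))
                elif g == "prod":
                    out += H[u] * H[z]
    return np.array([float(count)]) if g == "count" else out
```

On a triangle $\{0, 1, 2\}$ with a pendant node 3 attached to node 0, the neighborhood of node 0 contains exactly one internal edge, $(1, 2)$, so NEAR-c returns 1 for that node while a plain sum aggregator would see only the neighbor multiset.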

3.2 Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks

Neo-GNNs generalize classic link prediction heuristics (such as Common Neighbors and Adamic–Adar) into a learnable structural feature generator. For each node, a scalar $x_k^{\text{struct}}$ is generated via an MLP from its adjacency row and those of its neighbors. These are then aggregated over multi-hop neighborhoods with a decay hyperparameter, yielding embeddings whose inner products encode overlap-rich structural similarity. The Neo-GNN output is a convex combination of this structural score and conventional GNN feature scores (Yun et al., 2022).
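The decay-weighted multi-hop aggregation can be sketched as follows; the node degree stands in for the learned per-node MLP output, and the function names are illustrative, not the paper's API:

```python
import numpy as np

def neo_struct_embeddings(A, L=2, beta=0.5):
    """Decay-weighted multi-hop structural embeddings (simplified sketch).

    In the actual Neo-GNN, x_struct is an MLP of each node's adjacency
    row; here the node degree is a stand-in for that learned scalar. The
    decay-weighted sum over hops ell = 1..L follows the recipe above.
    """
    n = A.shape[0]
    x_struct = A.sum(axis=1, keepdims=True).astype(float)  # MLP placeholder
    Z = np.zeros((n, 1))
    A_pow = np.eye(n)
    for ell in range(1, L + 1):
        A_pow = A_pow @ A
        Z += beta ** (ell - 1) * (A_pow @ x_struct)
    return Z

def structural_score(Z, i, j):
    # Inner product of structural embeddings: an overlap-rich similarity.
    return float(Z[i] @ Z[j])
```

With $L = 1$ and unit weights this score reduces to a degree-weighted common-neighbor count, which is how the model can specialize to the classic heuristics.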

3.3 NO-GAT: Neighbor Overlay-Induced Graph Attention Network

NO-GAT constructs an overlay embedding $Z$ for each node by applying learned MLPs to the adjacency's columns and their neighbors. The similarity $C_{ij} = Z_i^\top Z_j$ provides a learned, potentially high-dimensional generalization of the classic overlap between neighborhoods. Attention coefficients in the propagation layer are a convex combination of classic feature-based attention and this structural correlation, with the weights learned during training (Wei et al., 2024).
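The convex blending of the two attention signals can be sketched like this (function names are hypothetical; the mixing weight `lam` is fixed here but learned in the actual model, and isolated nodes are assumed absent):

```python
import numpy as np

def softmax_over_neighbors(scores, mask):
    """Row-wise softmax restricted to each node's neighbor set.

    Assumes every node has at least one neighbor (no isolated nodes).
    """
    s = np.where(mask, scores, -np.inf)
    s = s - s.max(axis=1, keepdims=True)
    e = np.exp(s)                          # exp(-inf) = 0 off-neighborhood
    return e / e.sum(axis=1, keepdims=True)

def no_gat_attention(att_feat, Z, A, lam=0.5):
    """Blend feature attention with the overlay correlation C = Z Z^T.

    att_feat: (n, n) feature-based attention logits; Z: (n, d) overlay
    embeddings; lam stands in for the mixing weight learned in training.
    """
    C = Z @ Z.T                            # structural correlations C_ij
    mask = A > 0
    alpha_feat = softmax_over_neighbors(att_feat, mask)
    alpha_struct = softmax_over_neighbors(C, mask)
    return lam * alpha_feat + (1 - lam) * alpha_struct
```

Because each branch is row-stochastic over the neighbor set, any convex combination of the two is as well, so the blended coefficients remain valid attention weights.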

3.4 NO-HGNN: Neighborhood Overlap-Aware High-Order Graph Neural Network

NO-HGNN extends the neighborhood overlap concept to dynamic graphs and high-order message passing. It forms a multi-hop neighborhood overlap tensor $B$ across time, extracts node-level structural features $o_{it}$ via MLPs, computes pairwise overlap correlations $p_{ijt} = o_{it}^\top o_{jt}$ (softmax-normalized among neighbors), and injects these as the aggregation weights in a tensor-product-based high-order message passing scheme (Wang, 7 Jun 2025).
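One plausible way to build such a per-snapshot overlap tensor is sketched below; this uses a within-$L$-hop reachability union and is an illustrative construction, not the paper's exact tensor definition:

```python
import numpy as np

def temporal_overlap_tensor(A_t, L=2):
    """Per-snapshot multi-hop overlap counts (illustrative construction).

    A_t: (T, n, n) adjacency snapshots. B[t, i, j] counts the nodes
    reachable from both i and j within L hops at time t. The actual
    NO-HGNN tensor construction may differ in detail.
    """
    T, n, _ = A_t.shape
    B = np.zeros((T, n, n))
    for t in range(T):
        A_pow = np.eye(n)
        reach = np.zeros((n, n))
        for _ in range(L):                 # union of 1..L hop reachability
            A_pow = A_pow @ A_t[t]
            reach = np.maximum(reach, (A_pow > 0).astype(float))
        B[t] = reach @ reach.T             # shared reachable-node counts
    return B
```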

Table: Distinct NOA-GNN Architectures

| Model | Overlap Mechanism | Main Task |
|---|---|---|
| NEAR | 1-hop neighborhood edges (motifs) | Graph Classification |
| Neo-GNN | Multi-hop overlaps (adjacency rows) | Link Prediction |
| NO-GAT | Overlay embeddings (adjacency lifts) | Node Classification |
| NO-HGNN | Temporal & high-order overlaps | Dynamic Link Prediction |

4. Architectural Integration and Inference Procedures

Common to all NOA-GNNs is the integration of overlap-derived signals with standard feature-based aggregators. Methods differ in how structural features are constructed and injected:

  • Layer update rules: Most NOA-GNNs concatenate or merge overlap-based terms with classic neighbor aggregation. In NEAR, this is literal concatenation at each GIN-like layer; in Neo-GNNs and NO-GAT, convex combinations or calculated attention weights are used (Kim et al., 2019, Yun et al., 2022, Wei et al., 2024).
  • Multi-hop and temporal aspects: NOA-GNNs frequently account for multi-hop overlap by raising the adjacency to higher powers and incorporating decay (as in Neo-GNNs), or by forming a sum of multi-hop adjacency tensors in dynamic settings (NO-HGNN) (Yun et al., 2022, Wang, 7 Jun 2025).
  • Learnable weighting: Unlike static heuristics, these systems optimize the weighting and transformation of overlap signals (via MLPs, scalars such as $\alpha$ and $\beta$, or softmax-normalized coefficients), allowing them to adapt to the graph domain and task (Yun et al., 2022, Wei et al., 2024).
  • Inference: In all cases, edge- or node-level predictions are made by combining outputs from both the feature and structural branches (typically by inner products of embeddings or further trainable MLPs).
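The inference point above can be made concrete: a typical edge-level prediction mixes the two branch scores and squashes the result (names and the fixed mixing weight below are illustrative; in the actual models the weight is learned):

```python
import numpy as np

def link_probability(h_feat, z_struct, i, j, alpha=0.5):
    """Score edge (i, j) as a convex mix of the two branches (sketch).

    h_feat: (n, d) feature-branch embeddings; z_struct: (n, d') structural
    embeddings; alpha stands in for a learned mixing weight.
    """
    s_feat = float(h_feat[i] @ h_feat[j])        # feature-branch score
    s_struct = float(z_struct[i] @ z_struct[j])  # structural-branch score
    s = alpha * s_feat + (1 - alpha) * s_struct
    return 1.0 / (1.0 + np.exp(-s))              # sigmoid to probability
```

Both branch scores are inner products, so the prediction is symmetric in $i$ and $j$, as expected for undirected link prediction.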

5. Expressiveness and Theoretical Properties

Neighborhood overlap-aware enhancements are strictly more expressive than classical message-passing schemes based on the 1-WL test. NEAR, for instance, captures all information about triangles and 2-paths in a node's neighborhood—distinguishing graphs that ordinary 1-hop aggregators collapse together—but does not have full 3-WL equivalence (Kim et al., 2019). Neo-GNNs provably subsume heuristics such as Common Neighbors, Adamic–Adar, or Resource Allocation by appropriate specialization of their learnable transforms, and interpolate between purely structure-based and feature-based representations as optimized on the target dataset (Yun et al., 2022).

A plausible implication is that for tasks where the relative arrangement of neighboring nodes is informative—such as several link prediction or structure-sensitive classification problems—NOA-GNNs offer a principled way to overcome expressivity limits without incurring the computational intractability of the full 3-WL test or subgraph enumeration.

6. Computational Complexity and Implementation

The computational overhead for neighborhood overlap-aware modules depends on how overlap is modeled:

  • Simple overlap counts (NEAR-c, -e) can be computed inline with the neighbor aggregation in $O(|N_v|)$ per node. Motif- or pairwise-feature-based variants incur $O(|N_v|^2)$, which can be alleviated by sparse adjacency operations or sampling (Kim et al., 2019).
  • Multi-hop overlap computation requires repeated sparse matrix multiplications (e.g., $A^{\ell}$), but the depth $L$ is typically very small (1–3) and modern libraries handle it efficiently. NO-GAT complexity is linear in the edge count and overlay feature dimension (Wei et al., 2024).
  • Temporal dynamics (NO-HGNN) add further tensor contractions but leverage batched matrix operations and small $K, L$ to remain tractable in practice (Wang, 7 Jun 2025).
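As a sanity check on the first two points, the per-pair set-intersection count (work proportional to the two neighborhood sizes) and the matrix-power formulation agree exactly for binary adjacency matrices (plain-NumPy illustration):

```python
import numpy as np

def common_neighbors(A, i, j):
    """Set-intersection count over neighbor lists: per-pair work scales
    with the neighborhood sizes, with no dense matrix product needed."""
    return len(set(np.flatnonzero(A[i])) & set(np.flatnonzero(A[j])))

# Random sparse-ish undirected graph for the check.
rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.triu(A, 1)
A = A + A.T

# For binary A, (A^2)_{ij} = sum_k A_ik A_jk = |N(i) cap N(j)|.
A2 = A @ A
```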

7. Empirical Evaluation and Benchmarks

Empirical investigation consistently confirms that NOA-GNNs outperform their vanilla and heuristic-based counterparts on a wide range of structural tasks.

  • NEAR yields substantial gains in graph classification benchmarks and perfectly solves distinguishing tasks built from its motivating counterexamples, while classic GIN fails (Kim et al., 2019).
  • Neo-GNNs achieve state-of-the-art link prediction accuracy across OGB benchmarks, eclipsing both GNN baselines and heuristic methods, especially where multi-hop overlaps are informative (e.g., OGB-Collab's correlation $\approx 0.87$) (Yun et al., 2022).
  • NO-GAT consistently surpasses previous state-of-the-art in semi-supervised node classification on citation, web, and actor datasets, realizing accuracy gains of 1–3% on hard-to-classify graphs (Wei et al., 2024).
  • NO-HGNN outperforms dynamic graph baselines on benchmark temporal link prediction datasets, with improvements of up to 1.59% in accuracy and 1.3% in F1-score (Wang, 7 Jun 2025).

The overall trend demonstrates that the explicit modeling of neighborhood overlap enables NOA-GNNs to adapt to and exploit structural information critical for accurate representation, especially in real-world graphs where topological configuration contains latent signal.
