
Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion

Published 12 Oct 2025 in cs.LG (arXiv:2510.10849v1)

Abstract: Learning on text-attributed graphs has motivated the use of LLMs for graph learning. However, most fusion strategies are applied uniformly across all nodes and attain only small overall performance gains. We argue this result stems from aggregate metrics that obscure when LLMs provide benefit, inhibiting actionable signals for new strategies. In this work, we reframe LLM-GNN fusion around nodes where GNNs typically falter. We first show that performance can significantly differ between GNNs and LLMs, with each excelling on distinct structural patterns, such as local homophily. To leverage this finding, we propose GLANCE (GNN with LLM Assistance for Neighbor- and Context-aware Embeddings), a framework that invokes an LLM to refine a GNN's prediction. GLANCE employs a lightweight router that, given inexpensive per-node signals, decides whether to query the LLM. Since the LLM calls are non-differentiable, the router is trained with an advantage-based objective that compares the utility of querying the LLM against relying solely on the GNN. Across multiple benchmarks, GLANCE achieves the best performance balance across node subgroups, achieving significant gains on heterophilous nodes (up to $+13\%$) while simultaneously achieving top overall performance. Our findings highlight the value of adaptive, node-aware GNN-LLM architectures, where selectively invoking the LLM enables scalable deployment on large graphs without incurring high computational costs.

Summary

  • The paper introduces GLANCE, which optimizes node predictions by selectively invoking LLMs for complex nodes based on routing features.
  • The framework employs a lightweight MLP router and a hybrid training strategy that combines direct loss minimization with reward-based updates.
  • GLANCE demonstrates superior scalability and performance on diverse TAG datasets, particularly in low-homophily and low-degree scenarios.

Overview of "Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion"

The paper introduces GLANCE, a framework designed to efficiently integrate Graph Neural Networks (GNNs) with LLMs for enhanced graph learning on text-attributed graphs (TAGs). GLANCE strategically leverages LLMs only when necessary, optimizing the balance between predictive accuracy and computational cost by targeting nodes difficult for GNNs.

Methodology

GLANCE Framework

GLANCE is structured around three primary components:

  1. Routing Features: GLANCE computes inexpensive per-node features to decide whether to invoke the LLM: GNN-derived embeddings, a local homophily estimate, node degree, and prediction uncertainty. These signals identify nodes where LLM intervention could refine predictions.
  2. Node Router: A lightweight multi-layer perceptron (MLP) makes routing decisions from the generated features. In each batch, the top $k$ nodes most likely to benefit from additional LLM context are selected, bounding the computational cost.
  3. LLM Embedding Generation and Fusion: For routed nodes, the neighborhood context is serialized and processed by a pre-trained LLM to generate embeddings. These embeddings are combined with the GNN embeddings via a refiner MLP to produce the final node predictions (Figure 1).

    Figure 1: GLANCE Overview - shows the step-by-step process by which GLANCE makes use of LLMs alongside GNNs.
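The routing step described by the components above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: `routing_features` uses a predicted-label homophily proxy, and the router MLP is replaced by a simple linear scorer; all names are hypothetical.

```python
import numpy as np

def routing_features(probs, neighbors):
    """Cheap per-node routing signals: prediction entropy, degree, and a
    local homophily proxy from predicted labels (illustrative only)."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    degree = np.array([len(nbrs) for nbrs in neighbors], dtype=float)
    preds = probs.argmax(axis=1)
    homophily = np.array([
        np.mean([preds[j] == preds[i] for j in nbrs]) if nbrs else 1.0
        for i, nbrs in enumerate(neighbors)
    ])
    return np.column_stack([entropy, degree, homophily])

def route_top_k(features, w, k):
    """Linear scorer standing in for the router MLP: pick the k nodes
    most likely to benefit from an LLM query."""
    scores = features @ w
    return np.argsort(-scores)[:k]
```

Only the selected indices would then have their neighborhoods serialized and sent to the LLM; all other nodes keep the GNN prediction unchanged.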

Training Strategy

GLANCE uses a hybrid training strategy: direct loss minimization for node classification, plus a reward-based objective for the routing decisions. Because the LLM calls are non-differentiable, the router weights are updated with a policy-gradient-inspired method that favors routes which reduce prediction error, penalized by a query-cost coefficient $\beta$.
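A minimal sketch of such an advantage-based update, assuming a logistic router and a Bernoulli route/skip policy (all names are illustrative, not the paper's implementation): the advantage compares the GNN-only loss with the fused loss, minus the cost $\beta$, and a score-function (REINFORCE-style) gradient pushes the router accordingly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def router_update(w, feats, loss_gnn, loss_fused, beta=0.1, lr=0.1):
    """One REINFORCE-style step for a logistic router (hypothetical names).

    A positive advantage means querying the LLM reduced the loss by more
    than the cost beta, so the router is pushed toward routing similar
    nodes; a negative advantage pushes it away."""
    p = sigmoid(feats @ w)                      # P(query LLM | node features)
    advantage = (loss_gnn - loss_fused) - beta  # utility of the LLM call
    # score-function gradient of log p(route) for the "route" action
    grad = (advantage * (1.0 - p))[:, None] * feats
    return w + lr * grad.mean(axis=0)
```

Since the gradient flows only through the router's log-probability, the LLM itself stays frozen and non-differentiable, which is what makes this objective applicable here.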

Performance Analysis

Stratified Performance

The empirical analysis demonstrates GLANCE's strong performance across diverse TAG datasets. Notably, GLANCE achieves balanced accuracy across nodes with varying degrees of homophily, excelling particularly in low-homophily, low-degree contexts where GNNs traditionally underperform (Figure 2).

Figure 2: Stratified performance based on homophily and degree shows the variance in model performance where LLM assistance is most effective.

Sensitivity to Routing Budget

Investigating the impact of different routing budgets, the authors show that GLANCE uses larger budgets effectively: performance improves on heterophilous nodes without degrading on more homophilous ones, demonstrating flexible resource allocation (Figure 3).

Figure 3: Performance as the routing budget $k$ is varied at test time, reflecting GLANCE's ability to scale resources effectively.

Ablation Studies

A detailed ablation of the routing features underscores the importance of each, particularly local homophily, in maintaining robust performance. The largest drops occur in heterophilous regions when specific features are omitted, highlighting their critical role (Figure 4).

Figure 4: Performance degradation when training without specific routing features, emphasizing their importance.

Scalability and Efficiency

GLANCE is shown to scale well: GLANCE-specific operations (routing and refinement) add negligible overhead relative to the LLM computation itself. This efficiency makes the framework practical for large-scale TAGs such as Arxiv-Year and OGB-Products (Figure 5).

Figure 5: Runtime breakdown illustrates the scalability of GLANCE, with negligible overhead from routing and refinement compared to LLM computation.

Conclusion

GLANCE presents a strategic pathway for deploying GNN-LLM hybrids, effectively utilizing LLMs' strengths while avoiding unnecessary computational costs. By employing adaptive routing based on node-specific structural properties, GLANCE achieves improved performance across complex graph scenarios, demonstrating significant promise for future scalable graph learning applications.
