G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation

Published 18 Feb 2025 in cs.IR and cs.CL | (2502.12586v1)

Abstract: Explainable recommendation has demonstrated significant advantages in informing users about the logic behind recommendations, thereby increasing system transparency, effectiveness, and trustworthiness. To provide personalized and interpretable explanations, existing works often combine the generation capabilities of LLMs with collaborative filtering (CF) information. CF information extracted from the user-item interaction graph captures the user behaviors and preferences, which is crucial for providing informative explanations. However, due to the complexity of graph structure, effectively extracting the CF information from graphs still remains a challenge. Moreover, existing methods often struggle with the integration of extracted CF information with LLMs due to its implicit representation and the modality gap between graph structures and natural language explanations. To address these challenges, we propose G-Refer, a framework using graph retrieval-augmented LLMs for explainable recommendation. Specifically, we first employ a hybrid graph retrieval mechanism to retrieve explicit CF signals from both structural and semantic perspectives. The retrieved CF information is explicitly formulated as human-understandable text by the proposed graph translation and accounts for the explanations generated by LLMs. To bridge the modality gap, we introduce knowledge pruning and retrieval-augmented fine-tuning to enhance the ability of LLMs to process and utilize the retrieved CF information to generate explanations. Extensive experiments show that G-Refer achieves superior performance compared with existing methods in both explainability and stability. Codes and data are available at https://github.com/Yuhan1i/G-Refer.

Summary

  • The paper introduces G-Refer, a novel framework that combines hybrid graph retrieval (path and node level) with Large Language Models to generate personalized and explainable recommendations by effectively integrating structural and semantic collaborative filtering information.
  • G-Refer employs a Knowledge Pruning strategy to filter irrelevant data for efficiency and utilizes Retrieval-Augmented Fine-Tuning (RAFT) with an adapter-free approach to integrate retrieved graph knowledge effectively into the LLM during training.
  • Experimental evaluations show G-Refer achieves superior performance, especially in recall, on datasets like Yelp and Google-reviews, confirming the benefits of its hybrid retrieval and knowledge pruning components.

The paper "G-Refer: Graph Retrieval-Augmented LLM for Explainable Recommendation" addresses the challenge of providing personalized and interpretable explanations in recommender systems by combining LLMs with user-item interaction graphs. Traditional methods struggle both to extract Collaborative Filtering (CF) information from complex graph structures and to bridge the modality gap between structured graph data and natural language explanations.

The proposed G-Refer framework aims to overcome these hurdles using a novel pipeline consisting of three core components:

  1. Hybrid Graph Retrieval Mechanism: G-Refer combines retrievers at two granularities: a path-level retriever and a node-level retriever. The path-level retriever uses Graph Neural Networks (GNNs) to identify influential paths around a user-item interaction, capturing the structural CF signals crucial for explaining recommendations. The node-level retriever, in turn, leverages semantic similarities between nodes to surface semantic CF information from user and item profiles. The retrieved signals are then translated into human-readable text via graph translation, which is essential for making them usable by the LLM and for the interpretability of the generated explanations.
  2. Knowledge Pruning Strategy: Recognizing that not all training samples require additional CF signals for effective explanations, knowledge pruning is implemented to eliminate less relevant data, thereby prioritizing training samples that necessitate external knowledge. This filtering process enhances the model’s ability to capitalize on CF information efficiently, while concurrently reducing computational costs due to a smaller training dataset.
  3. Retrieval-Augmented Fine-Tuning (RAFT): This phase applies lightweight, adapter-free fine-tuning of the LLM, integrating the retrieved graph knowledge at the training stage. By fine-tuning on diverse retrieved CF information, G-Refer bridges the modality gap and strengthens the LLM's capacity to understand and use structured information when generating explanations. Specifically, RAFT ensures that the LLM can combine its internal knowledge with external CF signals, maximizing explanation completeness while minimizing context misinterpretation and hallucination.
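The three stages above form a retrieve → translate → prune → fine-tune pipeline. Below is a minimal, hypothetical Python sketch of that flow. The function names, the cosine-similarity node retriever, the path-to-text rendering, and the relevance threshold are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def node_level_retrieve(query_emb, node_embs, node_texts, k=2):
    """Node-level retrieval sketch: rank candidate nodes by cosine
    similarity to the query embedding and return the top-k as
    human-readable profile snippets."""
    q = query_emb / np.linalg.norm(query_emb)
    n = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    scores = n @ q
    top = np.argsort(-scores)[:k]
    return [node_texts[i] for i in top], scores[top]

def translate_paths(paths):
    """Graph translation sketch: render retrieved user-item paths
    as plain text the LLM can condition on."""
    return [" -> ".join(p) for p in paths]

def keep_sample(relevance_score, threshold=0.5):
    """Knowledge pruning sketch: keep a training sample only when the
    retrieved CF signal is judged relevant enough for it."""
    return relevance_score >= threshold

def build_raft_prompt(user, item, cf_snippets):
    """Assemble a retrieval-augmented fine-tuning prompt that pairs
    the retrieved CF context with the explanation request."""
    context = "\n".join(f"- {s}" for s in cf_snippets)
    return (f"Collaborative signals:\n{context}\n\n"
            f"Explain why item '{item}' is recommended to user '{user}'.")

# Toy usage: retrieve semantically similar nodes, translate a path,
# prune, and build the fine-tuning prompt.
query = np.array([1.0, 0.0])
embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
texts = ["user A likes brunch spots", "user B likes hiking gear",
         "user C likes brunch and coffee"]
snippets, scores = node_level_retrieve(query, embs, texts, k=2)
snippets += translate_paths([["user A", "Cafe X", "user C"]])
if keep_sample(scores[0]):
    prompt = build_raft_prompt("user A", "Cafe X", snippets)
    print(prompt)
```

The key design point this sketch mirrors is that all retrieved CF evidence, whether a path or a node profile, is flattened into text before it ever reaches the LLM, so the model consumes graph knowledge through its native modality.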

The experimental evaluation of G-Refer demonstrates its superior performance compared to contemporary state-of-the-art methods. The model outperforms others in key performance measures, especially in producing explanations with higher recall, indicating enhanced utilization and incorporation of retrieved CF information. Such methodological advancements make G-Refer particularly effective in datasets like Yelp and Google-reviews where both structural and semantic CF cues are pivotal.

Additionally, the ablation studies confirm the importance of each component, showing the complementary benefits of combining node-level and path-level information as well as the role of knowledge pruning. Scaling tests across LLM backbones of different sizes further show that substantial improvements can be achieved without a large increase in model parameters, a step toward computationally efficient graph-enhanced recommendation systems.

Overall, G-Refer represents a strategic integration of hybrid graph retrieval with LLMs in the field of recommendation systems, facilitating not just effective recommendations but also transparency that is vital for user trust and system reliability. As the framework can be extended across various domains requiring explainability, this study provides meaningful insights into future directions of graph-based and language-model-driven recommendation systems.
