
MoLoRec: A Generalizable and Efficient Framework for LLM-Based Recommendation

Published 12 Feb 2025 in cs.IR (arXiv:2502.08271v1)

Abstract: LLMs have achieved remarkable success in recent years, owing to their impressive generalization capabilities and rich world knowledge. To capitalize on the potential of using LLMs as recommender systems, mainstream approaches typically focus on two paradigms. The first paradigm designs multi-domain or multi-task instruction data for generalizable recommendation, so as to align LLMs with general recommendation areas and deal with cold-start recommendation. The second paradigm enhances domain-specific recommendation tasks with parameter-efficient fine-tuning techniques, in order to improve models under warm recommendation scenarios. While most previous works treat these two paradigms separately, we argue that they have complementary advantages, and combining them would be helpful. To that end, in this paper, we propose a generalizable and efficient LLM-based recommendation framework, MoLoRec. Our approach starts by parameter-efficiently fine-tuning a domain-general module with general recommendation instruction data, to align the LLM with recommendation knowledge. Then, given users' behavior in a specific domain, we construct a domain-specific instruction dataset and apply efficient fine-tuning to the pre-trained LLM. After that, we provide approaches to integrate the above domain-general part and domain-specific part with parameter mixture. Note that MoLoRec is efficient and plug-and-play: the domain-general module is trained only once, and any domain-specific plug-in can be merged with only domain-specific fine-tuning. Extensive experiments on multiple datasets under both warm and cold-start recommendation scenarios validate the effectiveness and generality of the proposed MoLoRec.

Summary

  • The paper introduces MoLoRec, a hybrid framework that combines domain-general and domain-specific knowledge using LoRA modules to improve LLM-based recommendation.
  • MoLoRec efficiently integrates knowledge by adaptively merging LoRA adapters through linear arithmetic and entropy minimization at test time.
  • Experiments show MoLoRec significantly improves performance in warm-start and challenging cold-start/cross-domain scenarios while maintaining efficiency.

The paper "MoLoRec: A Generalizable and Efficient Framework for LLM-Based Recommendation" presents a framework for enhancing the capabilities of LLMs in recommender systems by leveraging a hybrid approach that combines both generalizable and domain-specific knowledge. The authors identify two prevalent paradigms in LLM-based recommendation systems: multi-domain instruction tuning for generalizability and parameter-efficient fine-tuning for domain-specific tasks. However, traditional fine-tuning techniques often compromise the generalization abilities of LLMs, leading to suboptimal performance in cold-start and cross-domain scenarios.

Key Contributions:

  1. Hybrid Framework - MoLoRec:
    • The MoLoRec framework integrates both domain-general and domain-specific LoRA (Low-Rank Adaptation) modules to leverage the strengths of both paradigms.
    • Initially, domain-general LoRA adapters are fine-tuned using multi-domain recommendation instruction data. These adapters are intended to capture generalizable recommendation knowledge.
    • Next, specific domain knowledge is integrated by fine-tuning domain-specific LoRA adapters using task-specific data.
  2. Efficient Parameter Merging:
    • MoLoRec combines LoRA adapters via linear arithmetic operations in the weight space. Because the adapters are folded directly into the model weights, general and domain-specific knowledge are integrated without adding inference overhead.
    • An adaptive method for merging these weights is proposed, guided by entropy minimization during test time to dynamically adjust the balance between general and domain-specific knowledge.
  3. Empirical Validation:
    • Extensive experiments conducted using multiple datasets demonstrate that MoLoRec achieves significant performance improvements across both warm-start and cold-start scenarios. The framework exhibits strong generalization capabilities while effectively capturing domain-specific user preferences.
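The linear weight-space merge described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the matrix dimensions, the single mixing coefficient `alpha`, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: one frozen base weight matrix with two rank-r LoRA adapters.
d_out, d_in, r = 64, 32, 4

W0 = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
B_gen, A_gen = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))    # domain-general LoRA
B_spec, A_spec = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))  # domain-specific LoRA

def merge_lora(W0, general, specific, alpha):
    """Linearly interpolate two LoRA deltas and fold them into the base weight.

    alpha weights the domain-general delta and (1 - alpha) the domain-specific
    one. The merged matrix replaces W0 at inference, so the merge adds no
    computation at deployment time.
    """
    B_g, A_g = general
    B_s, A_s = specific
    delta = alpha * (B_g @ A_g) + (1.0 - alpha) * (B_s @ A_s)
    return W0 + delta

W_merged = merge_lora(W0, (B_gen, A_gen), (B_spec, A_spec), alpha=0.5)
```

In a real model the same interpolation would be applied per adapted layer; a single matrix is shown here only to make the arithmetic concrete.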

Methodological Details:

  • Domain-General & Domain-Specific LoRA Modules:
    • A domain-general module is obtained through instruction tuning on mixed domain datasets, aligning LLMs with recommendation tasks.
    • Domain-specific modules are trained on individual datasets, offering fine-grained user preferences within specific contexts.
  • Adaptive Weight Merging:
    • The weights from general and specific LoRA modules are merged adaptively by minimizing the entropy of predictions, effectively managing the trade-off between generality and specificity.
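The entropy-guided merge can be sketched as a grid search over the mixing coefficient: for each candidate value, merge the deltas, score an unlabeled test batch, and keep the coefficient whose predictions have the lowest mean entropy. This is a toy sketch with a single linear scoring head; the grid, batch, and all names are assumptions for illustration, not the paper's procedure in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(probs):
    """Average Shannon entropy of a batch of prediction distributions."""
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())

# Toy setup: scores come from a merged head W(alpha) = W0 + alpha*D_gen + (1-alpha)*D_spec.
n_items, d = 10, 16
W0 = rng.normal(size=(n_items, d))
D_gen = rng.normal(size=(n_items, d))   # domain-general LoRA delta (B_g @ A_g)
D_spec = rng.normal(size=(n_items, d))  # domain-specific LoRA delta (B_s @ A_s)
X_test = rng.normal(size=(32, d))       # unlabeled test batch

def select_alpha(W0, D_gen, D_spec, X, grid=np.linspace(0.0, 1.0, 11)):
    """Pick the mixing coefficient whose merged model is most confident
    (lowest mean prediction entropy) on the unlabeled batch X."""
    best_alpha, best_ent = None, np.inf
    for alpha in grid:
        W = W0 + alpha * D_gen + (1.0 - alpha) * D_spec
        ent = mean_entropy(softmax(X @ W.T))
        if ent < best_ent:
            best_alpha, best_ent = float(alpha), ent
    return best_alpha, best_ent

alpha_star, ent_star = select_alpha(W0, D_gen, D_spec, X_test)
```

Because the search needs only unlabeled inputs, the balance between generality and specificity can be tuned per domain at test time without any labeled data.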

Novel Model Architecture:

  • By combining expertise from general cross-domain data with domain-specific data, the framework performs well both in intra-domain (warm) scenarios and in the harder cross-domain and cold-start scenarios.
  • The architecture pairs efficient training (parameter-efficient approaches like LoRA) with merged weights at inference, so deployment incurs no additional computational burden.

The proposed MoLoRec framework is characterized by its plug-and-play ability, where the general module is reusable across different domains, reducing training costs associated with new domain adaptations. This comprehensive approach offers theoretical insights and pragmatic solutions for LLM-based recommendation systems, making notable strides in addressing the challenges of cold-start and cross-domain recommendations while maintaining system efficiency and performance.
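The plug-and-play pattern described above amounts to training the general delta once and keeping one small delta per domain; adapting to a new domain then means merging its delta with the shared module. A minimal sketch, with all names and the dictionary layout being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d_out, d_in = 8, 8

W0 = rng.normal(size=(d_out, d_in))             # frozen base LLM weight
delta_general = rng.normal(size=(d_out, d_in))  # trained once, reused for every domain

# Each new domain only requires fine-tuning its own small delta.
domain_deltas = {
    "books": rng.normal(size=(d_out, d_in)),
    "movies": rng.normal(size=(d_out, d_in)),
}

def plug_in(domain, alpha=0.5):
    """Merge the shared general module with one domain-specific plug-in."""
    return W0 + alpha * delta_general + (1.0 - alpha) * domain_deltas[domain]

W_books = plug_in("books")
W_movies = plug_in("movies")
```

Adding a third domain would touch neither `W0` nor `delta_general`, which is where the reduced cost of new-domain adaptation comes from.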
