Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning

Published 12 Mar 2025 in cs.CL, cs.AI, and cs.IR | (2503.09516v5)

Abstract: Efficiently acquiring external knowledge and up-to-date information is essential for effective reasoning and text generation in LLMs. Prompting advanced LLMs with reasoning capabilities to use search engines during inference is often suboptimal, as the LLM may not fully possess the capability to interact optimally with the search engine. This paper introduces Search-R1, an extension of reinforcement learning (RL) for reasoning frameworks where the LLM learns to autonomously generate (multiple) search queries during step-by-step reasoning with real-time retrieval. Search-R1 optimizes LLM reasoning trajectories with multi-turn search interactions, leveraging retrieved token masking for stable RL training and a simple outcome-based reward function. Experiments on seven question-answering datasets show that Search-R1 improves performance by 41% (Qwen2.5-7B) and 20% (Qwen2.5-3B) over various RAG baselines under the same setting. This paper further provides empirical insights into RL optimization methods, LLM choices, and response length dynamics in retrieval-augmented reasoning. The code and model checkpoints are available at https://github.com/PeterGriffinJin/Search-R1.

Summary

  • The paper introduces Search-R1, which integrates reinforcement learning with search engine interactions to enable iterative reasoning.
  • It uses retrieved token loss masking and outcome-based rewards to stabilize training and improve accuracy on QA tasks by up to 41%.
  • Experimental results on multiple datasets demonstrate that both base and instruct models benefit, achieving significant performance gains.


Introduction

The paper introduces Search-R1, a reinforcement learning (RL) framework designed to enhance LLMs by enabling effective interaction with search engines. It addresses the challenges LLMs face in integrating up-to-date external knowledge, which is particularly crucial for reasoning and text generation tasks.

Approach

Reinforcement Learning Integration

Search-R1 adopts reinforcement learning techniques to optimize the reasoning trajectories of LLMs, incorporating search engine interactions within the decision-making process. By modeling search engines as part of the RL environment, Search-R1 enables multi-turn query generation and retrieval, essential for complex problem-solving.
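The multi-turn interaction described above can be sketched as a rollout loop that alternates between generation and retrieval. In this minimal Python sketch, `generate` and `retrieve` are hypothetical stand-ins for the policy LLM and the search engine; the `<search>`/`<information>`/`<answer>` tag convention follows the paper's rollout template:

```python
import re

def rollout(generate, retrieve, question, max_turns=4):
    """Sketch of one multi-turn reasoning-plus-search rollout.

    `generate` and `retrieve` are hypothetical callables standing in for the
    policy LLM and the search engine, respectively (not the paper's API).
    """
    trajectory = f"Question: {question}\n"
    for _ in range(max_turns):
        # The policy reasons until it emits either a <search> query or a final <answer>.
        segment = generate(trajectory)
        trajectory += segment
        match = re.search(r"<search>(.*?)</search>", segment, re.DOTALL)
        if match is None:
            break  # no search query: the model produced its final answer
        # Real-time retrieval; results are appended inside <information> tags,
        # whose tokens will later be masked out of the RL loss.
        docs = retrieve(match.group(1).strip())
        trajectory += f"<information>{docs}</information>\n"
    return trajectory
```

The full trajectory, interleaving the model's own tokens with retrieved passages, is what the RL objective later scores with the outcome-based reward.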

Key Innovations

  1. Interleaved Reasoning and Search: Search-R1 allows LLMs to perform iterative reasoning in conjunction with search engine calls, adapting their retrieval strategies dynamically to the complexity of the task.
  2. Retrieved Token Masking: The framework employs loss masking for retrieved tokens during optimization to stabilize RL training, preventing unintended learning dynamics from influencing performance.
  3. Outcome-Based Reward Function: The reward design is straightforward, relying on outcome-based metrics rather than complex intermediate reward structures, thereby simplifying the training process.
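The outcome-based reward in point 3 amounts to an exact-match check on the final answer. A sketch, assuming the policy wraps its answer in `<answer>...</answer>` tags as in the paper's template, and using standard exact-match answer normalization (the helper names are ours):

```python
import re
import string

def normalize(text):
    """Standard EM normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def outcome_reward(response, gold_answers):
    """Return 1.0 iff the extracted final answer exactly matches a gold answer, else 0.0.

    No intermediate (process) rewards are used; the trajectory is scored
    only by its final outcome.
    """
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        return 0.0
    pred = normalize(match.group(1))
    return float(any(pred == normalize(gold) for gold in gold_answers))
```

Keeping the reward this coarse avoids reward-hacking of intermediate steps and leaves the search strategy to emerge from optimization alone.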

Experimental Setup

The efficacy of Search-R1 is validated through experiments on seven question-answering datasets. The framework shows significant performance improvements of 41% for Qwen2.5-7B and 20% for Qwen2.5-3B over existing retrieval-augmented generation (RAG) methods.

Results and Analysis

Performance Metrics

Search-R1 demonstrated superior performance across both in-domain and out-of-domain datasets compared to multiple baselines including Chain-of-Thought (CoT) reasoning and RAG:

  • Qwen2.5-7B: Achieved an average improvement of 41% relative to baseline methods.
  • Qwen2.5-3B: Achieved an average improvement of 20% across evaluation metrics.

Comparative Study of RL Methods

Figure 1: (a) GRPO converges faster but may exhibit instability post-convergence, while PPO maintains stable optimization at a slower rate.

In terms of RL methods, PPO was shown to provide more stable training dynamics compared to GRPO, which, although faster, tended to suffer from post-convergence instability.
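The key mechanical difference between the two methods can be sketched from GRPO's side: instead of PPO's learned value network (critic), GRPO estimates advantages by normalizing each reward against the group of responses sampled for the same prompt. A minimal illustration (function name is ours):

```python
def grpo_advantages(group_rewards, eps=1e-6):
    """GRPO-style critic-free advantages for one prompt's group of sampled responses.

    Each reward is normalized by the group's mean and standard deviation, so
    the group itself serves as the baseline; PPO instead learns a separate
    value function to play this role.
    """
    g = len(group_rewards)
    mean = sum(group_rewards) / g
    var = sum((r - mean) ** 2 for r in group_rewards) / g
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in group_rewards]
```

Dropping the critic is what makes GRPO cheaper and faster to converge, but it also ties the advantage signal entirely to within-group reward variance, which is consistent with the post-convergence instability observed here.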

Base vs. Instruct Models

Figure 2: Instruction-tuned models show faster convergence, yet both base and instruct models achieve similar final performance.

Both base and instruction-tuned models benefit from Search-R1, with instruct models converging faster but ultimately reaching similar performance levels post-training.

Token Loss Masking Study

Figure 3: Training with retrieved token loss masking greatly stabilizes the learning process and improves final performance outcomes.

The application of token loss masking for retrieved content was critical for stabilizing the optimization process, ensuring only LLM-generated tokens were affected by the policy gradient updates.
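The masking idea itself is simple: zero out the loss contribution of every token that came from the retriever. A hypothetical sketch (the span representation and function names are illustrative, not the paper's implementation):

```python
def loss_mask(token_spans):
    """Build a per-token loss mask: 1 for LLM-generated tokens, 0 for retrieved ones.

    `token_spans` is a hypothetical list of (tokens, is_retrieved) pairs, e.g.
    the segments inside <information>...</information> tags marked as retrieved.
    """
    mask = []
    for tokens, is_retrieved in token_spans:
        mask.extend([0 if is_retrieved else 1] * len(tokens))
    return mask

def masked_pg_loss(logprobs, advantages, mask):
    """REINFORCE-style loss with retrieved-token masking (sketch).

    Only positions with mask == 1 contribute gradient, so policy updates
    never treat retrieved passages as if the policy had generated them.
    """
    terms = [-lp * adv * m for lp, adv, m in zip(logprobs, advantages, mask)]
    denom = max(sum(mask), 1)  # average over unmasked (generated) tokens only
    return sum(terms) / denom
```

Without the mask, gradients on retrieved tokens push the policy toward imitating the retriever's text, which is the unintended learning dynamic the ablation shows destabilizing training.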

Implications and Future Work

This research underscores the potential of RL frameworks like Search-R1 in enhancing LLM functionalities. By integrating a real-time search component, models can effectively augment the knowledge available at inference time, which is particularly crucial for tasks with rapidly evolving information.

Looking ahead, expanding the framework to involve more varied retrieval strategies and exploring more sophisticated reward mechanisms could provide deeper insights. Moreover, integrating multimodal data sources via a similar RL paradigm could broadly extend the applicability of these findings.

Conclusion

Search-R1 represents a significant advance in bridging the gap between internal reasoning capabilities of LLMs and their need for up-to-date external knowledge. It offers a robust framework for augmenting the reasoning processes of LLMs through dynamic interaction with search engines, thereby setting a foundation for future exploration in RL-based retrieval-augmented learning architectures.
