DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search

Published 15 Aug 2024 in cs.CL, cs.AI, cs.LG, and cs.LO | (2408.08152v1)

Abstract: We introduce DeepSeek-Prover-V1.5, an open-source LLM designed for theorem proving in Lean 4, which enhances DeepSeek-Prover-V1 by optimizing both training and inference processes. Pre-trained on DeepSeekMath-Base with specialization in formal mathematical languages, the model undergoes supervised fine-tuning using an enhanced formal theorem proving dataset derived from DeepSeek-Prover-V1. Further refinement is achieved through reinforcement learning from proof assistant feedback (RLPAF). Beyond the single-pass whole-proof generation approach of DeepSeek-Prover-V1, we propose RMaxTS, a variant of Monte-Carlo tree search that employs an intrinsic-reward-driven exploration strategy to generate diverse proof paths. DeepSeek-Prover-V1.5 demonstrates significant improvements over DeepSeek-Prover-V1, achieving new state-of-the-art results on the test set of the high school level miniF2F benchmark ($63.5\%$) and the undergraduate level ProofNet benchmark ($25.3\%$).

Summary

  • The paper presents the integration of proof assistant feedback via reinforcement learning with a novel Monte-Carlo Tree Search variant (RMaxTS) to optimize proof search.
  • It employs a multi-stage training approach including pre-training, supervised fine-tuning, and GRPO-based reinforcement learning to enhance formal proof generation in Lean 4.
  • Benchmark evaluations on miniF2F and ProofNet demonstrate state-of-the-art performance, with pass rates of 63.5% and 25.3%, respectively.

The paper presents DeepSeek-Prover-V1.5, an advanced open-source LLM aimed at optimizing theorem proving capabilities in formal mathematical languages, specifically within the Lean 4 environment. The model builds on its predecessor, DeepSeek-Prover-V1, by integrating enhanced training methodologies such as pre-training, supervised fine-tuning, and reinforcement learning from proof assistant feedback (RLPAF). Notably, the introduction of RMaxTS—a Monte-Carlo tree search variant with intrinsic-reward-driven exploration—distinguishes DeepSeek-Prover-V1.5 by significantly increasing the diversity and efficiency of proof search paths.

Training and Reinforcement Learning Framework

DeepSeek-Prover-V1.5's training involves a multi-stage approach:

  1. Pre-Training: The model is pre-trained on the DeepSeekMath-Base, enhanced with high-quality mathematical datasets focusing on formal languages like Lean, Isabelle, and Metamath, to bolster its mathematical reasoning and formal proof generation capabilities.
  2. Supervised Fine-Tuning: Utilizes an improved dataset augmented with Lean 4 proof codes and interspersed natural language explanations, fostering alignment between formal proofs and human reasoning processes. This stage introduces the truncate-and-resume mechanism, which is pivotal for the tree search within whole-proof generation.
  3. Reinforcement Learning: The GRPO algorithm is employed for RLPAF, leveraging proof assistant feedback for optimizing proof generation performance. This reinforcement learning phase refines the model by aligning its generated proofs with formal verification through binary reward signals derived from the Lean prover.

    Figure 1: Overall Framework. DeepSeek-Prover-V1.5 is trained through pre-training, supervised fine-tuning, and reinforcement learning with verification results used as rewards.
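The GRPO step described above can be illustrated by its group-relative advantage computation: several candidate proofs are sampled per theorem, each receives a binary reward from the Lean verifier, and each reward is normalized against its group's statistics rather than a learned value function. The sketch below is a minimal illustration under those assumptions; the function name `grpo_advantages` and the example rewards are hypothetical, not the paper's implementation.

```python
def grpo_advantages(rewards):
    # Group-relative advantage (GRPO-style): normalize each sampled
    # proof's reward against the group mean and standard deviation,
    # with no learned critic. Rewards here are the binary Lean
    # verification signal described in the paper: 1 if the whole
    # proof checks, 0 otherwise.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        # All candidates tied (all pass or all fail): no learning signal.
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# Hypothetical group of 4 sampled proofs for one theorem: two verified, two rejected.
rewards = [1.0, 0.0, 0.0, 1.0]
print(grpo_advantages(rewards))  # -> [1.0, -1.0, -1.0, 1.0]
```

Verified candidates get positive advantages and rejected ones negative, so the policy gradient pushes probability mass toward proofs the Lean prover accepts.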

Monte-Carlo Tree Search with RMaxTS

The paper introduces a novel proof search tree method, combining whole-proof generation and proof-step generation into a unified framework via a truncate-and-resume mechanism. This is achieved by:

  • Tree Abstraction: Using internal tactic states to abstract proof steps into tree nodes, enhancing the flexibility and accuracy of the proof search strategy.
  • Intrinsic Rewards: Implementing a reward-free exploration strategy through intrinsic motivation to address reward sparsity issues typically encountered in theorem proving tasks.
  • Discounted UCB for Exploration: Adopting a discounted Upper Confidence Bound (UCB) method to better manage non-stationary rewards, thus optimizing exploration in the state-action space more efficiently.

    Figure 2: Truncate-and-Resume Mechanism integrates seamlessly into MCTS, showing the detailed process of node expansion and proof generation.
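The truncate-and-resume idea can be sketched as follows, under the assumption that the verifier reports the index of the first failing tactic: the proof is cut back to the last verified tactic, and that surviving prefix becomes a tree node from which whole-proof generation resumes. The function name `truncate_at_first_error` and the example proof are hypothetical; the real system works with Lean's internal tactic states rather than raw lines.

```python
def truncate_at_first_error(proof_lines, first_error_index):
    """Keep only the tactics before the first Lean error.

    The surviving prefix is treated as a proof-search tree node;
    generation resumes from it instead of restarting from scratch.
    (Illustrative sketch: the real mechanism operates on tactic
    states reported by the Lean prover, not on text lines.)
    """
    return proof_lines[:first_error_index]

# Hypothetical proof attempt; suppose Lean rejects the 4th tactic (index 3).
proof = ["intro n", "induction n", "simp", "exact bad_lemma", "ring"]
prefix = truncate_at_first_error(proof, 3)
print(prefix)  # -> ['intro n', 'induction n', 'simp']
```

This is what unifies whole-proof generation with proof-step search: each truncation point doubles as a tree node, so a single generation pass can expand many nodes.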
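The RMaxTS selection rule combines the two bullets above: an intrinsic reward of 1 is granted when a rollout expands a previously unseen tree node (0 otherwise), and edge statistics are exponentially decayed before each update so the UCB bonus tracks this non-stationary signal. Everything in the sketch below (`GAMMA`, `C_EXPLORE`, the `Edge` class) is an illustrative assumption, not the paper's code.

```python
import math

GAMMA = 0.99      # decay applied to past statistics (assumed value)
C_EXPLORE = 2.0   # UCB exploration constant (assumed value)

class Edge:
    """Statistics for one action edge in the proof-search tree."""
    def __init__(self):
        self.n = 0.0  # discounted visit count
        self.w = 0.0  # discounted sum of intrinsic rewards

    def update(self, reward):
        # Discount old statistics first, so recent outcomes dominate:
        # this is the "discounted UCB" fix for non-stationary rewards.
        self.n = GAMMA * self.n + 1.0
        self.w = GAMMA * self.w + reward

def ucb_score(edge, total_n):
    if edge.n == 0:
        return float("inf")  # always try unvisited edges first
    return edge.w / edge.n + C_EXPLORE * math.sqrt(math.log(total_n) / edge.n)

def select(edges):
    """Pick the child index maximizing the discounted UCB score."""
    total = sum(e.n for e in edges) or 1.0
    return max(range(len(edges)), key=lambda i: ucb_score(edges[i], total))

# Intrinsic (RMax-style) reward: 1 when a rollout through this edge
# expanded a new tree node, 0 when it only revisited known states.
a, b = Edge(), Edge()
a.update(1.0)
b.update(0.0)
print(select([a, b]))  # -> 0 (the edge that led to a new node)
```

Because the reward is intrinsic rather than tied to proof success, the search keeps exploring even when no complete proof has been found yet, which addresses the reward sparsity noted above.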

Evaluation and Comparative Performance

The model achieved state-of-the-art results on challenging theorem-proving benchmarks such as miniF2F and ProofNet.

  • miniF2F Benchmark: Achieving a 63.5% pass rate with the RMaxTS tree search variant, surpassing previous approaches by 3.5 percentage points in fully sampled trials.
  • ProofNet Benchmark: Demonstrating strong performance in whole-proof generation settings, reaching a 25.3% pass rate, a new state of the art for this undergraduate-level benchmark.

    Figure 3: Pass rates of models on formal theorem proving benchmarks such as miniF2F and ProofNet.

Conclusion and Future Work

DeepSeek-Prover-V1.5 sets a new standard in formal theorem proving, demonstrating superior performance through integrated reinforcement learning and tree search techniques. Future directions include enhancing the critic model for better temporal credit assignment, which could vastly improve efficiency in large-scale theorem proving tasks. The continued development within this field is aimed at further bridging the gap between formal verification requirements and the inherently human-centric nature of mathematical problem-solving.
