

DeepSeek Prover V2

fireworks/deepseek-prover-v2

    DeepSeek-Prover-V2 is an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem-proving pipeline powered by DeepSeek-V3.
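
    A minimal inference sketch against Fireworks' OpenAI-compatible endpoint is shown below. The model path `accounts/fireworks/models/deepseek-prover-v2` and the `FIREWORKS_API_KEY` environment variable are assumptions based on Fireworks' usual conventions, not confirmed by this page; verify both in the docs before use.

```python
# Minimal sketch: querying DeepSeek Prover V2 via Fireworks'
# OpenAI-compatible chat completions API.
# Assumptions: the model path below and an API key exported as
# FIREWORKS_API_KEY; verify both against the Fireworks docs.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-prover-v2",  # assumed path
    messages=[
        {
            "role": "user",
            "content": (
                "Complete the following Lean 4 proof:\n\n"
                "theorem add_comm_example (a b : Nat) : a + b = b + a := by"
            ),
        }
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```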

    DeepSeek Prover V2 API Features

    Fine-tuning

    Docs

    DeepSeek Prover V2 can be customized with your data to improve responses. Fireworks uses LoRA (low-rank adaptation) to train and deploy your personalized model efficiently.
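
    As a sketch of what fine-tuning data might look like, the snippet below builds a small JSONL dataset of informal-statement-to-Lean-proof pairs. The chat-style `messages` schema and the file name are assumptions, as is common for conversational fine-tuning datasets; check the fine-tuning docs linked above for the exact format Fireworks expects.

```python
# Minimal sketch: preparing a JSONL fine-tuning dataset of
# (informal statement -> Lean 4 proof) examples.
# Assumption: a chat-style "messages" schema; verify the exact
# dataset format against the Fireworks fine-tuning docs.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Prove in Lean 4 that n + 0 = n."},
            {
                "role": "assistant",
                "content": "theorem n_add_zero (n : Nat) : n + 0 = n := rfl",
            },
        ]
    },
    # ... more proof pairs ...
]

with open("prover_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```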

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for DeepSeek Prover V2 using Fireworks' reliable, high-performance system with no rate limits.

    DeepSeek Prover V2 FAQs

    What is DeepSeek Prover V2 and who developed it?

    DeepSeek Prover V2 is an open-source large language model designed for automated formal theorem proving in Lean 4, created by DeepSeek AI. It is trained via a recursive proof-generation pipeline powered by DeepSeek-V3, integrating informal reasoning with formal proof construction.

    What applications and use cases does DeepSeek Prover V2 excel at?

    DeepSeek Prover V2 is optimized for:

    • Rigorous mathematical reasoning
    • Solving MiniF2F and PutnamBench problems
    • High-school olympiad-style proofs
    • Integration with theorem-proving assistants
    • Agentic research workflows requiring Lean 4 proofs (see the Lean sketch after this list)
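
    As a concrete illustration of the olympiad use case, the sketch below shows the kind of Lean 4 input the model completes: a formal statement ending in `:= by`, to be closed with tactics. The statement and proof are illustrative, not drawn from MiniF2F or PutnamBench.

```lean
import Mathlib

-- Illustrative MiniF2F-style goal (not an actual benchmark problem).
-- The prover receives the statement up to `:= by` and must supply
-- tactics that close the goal.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ (a + b) ^ 2 := by
  exact sq_nonneg (a + b)
```
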
    What is the maximum context length for DeepSeek Prover V2?

    Fireworks hosts DeepSeek Prover V2 with a maximum context length of 163,840 tokens.

    How many parameters does DeepSeek Prover V2 have?

    DeepSeek Prover V2 has 671 billion parameters.

    Is fine-tuning supported for DeepSeek Prover V2?

    Yes. Fireworks offers LoRA fine-tuning for DeepSeek Prover V2.

    What license governs commercial use of DeepSeek Prover V2?

    The code is released under the MIT License. The model weights are governed by a separate Model License (Model Agreement) from DeepSeek AI.

    Metadata

    State
    Ready
    Created on
    5/1/2025
    Kind
    Base model
    Provider
    DeepSeek
    Hugging Face
    DeepSeek-Prover-V2-671B

    Specification

    Calibrated
    Yes
    Mixture-of-Experts
    Yes
    Parameters
    671.1B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Not supported
    Context Length
    163.8k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image Input
    Not supported