DeepSeek-Prover-V2 is an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem-proving pipeline powered by DeepSeek-V3.
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | DeepSeek Prover V2 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand deployment (Docs) | On-demand deployments give you dedicated GPUs for DeepSeek Prover V2 using Fireworks' reliable, high-performance system with no rate limits. |
DeepSeek Prover V2 is an open-source large language model designed for automated formal theorem proving in Lean 4, created by DeepSeek AI. It is trained via a recursive proof-generation pipeline powered by DeepSeek-V3 that integrates informal reasoning with formal proof construction.
DeepSeek Prover V2 is optimized for:

- Automated theorem proving and proof completion in Lean 4
- Recursive decomposition of complex theorems into subgoals
- Combining informal mathematical reasoning with formal proof construction
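To make the task concrete, here is a minimal illustrative sketch (a toy example, not drawn from the model's training or evaluation data) of the kind of Lean 4 goal the prover is asked to close: the theorem statement is supplied with its proof elided by `sorry`, and the model must return a proof the Lean kernel accepts.

```lean
-- Illustrative only: a toy theorem statement as it might be handed to the prover,
-- with the proof left as `sorry`.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  sorry

-- A completed proof of the same statement, of the kind the model is expected to produce.
theorem add_comm_example' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```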
Fireworks hosts DeepSeek Prover V2 with a maximum context length of 163,840 tokens.
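Below is a minimal sketch of querying the hosted model through Fireworks' OpenAI-compatible chat completions endpoint. The model identifier string is an assumption; check the model page or API reference for the exact id.

```python
# Minimal sketch: call DeepSeek Prover V2 on Fireworks via the OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

# Ask the model to complete a Lean 4 proof skeleton.
prompt = """Complete the following Lean 4 proof:

theorem add_comm_example (a b : Nat) : a + b = b + a := by
  sorry
"""

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-prover-v2",  # assumed model id
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,
    temperature=0.0,
)
print(response.choices[0].message.content)
```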
DeepSeek Prover V2 has 671 billion parameters.
Fireworks offers LoRA fine-tuning for DeepSeek Prover V2.
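As a rough sketch of what LoRA training data might look like, the snippet below writes a tiny chat-style JSONL file of statement/proof pairs. The dataset schema shown here is an assumption, so consult the fine-tuning docs linked above for the exact format Fireworks expects before uploading.

```python
# Minimal sketch: build a tiny chat-style JSONL fine-tuning dataset.
# The "messages" schema is an assumption; verify it against the Fireworks fine-tuning docs.
import json

examples = [
    {
        "messages": [
            {"role": "user",
             "content": "Prove in Lean 4: theorem succ_pos_example (n : Nat) : 0 < n + 1 := by sorry"},
            {"role": "assistant",
             "content": "theorem succ_pos_example (n : Nat) : 0 < n + 1 :=\n  Nat.succ_pos n"},
        ]
    },
]

# Write one JSON object per line, as JSONL.
with open("prover_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```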
The code is released under the MIT License. The model weights are governed by a "Model License / Model Agreement" from DeepSeek AI.