A strong Mixture-of-Experts (MoE) language model from DeepSeek with 671B total parameters, of which 37B are activated for each token. This is an updated checkpoint. Note that fine-tuning for this model is available only upon request; contact Fireworks at https://fireworks.ai/company/contact-us.
| Option | Description |
| --- | --- |
| Fine-tuning (Docs) | DeepSeek V3 03-24 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| Serverless (Docs) | Run queries immediately on pre-configured GPUs and pay only for usage, per token. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for DeepSeek V3 03-24 on Fireworks' reliable, high-performance infrastructure with no rate limits. |
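As an illustration of the serverless option, the sketch below sends a chat-completions request to Fireworks' OpenAI-compatible REST endpoint. The model identifier `accounts/fireworks/models/deepseek-v3-0324` and the `FIREWORKS_API_KEY` environment variable are assumptions; check the model page and your account settings for the exact values.

```python
# Minimal sketch: query DeepSeek V3 03-24 on Fireworks serverless via the
# OpenAI-compatible chat completions endpoint (pay-per-token).
import os
import requests

API_KEY = os.environ["FIREWORKS_API_KEY"]            # assumed env var holding your API key
MODEL_ID = "accounts/fireworks/models/deepseek-v3-0324"  # assumed model identifier

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": "Explain Mixture-of-Experts models in one sentence."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same request shape works against an on-demand deployment; only the deployment you target (and therefore the capacity and rate-limit behavior) changes.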