A strong Mixture-of-Experts (MoE) language model from DeepSeek with 671B total parameters, of which 37B are activated for each token. Note that fine-tuning for this model is only available by contacting Fireworks at https://fireworks.ai/company/contact-us.
DeepSeek V3 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
Immediately run the model on pre-configured GPUs and pay per token.
On-demand deployments give you dedicated GPUs for DeepSeek V3 using Fireworks' reliable, high-performance system with no rate limits.
DeepSeek V3 is a Mixture-of-Experts (MoE) large language model developed by DeepSeek AI. It has 671B total parameters, with 37B activated per token during inference. The model uses Multi-head Latent Attention (MLA) and a Multi-Token Prediction (MTP) objective to improve inference speed and training efficiency.
Provider: DeepSeek
Context length: 131,072 tokens
Serverless: Available
On-demand: Available
Price: $0.9 per 1M tokens
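For serverless use, the model can be queried through Fireworks' OpenAI-compatible chat completions API. The sketch below is a minimal example, assuming the endpoint https://api.fireworks.ai/inference/v1/chat/completions, the model id accounts/fireworks/models/deepseek-v3, and an API key exported as FIREWORKS_API_KEY; confirm the exact model identifier on the model page before use.

```python
# Minimal sketch: call DeepSeek V3 on Fireworks' serverless, pay-per-token endpoint.
# The endpoint path and the model id below are assumptions based on Fireworks'
# OpenAI-compatible API convention; verify them against the model page.
import os
import requests

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

payload = {
    "model": "accounts/fireworks/models/deepseek-v3",  # assumed model id
    "messages": [
        {"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}
    ],
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Print the assistant's reply from the first completion choice.
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape works against an on-demand deployment; only the model/deployment identifier in the payload would change.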