
Deepseek V3 03-24

A strong Mixture-of-Experts (MoE) language model from Deepseek with 671B total parameters, of which 37B are activated per token. This is the updated 03-24 checkpoint. Note that fine-tuning for this model is available only upon request; contact Fireworks at https://fireworks.ai/company/contact-us.

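To try the model on serverless, one option is Fireworks' OpenAI-compatible chat completions endpoint. The sketch below assumes the `openai` Python package, a FIREWORKS_API_KEY environment variable, and the model identifier `accounts/fireworks/models/deepseek-v3-0324`; verify the exact ID on this page before use.

```python
# Minimal sketch: query Deepseek V3 03-24 via Fireworks' OpenAI-compatible serverless API.
# Assumptions: `openai` package installed, FIREWORKS_API_KEY set, and the model ID below
# matches the identifier shown on this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # Fireworks' OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3-0324",  # assumed ID for Deepseek V3 03-24
    messages=[{"role": "user", "content": "Explain Mixture-of-Experts routing in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```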

Fireworks Features

Fine-tuning

Deepseek V3 03-24 can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.

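Fine-tuning data is typically supplied as a JSONL file of chat-formatted examples. The sketch below is illustrative only; confirm the required dataset schema, upload flow, and LoRA hyperparameters in the Fireworks fine-tuning documentation.

```python
# Illustrative sketch: write a small chat-style JSONL training file for LoRA fine-tuning.
# Assumption: each line holds a {"messages": [...]} record; check the Fireworks docs for
# the exact schema and for how to upload the dataset and start a tuning job.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Summarize our refund policy."},
            {"role": "assistant", "content": "Refunds are issued within 14 days of purchase."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```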

Serverless

Immediately run the model on pre-configured GPUs and pay per token.


On-demand Deployment

On-demand deployments give you dedicated GPUs for Deepseek V3 03-24 using Fireworks' reliable, high-performance system with no rate limits.


Info

Provider: Deepseek
Model Type: LLM
Context Length: 163,840 tokens
Serverless: Available
Fine-Tuning: Available
Pricing: $0.90 per 1M tokens
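As a rough cost check, assuming the $0.90 per 1M tokens rate applies uniformly to prompt and completion tokens, a request's cost can be estimated from the token counts returned in the API response's usage field:

```python
# Rough cost estimate, assuming a flat $0.90 per 1M tokens across prompt and completion tokens.
PRICE_PER_MILLION_TOKENS = 0.90

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost for one request."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: 1,200 prompt tokens + 800 completion tokens ≈ $0.0018
print(f"${estimate_cost(1200, 800):.4f}")
```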