DeepSeek V3

A strong Mixture-of-Experts (MoE) language model from DeepSeek with 671B total parameters, of which 37B are activated for each token. Note that fine-tuning for this model is only available by contacting Fireworks at https://fireworks.ai/company/contact-us.


Fireworks Features

Fine-tuning

DeepSeek V3 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.

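As a rough sketch of what fine-tuning data looks like, the snippet below writes a small chat-style dataset to JSONL. The "messages" schema and file layout are assumptions for illustration; refer to the Fireworks fine-tuning documentation for the exact format it expects.

```python
# Hypothetical example: build a small supervised fine-tuning dataset as JSONL.
# The chat-style "messages" schema shown here is an assumption; consult the
# Fireworks fine-tuning documentation for the authoritative format.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my API key?"},
            {"role": "assistant", "content": "Go to Settings > API Keys and click 'Rotate key'."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```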

Serverless

Immediately run the model on pre-configured GPUs and pay per token.

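The sketch below shows one way to call DeepSeek V3 serverlessly through Fireworks' OpenAI-compatible endpoint. The model identifier is an assumption; confirm the exact name on the model page before use.

```python
# Minimal sketch of a serverless, pay-per-token call using the OpenAI-compatible
# client pointed at the Fireworks inference endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3",  # assumed identifier
    messages=[{"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```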

On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek V3 using Fireworks' reliable, high-performance system with no rate limits.


DeepSeek V3 FAQs

What is DeepSeek V3?

DeepSeek V3 is a Mixture-of-Experts (MoE) large language model developed by DeepSeek AI. It has 671B total parameters, with 37B activated per token during inference. The model uses Multi-head Latent Attention (MLA) and a Multi-Token Prediction (MTP) objective to improve inference speed and training efficiency.
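Since only a subset of experts fires per token, the active compute is a small fraction of the total parameter count. A quick calculation using the figures above:

```python
# Back-of-the-envelope: fraction of parameters active per token.
total_params = 671e9   # 671B total parameters
active_params = 37e9   # 37B activated per token

print(f"{active_params / total_params:.1%} of parameters are active per token")
# -> roughly 5.5%
```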

Info & Pricing

Provider: DeepSeek

Model Type: LLM

Context Length: 131,072 tokens

Serverless: Available

Fine-Tuning: Available

Pricing Per 1M Tokens: $0.90
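As a rough, hypothetical cost estimate (assuming the $0.90 per 1M tokens rate applies uniformly to prompt and completion tokens; actual billing may distinguish input and output):

```python
# Hypothetical cost estimate at $0.90 per 1M tokens, assuming the same rate
# applies to both prompt and completion tokens.
PRICE_PER_MILLION_TOKENS = 0.90

prompt_tokens = 500_000
completion_tokens = 200_000

cost = (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"Estimated cost: ${cost:.2f}")  # Estimated cost: $0.63
```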