DeepSeek Coder V2 Lite Base

DeepSeek Coder V2 Lite Base is an open-source Mixture-of-Experts (MoE) code language model from the DeepSeek-Coder-V2 family, which achieves performance comparable to GPT-4 Turbo on code-specific tasks. DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens, expanding supported programming languages from 86 to 338 and extending the context length from 16K to 128K. The Lite Base variant is the family's smaller base model, with 16B total parameters and 2.4B active per token.
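
As a base (non-instruct) model, it is best queried through a plain completions endpoint. Below is a minimal sketch that sends a code-completion prompt through Fireworks' OpenAI-compatible API; the model ID string is an assumption based on Fireworks' usual "accounts/fireworks/models/<name>" naming, so verify the exact identifier on the model page.

```python
# Minimal code-completion sketch against Fireworks' OpenAI-compatible API.
# Assumption: the model ID below follows Fireworks' usual naming convention;
# check the model page for the exact string.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-lite-base",  # assumed ID
    prompt="# Python function that checks whether a number is prime\ndef is_prime(n):",
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].text)
```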

Fireworks Features

Fine-tuning

DeepSeek Coder V2 Lite Base can be customized with your own data to improve responses. Fireworks uses LoRA (low-rank adaptation) to train and deploy your personalized model efficiently.
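
As a hedged sketch of the data-preparation step: Fireworks accepts JSONL training files, and for a base completion model each record is assumed here to carry prompt/completion fields. The exact schema and upload flow are assumptions, so consult the fine-tuning docs before starting a job.

```python
# Sketch: write a JSONL training file for LoRA fine-tuning on Fireworks.
# Assumption: prompt/completion records for a base (completion) model;
# the exact schema Fireworks expects may differ -- check the docs.
import json

examples = [
    {
        "prompt": "# Sum a list of integers\ndef sum_list(xs):",
        "completion": "\n    return sum(xs)\n",
    },
    {
        "prompt": "# Reverse a string\ndef reverse(s):",
        "completion": "\n    return s[::-1]\n",
    },
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Once the job completes, the resulting LoRA adapter can be served and queried with the same API shown above, under your own account's model ID.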

On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Base using Fireworks' reliable, high-performance system with no rate limits.
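For illustration, the sketch below hits the inference REST endpoint directly with `requests`. It assumes a dedicated deployment is reachable through the same OpenAI-compatible completions route as serverless models; the exact addressing scheme for on-demand deployments may differ, so check the deployment docs.

```python
# Sketch: raw HTTP call to Fireworks' completions endpoint.
# Assumption: an on-demand deployment is queried through the same
# OpenAI-compatible route; see the deployment docs for exact addressing.
import os

import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/deepseek-coder-v2-lite-base",  # assumed ID
        "prompt": "// C function to swap two ints\nvoid swap(int *a, int *b) {",
        "max_tokens": 64,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```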

Info

Model Type: LLM
Context Length: 163,840 tokens
Fine-Tuning: Available
Pricing: $0.20 per 1M tokens