DeepSeek Coder V2 Instruct

DeepSeek Coder V2 Instruct is a 236-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 21 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it strengthens coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K tokens, all while maintaining strong general language performance.

Try Model
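As a rough sketch of what querying the model looks like, the example below assumes Fireworks' OpenAI-compatible chat completions endpoint at api.fireworks.ai and the model identifier accounts/fireworks/models/deepseek-coder-v2-instruct; verify the exact model slug on the model page before use.

```python
# Sketch: querying DeepSeek Coder V2 Instruct through Fireworks'
# OpenAI-compatible chat completions API. The model slug below is an
# assumption; confirm it against the model page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response.choices[0].message.content)
```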

Fireworks Features

Fine-tuning

DeepSeek Coder V2 Instruct can be customized with your own data to improve responses. Fireworks uses low-rank adaptation (LoRA) to train and deploy your personalized model efficiently; a dataset-preparation sketch follows this section.

Learn More
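The exact dataset schema Fireworks expects is documented separately; as a rough sketch, the example below assumes chat-style JSONL training records, one JSON object per line with an OpenAI-style "messages" list of prompt/response pairs.

```python
# Sketch: preparing a chat-style JSONL dataset for fine-tuning.
# The schema shown here (a "messages" list per line) is an assumption;
# check the fine-tuning docs for the exact format Fireworks expects.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
            {"role": "assistant", "content": "result = [f(x) for x in items if x > 0]"},
        ]
    },
    # ... more prompt/response pairs drawn from your own data
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Because LoRA trains only small low-rank adapter weights on top of the frozen base model, personalized variants are cheap to train and can be served alongside the base model without a full redeployment.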

On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Instruct using Fireworks' reliable, high-performance system with no rate limits.

Learn More

Info

Model Type: LLM

Context Length: 131,072 tokens (128K)

Fine-Tuning: Available

Pricing: $0.90 per 1M tokens
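For a back-of-the-envelope cost estimate, the sketch below assumes the $0.90 per 1M tokens rate applies to input and output tokens alike; confirm the exact billing rules on the pricing page.

```python
# Sketch: estimating serverless inference cost, assuming the
# $0.90 per 1M tokens rate covers both input and output tokens.
PRICE_PER_MILLION_TOKENS = 0.90

def estimated_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_MILLION_TOKENS

# e.g. a 10,000-token prompt with a 2,000-token completion:
print(estimated_cost(10_000, 2_000))  # roughly $0.011 for this example
```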