DeepSeek Coder V2 Instruct is a 236-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 21 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it strengthens coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K while maintaining strong general language performance.
DeepSeek Coder V2 Instruct can be customized with your own data to improve responses. Fireworks uses LoRA (low-rank adaptation) to efficiently train and deploy your personalized model.
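As a rough sketch of what customization data might look like, the snippet below writes a small JSONL file of chat-style training examples. The schema (OpenAI-style "messages" records) and the file name are assumptions for illustration, not the documented Fireworks format; consult the fine-tuning docs for the authoritative schema.

```python
import json

# Hypothetical chat-format fine-tuning examples (schema assumed, not
# taken from Fireworks docs): each record holds one conversation.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."},
            {"role": "assistant", "content": "def reverse(s: str) -> str:\n    return s[::-1]"},
        ]
    },
]

# JSONL: one JSON object per line, a common format for fine-tuning datasets.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```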
On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Instruct using Fireworks' reliable, high-performance system with no rate limits.
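Once deployed, the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. The sketch below assumes the openai Python client, an API key in a FIREWORKS_API_KEY environment variable, and the model slug accounts/fireworks/models/deepseek-coder-v2-instruct; check the model page for the exact identifier.

```python
import os

from openai import OpenAI

# Fireworks exposes an OpenAI-compatible API; the environment variable
# and model slug below are assumptions for illustration.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-instruct",
    messages=[
        {"role": "user", "content": "Write a SQL query that finds duplicate emails."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```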
Model type: LLM
Context length: 131,072 tokens
Status: Available
Pricing: $0.9 / 1M tokens