LLM · Tunable · Chat
DeepSeek Coder V2 Instruct is a 236-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 21 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4-Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it strengthens coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K tokens while maintaining strong general language performance.
DeepSeek Coder V2 Instruct can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can be served efficiently at inference time.
See the Fine-tuning guide for details.
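As a minimal sketch of what a fine-tuning dataset might look like, the snippet below writes conversation examples to a JSONL file. The "messages" schema and the train.jsonl filename are assumptions for illustration; consult the Fine-tuning guide for the exact dataset format and upload steps Fireworks expects.

```python
import json

# Assumed chat-style training examples; the exact schema is defined in the
# Fine-tuning guide and may differ from this sketch.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."},
            {"role": "assistant", "content": "def reverse(s: str) -> str:\n    return s[::-1]"},
        ]
    },
]

# Fine-tuning datasets are commonly uploaded as JSONL: one example per line.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```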
On-demand deployments let you run DeepSeek Coder V2 Instruct on dedicated GPUs with Fireworks' high-performance serving stack, offering high reliability and no rate limits.
See the On-demand deployments guide for details.
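Once the model is available, you can query it through Fireworks' OpenAI-compatible API. The sketch below assumes the serverless model identifier accounts/fireworks/models/deepseek-coder-v2-instruct; an on-demand deployment exposes its own model name, which you would substitute.

```python
from openai import OpenAI

# Point the OpenAI client at Fireworks' OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # replace with your Fireworks API key
)

# Model identifier is an assumption; use your deployment's model name if
# you are running an on-demand deployment.
response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-instruct",
    messages=[
        {"role": "user", "content": "Write a binary search function in Python."}
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```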