DeepSeek Coder V2 Lite Instruct is a 16-billion-parameter open-source Mixture-of-Experts (MoE) code language model from DeepSeek AI, with 2.4 billion active parameters per token. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it strengthens coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K tokens while maintaining strong general language performance.
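For orientation, here is a minimal sketch of calling the model through Fireworks' OpenAI-compatible chat completions API. The model ID follows Fireworks' usual `accounts/fireworks/models/...` naming scheme and is an assumption to verify against the model page.

```python
import os
from openai import OpenAI

# Fireworks exposes an OpenAI-compatible endpoint. The model ID below is an
# assumption based on Fireworks' usual naming scheme -- check the model page.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-lite-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        },
    ],
    max_tokens=512,
    temperature=0.2,  # a low temperature is a sensible default for code generation
)
print(response.choices[0].message.content)
```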
| Capability | Description |
| --- | --- |
| Fine-tuning | DeepSeek Coder V2 Lite Instruct can be customized with your data to improve responses; Fireworks uses LoRA to efficiently train and deploy your personalized model (see the dataset sketch below the table). |
| On-demand deployment | On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Instruct using Fireworks' reliable, high-performance system with no rate limits. |
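Since LoRA fine-tuning is driven by your training data, a sketch of preparing that data may help. Fine-tuning services of this kind typically consume JSONL files of OpenAI-style chat records; treat the field names below as assumptions and confirm the exact schema in the Fireworks fine-tuning docs.

```python
import json

# A sketch of a chat-format JSONL training file, assuming the OpenAI-style
# "messages" schema commonly used for instruction fine-tuning.
# Field names are assumptions -- confirm the exact schema in the docs.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Reverse a list in Python."},
            {
                "role": "assistant",
                "content": "Use slicing: `items[::-1]` returns a reversed copy.",
            },
        ]
    },
]

# Write one JSON record per line, the usual JSONL convention.
with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```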