
DeepSeek Coder V2 Lite Instruct

DeepSeek Coder V2 Lite Instruct is a 16-billion-parameter open-source Mixture-of-Experts (MoE) code language model from DeepSeek AI, with 2.4 billion active parameters per token. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-training on an additional 6 trillion tokens strengthens its coding and mathematical reasoning, adds support for 338 programming languages, and extends the context length from 16K to 128K tokens, all while maintaining strong general language performance.

Try Model
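As a minimal sketch, the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. The model identifier below follows Fireworks' usual serverless naming scheme but is an assumption; check the model page for the exact ID.

```python
# Minimal sketch: query DeepSeek Coder V2 Lite Instruct via Fireworks'
# OpenAI-compatible API. The model ID is an assumption based on Fireworks'
# usual naming convention; verify it against the model page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],  # your Fireworks API key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-lite-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
    max_tokens=512,
    temperature=0.2,  # low temperature suits deterministic code generation
)
print(response.choices[0].message.content)
```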

Fireworks Features

Fine-tuning

DeepSeek Coder V2 Lite Instruct can be customized with your own data to improve responses. Fireworks uses LoRA (Low-Rank Adaptation) to efficiently train and deploy your personalized model; see the sketch after this section for a rough picture of preparing training data.

Learn More
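As a rough illustration, fine-tuning services typically accept a JSONL file of chat-style examples. The record layout below is an assumption, not Fireworks' confirmed schema, so verify it against the fine-tuning documentation before uploading.

```python
# Hedged sketch: write a JSONL training file in the chat-message format
# commonly used for instruction fine-tuning. The exact schema Fireworks
# expects is an assumption here; check the fine-tuning docs.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
            {"role": "assistant", "content": "result = [f(x) for x in items]"},
        ]
    },
    # ... more examples drawn from your own data
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

LoRA trains a small set of low-rank adapter weights on top of the frozen base model, which keeps per-customer fine-tunes cheap to train and serve.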

On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Instruct using Fireworks' reliable, high-performance system with no rate limits.

Learn More
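Requests to a dedicated deployment can reuse the same OpenAI-compatible client; only the model identifier changes to point at your deployment. The account name and deployment reference below are hypothetical placeholders, since the exact identifier format comes from your deployment details.

```python
# Hedged sketch: the same client works against a dedicated on-demand
# deployment; you swap in your own deployment's model identifier.
# "my-account" and the deployment reference are hypothetical placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    # Placeholder: consult your deployment details for the exact model ID.
    model="accounts/my-account/deployedModels/deepseek-coder-v2-lite-instruct-abc123",
    messages=[{"role": "user", "content": "Explain tail-call optimization in one paragraph."}],
)
print(response.choices[0].message.content)
```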

Info

Model Type: LLM
Context Length: 163,840 tokens
Fine-Tuning: Available
Pricing Per 1M Tokens: $0.50