

DeepSeek Coder 7B Instruct v1.5

DeepSeek-Coder-7B-Instruct-v1.5 is pre-trained from DeepSeek-LLM 7B on 2T tokens with a 4K window size and a next-token prediction objective, then fine-tuned on 2B tokens of instruction data.
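As a sketch of how a hosted model like this is typically queried, the snippet below builds a request body for an OpenAI-compatible chat completions endpoint. The endpoint URL and model ID shown are assumptions for illustration; check the Fireworks documentation for the exact values.

```python
import json

# Assumed endpoint and model ID -- verify against the Fireworks docs.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/deepseek-coder-7b-instruct-v1p5"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for a chat completion request."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Write a Python function that reverses a string.")
print(json.dumps(body, indent=2))
# Send with an HTTP client, e.g.:
#   requests.post(API_URL, json=body,
#                 headers={"Authorization": f"Bearer {api_key}"})
```

The body is then POSTed with a bearer token in the `Authorization` header, as with any OpenAI-compatible API.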


Fireworks Features

Fine-tuning

DeepSeek Coder 7B Instruct v1.5 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
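The core idea behind LoRA can be shown in a few lines: the pre-trained weight matrix stays frozen, and only a low-rank update is trained and added on top. This is a minimal NumPy sketch of that mechanism, with illustrative dimensions, not Fireworks' actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4          # output dim, input dim, LoRA rank (illustrative)
W = rng.normal(size=(d, k))  # frozen pre-trained weight
A = rng.normal(size=(r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Base projection plus the low-rank LoRA update (scale * B @ A)."""
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.normal(size=(2, k))
y = lora_forward(x)
# Because B starts at zero, the adapted model initially matches the
# frozen base model exactly; training then updates only A and B
# (r * (d + k) parameters instead of d * k).
assert np.allclose(y, x @ W.T)
```

Because only `A` and `B` are trained, the adapter is a small fraction of the full model's parameters, which is what makes per-customer fine-tuning and deployment cheap.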


On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek Coder 7B Instruct v1.5 using Fireworks' reliable, high-performance system with no rate limits.


Info

Provider: Deepseek
Model Type: LLM
Context Length: 4096
Fine-Tuning: Available
Pricing Per 1M Tokens: $0.20