The DeepSeek Coder 7B Base v1.5 LLM is pre-trained from DeepSeek LLM 7B on 2T tokens, using a 4K context window and a next-token prediction objective.
- **Fine-tuning** (Docs): DeepSeek Coder 7B Base v1.5 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
- **On-demand deployment** (Docs): On-demand deployments let you run DeepSeek Coder 7B Base v1.5 on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits.
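As a sketch of how a call to this model might look, the snippet below builds a request payload for Fireworks' OpenAI-compatible completions endpoint. The endpoint URL and the `accounts/fireworks/models/...` model id follow Fireworks' usual naming conventions but are assumptions here; check the current Fireworks docs for the exact identifier. Since this is a base model (no instruction tuning), the natural interface is raw completion rather than chat.

```python
import json

# Assumed values -- verify against the Fireworks documentation before use.
API_URL = "https://api.fireworks.ai/inference/v1/completions"
MODEL_ID = "accounts/fireworks/models/deepseek-coder-7b-base-v1p5"


def build_completion_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON payload for a raw-completion call.

    Base models have no chat template, so the prompt is continued
    directly via next-token prediction.
    """
    return {
        "model": MODEL_ID,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits code completion
    }


payload = build_completion_request("def fibonacci(n):")
print(json.dumps(payload, indent=2))
```

To actually send the request, POST this payload to `API_URL` with an `Authorization: Bearer <your-api-key>` header using any HTTP client.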