DeepSeek Coder 7B Base v1.5 is an LLM continually pre-trained from DeepSeek LLM 7B on 2T tokens, using a 4K context window and a next-token prediction objective.
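As a base model it is suited to plain code completion rather than chat. The snippet below is a minimal local-inference sketch with Hugging Face Transformers, assuming the public checkpoint `deepseek-ai/deepseek-coder-7b-base-v1.5`; the prompt and generation settings are illustrative, not part of this page.

```python
# Minimal sketch: local code completion with the base model (next-token prediction).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-7b-base-v1.5"  # assumed Hugging Face checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Base model: plain completion, no chat template.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```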
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | DeepSeek Coder 7B Base v1.5 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for DeepSeek Coder 7B Base v1.5 using Fireworks' reliable, high-performance system with no rate limits. |
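Once fine-tuned or deployed on-demand, the model can be queried through Fireworks' OpenAI-compatible API. The sketch below uses the completions endpoint, which fits a base (non-chat) model; the model slug is an assumption, so replace it with the identifier shown for your own model or deployment in the Fireworks console.

```python
# Hedged sketch: querying a Fireworks deployment via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="FIREWORKS_API_KEY",  # replace with your Fireworks API key
)

response = client.completions.create(
    model="accounts/fireworks/models/deepseek-coder-7b-base-v1p5",  # assumed slug
    prompt="# Python function to parse a CSV file\ndef parse_csv(path):\n",
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].text)
```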