DeepSeek Coder is a series of code language models, each trained from scratch on 2T tokens with a composition of 87% code and 13% natural language in both English and Chinese. DeepSeek Coder 6.7B Base is a 6.7B-parameter model with Multi-Head Attention, trained on 2 trillion tokens using a 16K window size and an extra fill-in-the-blank task to support project-level code completion and infilling. Fireworks serves this model as DeepSeek Coder 7B Base.
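Because of the fill-in-the-blank training objective, the base model can complete code given both a prefix and a suffix. Below is a minimal sketch of infilling with the Hugging Face weights, assuming the public checkpoint deepseek-ai/deepseek-coder-6.7b-base and the FIM sentinel tokens documented in the DeepSeek Coder release:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumes the public Hugging Face checkpoint; adjust the model id if you
# serve the weights from somewhere else.
model_id = "deepseek-ai/deepseek-coder-6.7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Fill-in-the-middle prompt: prefix, a hole to fill, then the suffix.
# The sentinel tokens are the ones from the DeepSeek Coder release.
prompt = """<｜fim▁begin｜>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left, right = [], []
<｜fim▁hole｜>
    return quick_sort(left) + [pivot] + quick_sort(right)<｜fim▁end｜>"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Print only the newly generated tokens, i.e. the infilled middle.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```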
DeepSeek Coder 7B Base can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
On-demand deployments give you dedicated GPUs for DeepSeek Coder 7B Base using Fireworks' reliable, high-performance system with no rate limits.
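Once the model is live (serverless or on an on-demand deployment), it can be queried through Fireworks' OpenAI-compatible completions endpoint. A minimal sketch, assuming the model slug accounts/fireworks/models/deepseek-coder-7b-base (check your account's model list for the exact id) and an API key in the FIREWORKS_API_KEY environment variable:

```python
import os
from openai import OpenAI

# Fireworks exposes an OpenAI-compatible API; only the base_url and
# API key differ from a standard OpenAI client setup.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

# Base (non-chat) models are queried via plain text completions.
# The model slug below is an assumption; adjust it to match your deployment.
response = client.completions.create(
    model="accounts/fireworks/models/deepseek-coder-7b-base",
    prompt="# Return True if n is prime\ndef is_prime(n):",
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].text)
```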
Provider: DeepSeek
Context length: 4096 tokens
Availability: Available
Pricing: $0.2 per 1M tokens