LLM
Deepseek-Coder-7B-Instruct-v1.5 is pre-trained from Deepseek-LLM 7B on 2T tokens with a 4K window size and a next-token prediction objective, then fine-tuned on 2B tokens of instruction data.
On-demand deployments let you run Deepseek Coder 7B Instruct v1.5 on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
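Once deployed, the model can be queried through an OpenAI-compatible chat completions endpoint. The sketch below builds a request body for such a call; the endpoint URL and model identifier are assumptions for illustration, so substitute the values from your own deployment.

```python
# Sketch: building a chat completion request for a deployed model.
# FIREWORKS_CHAT_URL and MODEL_ID are assumed placeholders -- replace them
# with the endpoint and model identifier from your own deployment.
import json

FIREWORKS_CHAT_URL = "https://api.fireworks.ai/inference/v1/chat/completions"  # assumed
MODEL_ID = "accounts/fireworks/models/deepseek-coder-7b-instruct-v1p5"  # assumed

def build_chat_request(prompt: str, max_tokens: int = 256,
                       temperature: float = 0.2) -> dict:
    """Build the JSON body for a single-turn chat completion request."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Write a Python function that reverses a string.")
# Send with any HTTP client and your API key, for example:
#   requests.post(FIREWORKS_CHAT_URL,
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 data=json.dumps(body))
print(json.dumps(body, indent=2))
```

Keeping request construction separate from transport makes it easy to swap in whichever HTTP client or SDK you already use.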