Fireworks /
DeepSeek R1 Distill Qwen 32B
accounts/fireworks/models/deepseek-r1-distill-qwen-32b
LLM · Tunable · Chat
DeepSeek R1 Distill Qwen 32B is a dense 32B-parameter chat model distilled from DeepSeek-R1's reasoning outputs onto the Qwen 2.5 32B base, bringing much of R1's chain-of-thought reasoning ability to a smaller, cheaper-to-serve model.
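The model is served behind Fireworks' OpenAI-compatible chat completions endpoint, so querying it is a single HTTP POST with the model path above. The sketch below builds the request payload and, if a `FIREWORKS_API_KEY` environment variable is set, sends it using only the standard library; the prompt and `max_tokens` value are illustrative, not defaults.

```python
import json
import os
import urllib.request

# Model path from this page.
MODEL = "accounts/fireworks/models/deepseek-r1-distill-qwen-32b"
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__":
    payload = build_request("Explain low-rank adaptation in one paragraph.")
    api_key = os.environ.get("FIREWORKS_API_KEY")
    if api_key:
        # Send the request; the response follows the OpenAI chat schema.
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
    else:
        # No key set: just show the payload that would be sent.
        print(json.dumps(payload, indent=2))
```

Because the endpoint is OpenAI-compatible, the official `openai` Python client also works by pointing its `base_url` at `https://api.fireworks.ai/inference/v1`.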
Fine-tuning
DeepSeek R1 Distill Qwen 32B can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can be served efficiently at inference time.
See the Fine-tuning guide for details.
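Fine-tuning jobs consume a dataset of chat-style examples, one JSON record per line (JSONL), each with a `messages` array like the inference API uses. A minimal sketch of validating and serializing such a dataset, assuming the common `{"messages": [...]}` record shape (check the Fine-tuning guide for the exact schema Fireworks expects):

```python
import json

VALID_ROLES = {"system", "user", "assistant"}

def validate_record(record: dict) -> bool:
    """Check one training example: a non-empty messages list of role/content pairs."""
    msgs = record.get("messages")
    if not isinstance(msgs, list) or not msgs:
        return False
    return all(
        isinstance(m, dict)
        and m.get("role") in VALID_ROLES
        and isinstance(m.get("content"), str)
        for m in msgs
    )

def to_jsonl(records: list[dict]) -> str:
    """Serialize validated records to JSONL, one example per line."""
    bad = [i for i, r in enumerate(records) if not validate_record(r)]
    if bad:
        raise ValueError(f"invalid records at indices {bad}")
    return "\n".join(json.dumps(r) for r in records)

# Toy dataset: prompt/response pairs the LoRA adapter will be trained on.
examples = [
    {"messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "Low-rank adapters added to frozen base weights."},
    ]},
    {"messages": [
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "Name the base model."},
        {"role": "assistant", "content": "Qwen 2.5 32B."},
    ]},
]

if __name__ == "__main__":
    print(to_jsonl(examples))
```

Writing the resulting string to a `.jsonl` file gives you a dataset you can upload when creating a fine-tuning job.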
On-demand deployments
On-demand deployments let you run DeepSeek R1 Distill Qwen 32B on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
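As a rough sketch, a dedicated deployment is typically created with the `firectl` CLI by pointing it at the model path from this page (flags for GPU count and accelerator type vary; consult the On-demand deployments guide for the current options):

```shell
# Requires firectl to be installed and authenticated against your account.
# Exact flags (accelerator type, replica counts) are deployment-specific.
firectl create deployment accounts/fireworks/models/deepseek-r1-distill-qwen-32b
```

Once the deployment is ready, requests are sent to the same chat completions endpoint, targeting your deployed model instead of the shared serverless one.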