
Deepseek R1 Distill Qwen 7B

accounts/fireworks/models/deepseek-r1-distill-qwen-7b

LLM · Tunable · Chat

Qwen 7B distilled with reasoning traces from DeepSeek R1

Fine-tuning

Deepseek R1 Distill Qwen 7B can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can then be served efficiently at inference time.

See the Fine-tuning guide for details.

Fine-tune this model
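Chat fine-tuning datasets are typically supplied as JSONL, one training example per line, with each example holding a `messages` conversation. This is a minimal sketch of that shape — treat the exact schema (field names, required roles, limits) as defined by the Fine-tuning guide:

```json
{"messages": [{"role": "user", "content": "What is 7 * 8?"}, {"role": "assistant", "content": "7 * 8 = 56."}]}
{"messages": [{"role": "user", "content": "Name a prime greater than 10."}, {"role": "assistant", "content": "11 is a prime greater than 10."}]}
```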

On-demand deployments

On-demand deployments let you run Deepseek R1 Distill Qwen 7B on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.

See the On-demand deployments guide for details.

Deploy this base model
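Once deployed (or when using the shared serverless endpoint), the model is reachable through Fireworks' OpenAI-compatible chat completions API. A minimal sketch using only the Python standard library — the `FIREWORKS_API_KEY` environment variable name and the request parameters here are illustrative assumptions; consult the deployments guide for the authoritative details:

```python
import json
import os
import urllib.request

# OpenAI-compatible chat completions endpoint on Fireworks.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

# Request body; the model id matches the one shown on this page.
payload = {
    "model": "accounts/fireworks/models/deepseek-r1-distill-qwen-7b",
    "messages": [
        {"role": "user", "content": "What is 17 * 23? Think step by step."}
    ],
    "max_tokens": 512,
}

def ask(api_key: str) -> str:
    """POST the chat request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # FIREWORKS_API_KEY is an assumed variable name for your API key.
    key = os.environ.get("FIREWORKS_API_KEY")
    if key:
        print(ask(key))
```

Because the distilled model emits its chain-of-thought before the final answer, budget `max_tokens` generously for reasoning-heavy prompts.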