Qwen2.5-Coder-7B
accounts/fireworks/models/qwen2p5-coder-7b
LLM · Tunable · Chat
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at three sizes: 1.5, 7, and 32 billion parameters.
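As a sketch of how the model is queried, the snippet below assembles an OpenAI-compatible chat-completions payload using the model identifier from this page. The endpoint URL and request shape follow the OpenAI-compatible convention; see the Querying text models documentation for the authoritative details, and note that actually sending the request (commented out here) requires a Fireworks API key.

```python
import json

# OpenAI-compatible chat-completions endpoint (assumed; check the docs)
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Model identifier from this page
MODEL = "accounts/fireworks/models/qwen2p5-coder-7b"


def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble a chat-completions request body for Qwen2.5-Coder-7B."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request(
    "Write a Python function that checks whether a string is a palindrome."
)
print(json.dumps(payload, indent=2))

# To send it, POST the payload with an Authorization header, e.g.:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {api_key}"})
```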
Fine-tuning
Qwen2.5-Coder-7B can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can be served efficiently at inference time.
See the Fine-tuning guide for details.
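Before launching a fine-tuning job, the training data must be serialized as JSONL. The sketch below assumes a conversation-style schema with a `messages` list per example; the exact schema Fireworks expects (field names, role values) is documented in the Fine-tuning guide, so treat this as illustrative only.

```python
import json

# Two toy training examples in an assumed conversation format:
# one JSON object per line, each with a "messages" list.
examples = [
    {"messages": [
        {"role": "user", "content": "Reverse a list in Python."},
        {"role": "assistant", "content": "Use slicing: items[::-1]."},
    ]},
    {"messages": [
        {"role": "user", "content": "Read a file line by line."},
        {"role": "assistant",
         "content": "Iterate over the file object: for line in f: ..."},
    ]},
]

# Write one compact JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity-check: every line parses back and carries a messages list.
with open("train.jsonl") as f:
    parsed = [json.loads(line) for line in f]
```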
On-demand deployments
On-demand deployments let you run Qwen2.5-Coder-7B on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
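Once a dedicated deployment exists, requests target it instead of the shared serverless model. The `#my-deployment` suffix below is a hypothetical placeholder for a deployment identifier; the real identifier format and value come from your account after deploying, as described in the On-demand deployments guide.

```python
# Hypothetical deployment-scoped identifier -- replace the part after
# "#" with the identifier of your actual deployment.
DEPLOYED_MODEL = "accounts/fireworks/models/qwen2p5-coder-7b#my-deployment"

# Same chat-completions payload shape as serverless; only the model
# identifier changes to point at the dedicated deployment.
payload = {
    "model": DEPLOYED_MODEL,
    "messages": [{"role": "user", "content": "Explain list comprehensions."}],
}
```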