QwQ 32B is a medium-sized reasoning model from Qwen.
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | QwQ 32B can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments let you run QwQ 32B on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits. |
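As a minimal sketch of querying the model, the snippet below builds an OpenAI-compatible chat completion payload. The endpoint path and the model identifier (`accounts/fireworks/models/qwq-32b`) are assumptions here; confirm both against the model's page before use.

```python
import json

# Assumed endpoint and model id -- verify on the Fireworks model page.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/qwq-32b"

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Explain why the sky is blue, step by step.")
print(json.dumps(payload, indent=2))

# Send with any HTTP client, supplying your API key, e.g.:
# requests.post(API_URL, json=payload,
#               headers={"Authorization": f"Bearer {API_KEY}"})
```

The same payload shape works whether the model is served from the shared serverless pool or from an on-demand deployment; only the deployment backing the model id changes.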