Qwen3 1.7B FP8, used as a draft model for speculative decoding at a 40960-token context length
Fine-tuning (Docs) | The Qwen3 1.7B FP8 draft model (40960-token context length) can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for the Qwen3 1.7B FP8 draft model (40960-token context length) on Fireworks' reliable, high-performance infrastructure with no rate limits.
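Deployments like this are typically queried through an OpenAI-compatible chat-completions endpoint. The sketch below only builds the request payload; the base URL and the model identifier are assumptions for illustration — check your Fireworks dashboard for the actual values.

```python
import json

# Assumed values for illustration -- verify against your own deployment.
FIREWORKS_BASE_URL = "https://api.fireworks.ai/inference/v1"
MODEL_ID = "accounts/fireworks/models/qwen3-1p7b"  # hypothetical model identifier

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The draft model used for speculative decoding is configured
    server-side; the client payload stays a plain chat request.
    """
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize speculative decoding in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would be POSTed to `FIREWORKS_BASE_URL + "/chat/completions"` with your API key in the `Authorization` header.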