Qwen3 1.7B FP8 used as a draft model for 131072 context length
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | The Qwen3 1.7B FP8 draft model for 131072-token context can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for the Qwen3 1.7B FP8 draft model (131072-token context) on Fireworks' reliable, high-performance infrastructure with no rate limits (see the example below). |
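Once the model is available on a deployment, it can be queried through Fireworks' OpenAI-compatible endpoint. The sketch below assumes the `openai` Python SDK and uses a placeholder model identifier; substitute the exact model or deployment slug shown in your Fireworks console.

```python
# Minimal sketch: querying a Fireworks deployment via the OpenAI-compatible API.
# The model slug below is a placeholder and may differ for your on-demand deployment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # Fireworks OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen3-1p7b",  # placeholder; use your deployment's model ID
    messages=[{"role": "user", "content": "Summarize speculative decoding in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```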