Qwen3 1.7B FP8 draft model for 40960 context length
The Qwen3 1.7B FP8 draft model for 40960 context length can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
On-demand deployments give you dedicated GPUs for the Qwen3 1.7B FP8 draft model using Fireworks' reliable, high-performance system with no rate limits.
Context length: 40960 tokens
Availability: Available
Price: $0.10 per 1M tokens
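Once the model is available (serverless or on an on-demand deployment), it can be queried through Fireworks' OpenAI-compatible chat completions endpoint. The sketch below uses the openai Python client; the model identifier accounts/fireworks/models/qwen3-1p7b is a placeholder assumption, so substitute the exact ID shown on this page or on your deployment.

```python
# Minimal sketch: calling a Fireworks-hosted model through the
# OpenAI-compatible API. The model ID below is an assumption --
# replace it with the identifier from the model page or deployment.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen3-1p7b",  # assumed placeholder ID
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize speculative decoding in one sentence."}
    ],
)
print(response.choices[0].message.content)
```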