Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with previous Qwen releases, Qwen1.5 brings significant performance improvements, multilingual support, and stable support for 32K context length.
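Below is a minimal sketch of querying Qwen1.5 72B Chat through Fireworks' OpenAI-compatible chat completions endpoint. The model identifier `accounts/fireworks/models/qwen1p5-72b-chat` is an assumption based on Fireworks naming conventions; check the model page for the exact ID.

```python
# Minimal sketch: chat completion against Fireworks' OpenAI-compatible API.
# Assumes the `openai` Python package and a Fireworks API key.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen1p5-72b-chat",  # assumed model ID
    messages=[
        {"role": "user", "content": "Summarize what Qwen1.5 is in one sentence."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```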
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | Qwen1.5 72B Chat can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model (see the data-format sketch below the table). |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for Qwen1.5 72B Chat using Fireworks' reliable, high-performance system with no rate limits. |
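A minimal sketch of preparing fine-tuning data in a chat-message JSONL format. This schema is an assumption modeled on common chat fine-tuning conventions; the exact format Fireworks expects is described in the fine-tuning docs linked above.

```python
# Sketch: write training examples as JSONL, one chat conversation per line.
# The "messages" schema here is assumed, not confirmed against Fireworks docs.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "The capital of France is Paris."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```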