Qwen1.5 72B Chat
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared to previous Qwen versions, Qwen1.5 delivers significant performance improvements, multilingual support, and stable support for a 32K context length.
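The model can be queried through Fireworks' OpenAI-compatible chat completions API. Below is a minimal sketch using the openai Python client; the model identifier accounts/fireworks/models/qwen1p5-72b-chat is an assumption and may differ from the canonical name shown on the model page.

```python
# Minimal sketch: chat completion against Fireworks' OpenAI-compatible endpoint.
# The model identifier below is assumed -- confirm it on the model page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",  # your Fireworks API key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen1p5-72b-chat",  # assumed identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of a 32K context window."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].message.content)
```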
Qwen1.5 72B Chat can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can be served efficiently at inference time.
See the Fine-tuning guide for details.
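As a rough illustration, a fine-tuning dataset is typically a JSONL file with one chat-style conversation per line. The exact schema below (a "messages" array of role/content pairs) is an assumption; confirm the required format in the Fine-tuning guide before uploading.

```python
# Sketch of preparing a conversational fine-tuning dataset as JSONL.
# The {"messages": [...]} record shape is assumed; check the Fine-tuning guide.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for Acme Co."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
    # ... more conversations, each as a {"messages": [...]} record
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The resulting file can then be uploaded as a dataset and referenced when launching a LoRA fine-tuning job; the guide covers the exact commands.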
On-demand deployments let you run Qwen1.5 72B Chat on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
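Once a deployment is live, it can be queried with the same OpenAI-compatible client, for example with streaming enabled. The deployment-scoped model string below is a hypothetical placeholder; the On-demand deployments guide describes how to address your specific deployment.

```python
# Sketch: streaming tokens from a dedicated (on-demand) deployment.
# The "#<deployment>" suffix is a hypothetical placeholder -- see the
# On-demand deployments guide for the exact identifier for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)

stream = client.chat.completions.create(
    model="accounts/fireworks/models/qwen1p5-72b-chat#accounts/<your-account>/deployments/<deployment-id>",
    messages=[{"role": "user", "content": "Write a haiku about dedicated GPUs."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```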