CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder language models of different sizes. It is trained on 3 trillion tokens of code data and uses grouped-query attention (GQA) for efficient inference.
CodeQwen 1.5 7B can be customized with your own data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
On-demand deployments give you dedicated GPUs for CodeQwen 1.5 7B, using Fireworks' reliable, high-performance system with no rate limits.
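As a minimal sketch of how you might query the model once it is deployed, the snippet below uses Fireworks' OpenAI-compatible chat completions endpoint. The model slug and the API key placeholder are assumptions; substitute the exact identifiers shown in your Fireworks console.

```python
# Minimal sketch: querying CodeQwen 1.5 7B through Fireworks' OpenAI-compatible API.
# The model slug below is an assumption -- check your Fireworks dashboard for the
# exact identifier of your serverless or on-demand deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # replace with your Fireworks API key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/codeqwen-1p5-7b",  # assumed model slug
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

The same request works against a fine-tuned LoRA variant by pointing `model` at the identifier of your personalized deployment.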
Provider: Qwen
Context length: 65,536
Availability: Available
Pricing: $0.2