Qwen3.5-27B (Qwen Chat) is a post-trained, chat-optimized large language model released in Hugging Face Transformers format. It is designed for strong general-purpose performance across reasoning, coding, and agentic tasks, and is compatible with popular inference stacks such as Transformers, vLLM, and SGLang. Qwen3.5 emphasizes efficiency and scalability, with broader multilingual coverage and training improvements targeted at practical real-world deployment.
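Since the model ships in Hugging Face Transformers format, a local chat turn follows the standard Transformers pattern: load the tokenizer and model, render the messages with the chat template, and generate. The sketch below assumes a hypothetical repository name (`Qwen/Qwen3.5-27B`); check the official Qwen organization on the Hugging Face Hub for the exact model ID.

```python
def generate_reply(messages, model_id="Qwen/Qwen3.5-27B"):
    """Run one chat turn locally with Hugging Face Transformers.

    model_id is an assumption, not a confirmed repository name --
    verify it on the Hugging Face Hub before use.
    """
    # Imports are local so the sketch can be read without
    # transformers installed; a real script would import at the top.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Render the chat messages with the model's built-in template
    # and get input IDs ready for generation.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


# Chat-style request: a list of role/content messages, the format
# expected by apply_chat_template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about GPUs."},
]

# Calling generate_reply(messages) would download the full 27B
# weights, so it is left to the reader to invoke on suitable hardware.
```

Note that a 27B-parameter model generally requires a multi-GPU or high-memory setup for full-precision inference; `device_map="auto"` (which relies on the `accelerate` package) lets Transformers shard the weights across available devices.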
On-demand deployment: dedicated GPUs for Qwen3.5 27B on Fireworks' reliable, high-performance serving system, with no rate limits. See the on-demand deployment docs for details.