
Qwen3.5 27B
fireworks/qwen3p5-27b

    Qwen3.5-27B (Qwen Chat) is a post-trained, chat-optimized large language model released in Hugging Face Transformers format. It’s designed for strong general-purpose performance across reasoning, coding, and agentic tasks, and is compatible with popular inference stacks like Transformers, vLLM, and SGLang. Qwen3.5 emphasizes improved efficiency and scalability, with broader multilingual coverage and training advances aimed at high-utility real-world deployment.
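As a minimal sketch, a single-turn request to this model through Fireworks' OpenAI-compatible chat completions endpoint might look like the following; the model path is an assumption inferred from the library slug, and the sampling parameters are illustrative, not recommended defaults:

```python
import json

# Fireworks exposes an OpenAI-compatible inference endpoint.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the JSON body for a single-turn chat request."""
    return {
        # Assumed model path based on the page's slug, not a confirmed value.
        "model": "accounts/fireworks/models/qwen3p5-27b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

body = build_request("Summarize the Qwen3.5 model family in one sentence.")
payload = json.dumps(body)  # this string would be POSTed to API_URL
```

The same body works with any OpenAI-compatible client by pointing its base URL at the Fireworks endpoint and supplying a Fireworks API key.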

    Qwen3.5 27B API Features

    Fine-tuning

    Docs

Qwen3.5 27B can be fine-tuned on your own data to improve response quality. Fireworks uses LoRA to train and deploy your customized model efficiently.
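Fine-tuning data is commonly supplied as a JSONL file of chat transcripts, one training example per line. The snippet below sketches writing such a file; the exact schema Fireworks accepts is defined in the docs, and the filename and example content here are hypothetical:

```python
import json
import os
import tempfile

# One hypothetical training example in the common chat-transcript layout:
# each line of the JSONL file holds a "messages" list of role/content turns.
examples = [
    {"messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "LoRA is a parameter-efficient "
         "fine-tuning method that trains small low-rank adapter matrices "
         "instead of updating all model weights."},
    ]},
]

def write_jsonl(path: str, rows: list) -> int:
    """Write one JSON object per line; return the number of lines written."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return len(rows)

dataset_path = os.path.join(tempfile.gettempdir(), "qwen35_sft.jsonl")
write_jsonl(dataset_path, examples)
```

The resulting file is what you would upload as a dataset before launching a LoRA fine-tuning job.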

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Qwen3.5 27B using Fireworks' reliable, high-performance system with no rate limits.

    Metadata

    State
    Unknown
    Created on
    N/A
    Kind
    Unknown
    Provider
    Qwen

    Specification

    Calibrated
    No
    Mixture-of-Experts
    No
    Parameters
    N/A

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Context Length
    262.1k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image input
    Not supported
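Even with a 262.1k-token context window, long-running conversations eventually need trimming. A minimal sketch of budget-aware history truncation follows; the whitespace-based token estimate is a stand-in assumption, since a real client should count tokens with the model's actual tokenizer:

```python
# Assumed numeric limit corresponding to the listed 262.1k-token context.
CONTEXT_LIMIT = 262_144

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3) + 1

def trim_history(messages: list, reserve: int = 4_096) -> list:
    """Keep the most recent messages whose estimated token count fits the
    context window, leaving `reserve` tokens for the model's reply."""
    budget = CONTEXT_LIMIT - reserve
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping whole oldest turns (rather than clipping mid-message) keeps each remaining example well-formed for the chat format.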