

Qwen3 VL 235B A22B Thinking

accounts/fireworks/models/qwen3-vl-235b-a22b-thinking

    Qwen3 VL 235B A22B Thinking is a state-of-the-art vision-language model with 22 billion activated parameters and 235 billion total parameters. It enables enhanced visual perception and reasoning, supporting contexts up to 256K tokens. To ensure sufficient GPU memory capacity, we recommend deploying this model on 8 NVIDIA H200 GPUs.
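Since the model accepts image input, a request can be sketched as a chat completion with mixed text and image content parts. The snippet below builds such a payload for an OpenAI-compatible chat completions endpoint; the image URL is a placeholder, and the exact request shape is an assumption here, not taken from this page.

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat
# completions endpoint. The model ID comes from this page; the
# image URL below is a placeholder, not a real resource.
payload = {
    "model": "accounts/fireworks/models/qwen3-vl-235b-a22b-thinking",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}

print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the inference endpoint with an API key in the `Authorization` header.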

    Qwen3 VL 235B A22B Thinking API Features

    Fine-tuning

    Docs

    Qwen3 VL 235B A22B Thinking can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
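Fine-tuning data for chat models is commonly supplied as JSONL, one conversation per line. Below is a minimal sketch of one such training example; the `messages` field layout is an assumption based on the common chat-style format, not a schema stated on this page.

```python
import json

# Hypothetical training example in chat-style JSONL format
# (field names are an assumption, not taken from this page).
example = {
    "messages": [
        {"role": "user", "content": "What does this chart show?"},
        {"role": "assistant", "content": "A steady rise in monthly revenue."},
    ]
}

# Each training example becomes one JSON object per line in the .jsonl file.
line = json.dumps(example)
print(line)
```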

    On-demand Deployment

    Docs

    On-demand deployments allow you to run Qwen3 VL 235B A22B Thinking on dedicated GPUs with Fireworks' high-performance serving stack, offering high reliability and no rate limits.
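A back-of-envelope check makes the 8×H200 sizing above plausible. Assuming roughly one byte per parameter for the weights (e.g. FP8) and 141 GB of HBM per H200, the arithmetic looks like this; both figures are assumptions for illustration, not specifications from this page.

```python
# Rough memory budget for the recommended 8×H200 deployment.
# Assumptions: ~1 byte/parameter for weights (FP8), 141 GB HBM per H200.
total_params_b = 235                 # billions of parameters
bytes_per_param = 1                  # FP8 weights
weight_gb = total_params_b * bytes_per_param   # ~235 GB of weights

h200_hbm_gb = 141
gpus = 8
total_hbm_gb = h200_hbm_gb * gpus    # 1128 GB across the deployment

# The remainder is headroom for KV cache at long (256K-token) contexts.
headroom_gb = total_hbm_gb - weight_gb
print(weight_gb, total_hbm_gb, headroom_gb)
```

The large headroom is what makes the 256K-token context practical, since KV-cache memory grows with context length and batch size.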

    Metadata

    State
    Ready
    Created on
    9/24/2025
    Kind
    Base model
    Provider
    Qwen

    Specification

    Calibrated
    No
    Mixture-of-Experts
    Yes
    Parameters
    235B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Context Length
    262,144 tokens (256K)
    Function Calling
    Supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image Input
    Supported