Qwen3 VL 235B A22B Thinking is a state-of-the-art vision-language model with 235 billion total parameters, of which 22 billion are activated per token. It delivers enhanced visual perception and reasoning, and supports context lengths of up to 256K tokens. To ensure sufficient GPU memory capacity, we recommend deploying this model on 8 NVIDIA H200 GPUs.
| Option | Description |
| --- | --- |
| Fine-tuning (Docs) | Qwen3 VL 235B A22B Thinking can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments let you run Qwen3 VL 235B A22B Thinking on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits. |
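As a starting point, here is a minimal sketch of querying the model through Fireworks' OpenAI-compatible chat completions endpoint with an image plus a text prompt. The model identifier `accounts/fireworks/models/qwen3-vl-235b-a22b-thinking` is an assumption based on the model name; check the Fireworks model library for the exact ID.

```python
import json
import os
import urllib.request

# Assumed model ID -- verify against the Fireworks model library.
MODEL_ID = "accounts/fireworks/models/qwen3-vl-235b-a22b-thinking"
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_payload(image_url: str, question: str) -> dict:
    """Build an OpenAI-compatible chat payload mixing an image and text."""
    return {
        "model": MODEL_ID,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

payload = build_payload("https://example.com/chart.png",
                        "What trend does this chart show?")

api_key = os.environ.get("FIREWORKS_API_KEY")
if api_key:  # only send the request when an API key is configured
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print(json.dumps(payload, indent=2))
```

The same payload shape works against a fine-tuned LoRA variant or an on-demand deployment; only the `model` field changes to point at your deployed model.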