

Qwen2.5-VL 3B Instruct

fireworks/qwen2p5-vl-3b-instruct

    Qwen2.5-VL is a multimodal large language model series developed by the Qwen team at Alibaba Cloud, available in 3B, 7B, 32B, and 72B sizes.

    Qwen2.5-VL 3B Instruct API Features

    Fine-tuning

    Docs

    Qwen2.5-VL 3B Instruct can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Qwen2.5-VL 3B Instruct using Fireworks' reliable, high-performance system with no rate limits.
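Once deployed, the model is queried through Fireworks' OpenAI-compatible chat completions API. The sketch below builds a minimal request payload; the `accounts/fireworks/models/` prefix and endpoint URL follow the usual Fireworks conventions, and the image URL is a placeholder, so verify both against the deployment docs before use.

```python
import json
from typing import Optional

# Model identifier from this page, in the form the Fireworks API expects.
MODEL_ID = "accounts/fireworks/models/qwen2p5-vl-3b-instruct"
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_chat_request(prompt: str, image_url: Optional[str] = None,
                       max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat completion payload with optional image input."""
    content = [{"type": "text", "text": prompt}]
    if image_url is not None:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_chat_request("Describe this chart.",
                             image_url="https://example.com/chart.png")
print(json.dumps(payload, indent=2))
```

An actual call would POST this payload with an `Authorization: Bearer <API key>` header, for example via `requests.post(API_URL, json=payload, headers=...)`.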

    Qwen2.5-VL 3B Instruct FAQs

    What is Qwen2.5-VL 3B Instruct and who developed it?

    Qwen2.5-VL 3B Instruct is a 4.1 billion parameter instruction-tuned multimodal model developed by the Qwen team at Alibaba Cloud. It is part of the Qwen2.5-VL series and supports image-text understanding, structured vision outputs, and tool-using capabilities for agentic tasks.

    What applications and use cases does Qwen2.5-VL 3B Instruct excel at?

    Qwen2.5-VL 3B Instruct is designed for:

    • Image and document understanding (e.g., invoices, forms, charts)
    • Multimodal chat and agent-style reasoning
    • Video analysis with temporal awareness
    • Structured outputs such as bounding boxes and JSON-based localization
    • Agentic control tasks like UI navigation and object manipulation
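    For the structured-output use case above, the model typically returns detections as a JSON list. Below is a small parsing sketch; the `bbox_2d`/`label` field names follow the grounding format commonly associated with the Qwen2.5-VL series, so confirm them against your model's actual output.

```python
import json
import re

def parse_detections(response_text: str) -> list:
    """Extract a JSON list of detections from model output that may wrap
    the JSON in markdown code fences or surrounding prose."""
    # Grab the outermost [...] span, ignoring any fences or chat filler.
    match = re.search(r"\[.*\]", response_text, re.DOTALL)
    if match is None:
        return []
    try:
        detections = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    # Keep only well-formed entries: a 4-number box plus a label.
    return [d for d in detections
            if isinstance(d, dict)
            and len(d.get("bbox_2d", [])) == 4
            and "label" in d]

reply = """```json
[{"bbox_2d": [10, 20, 110, 220], "label": "invoice total"}]
```"""
print(parse_detections(reply))
# → [{'bbox_2d': [10, 20, 110, 220], 'label': 'invoice total'}]
```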

    What is the maximum context length for Qwen2.5-VL 3B Instruct?

    The maximum context length for Qwen2.5-VL 3B is 128,000 tokens.

    Does Qwen2.5-VL 3B Instruct support quantized formats (4-bit/8-bit)?

    Yes. Over 60 quantized versions of the model are available, including 4-bit and 8-bit variants.
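    As a rough back-of-envelope check on why the low-bit variants matter, weight-only memory scales with bits per parameter (this estimate excludes activations and KV cache; the 4.1B parameter count comes from this page):

```python
def weight_memory_gib(params: float, bits_per_param: int) -> float:
    """Approximate weight-only memory footprint in GiB."""
    return params * bits_per_param / 8 / 2**30

PARAMS = 4.1e9  # parameter count listed for Qwen2.5-VL 3B Instruct

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weight_memory_gib(PARAMS, bits):.2f} GiB")
```

At 4 bits per parameter the weights fit in roughly 2 GiB, versus roughly 8 GiB at 16-bit precision.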

    What are known failure modes of Qwen2.5-VL 3B Instruct?

    The model performs well on benchmarks such as DocVQA (93.9), InfoVQA (77.1), and MathVista (62.3), but slightly underperforms larger Qwen2 variants on MMBench and MMStar. It may also trade off some temporal and spatial localization accuracy when extended-context features such as YaRN are enabled.

    Does Qwen2.5-VL 3B Instruct support streaming responses and function-calling schemas?

    No, streaming responses and function calling are not supported.

    How many parameters does Qwen2.5-VL 3B Instruct have?

    The model has 4.1 billion parameters.

    Is fine-tuning supported for Qwen2.5-VL 3B Instruct?

    Yes. Fireworks supports LoRA fine-tuning for this model.
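    Fine-tuning datasets for Fireworks are typically uploaded as JSONL, one chat-formatted example per line. A minimal sketch of producing and sanity-checking one record follows; the exact accepted schema, particularly for image fields in vision fine-tuning, should be confirmed against the Fireworks fine-tuning docs.

```python
import json

def make_example(user_text: str, assistant_text: str) -> str:
    """Serialize one chat-format training example as a JSONL line."""
    record = {
        "messages": [
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]
    }
    return json.dumps(record)

line = make_example("What is the total on this invoice?", "$1,234.56")
# Round-trip to confirm the line is valid JSON in the expected shape.
parsed = json.loads(line)
assert [m["role"] for m in parsed["messages"]] == ["user", "assistant"]
print(line)
```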

    What rate limits apply on the shared endpoint?

    This model is not offered on a serverless shared endpoint, so shared rate limits do not apply. On-demand deployments are supported with no rate limits.

    What license governs commercial use of Qwen2.5-VL 3B Instruct?

    The model is licensed under the Qianwen License (similar to MIT), which permits commercial use and redistribution.

    Metadata

    State
    Ready
    Created on
    3/31/2025
    Kind
    Base model
    Provider
    Qwen
    Hugging Face
    Qwen2.5-VL-3B-Instruct

    Specification

    Calibrated
    Yes
    Mixture-of-Experts
    No
    Parameters
    4.1B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Supported
    Context Length
    128k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image input
    Supported