

Gemma 3 27B Instruct

fireworks/gemma-3-27b-it


    Gemma 3 27B Instruct API Features

    Fine-tuning

    Gemma 3 27B Instruct can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for Gemma 3 27B Instruct using Fireworks' reliable, high-performance system with no rate limits.
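A minimal sketch of calling the model through Fireworks' OpenAI-compatible chat completions endpoint. The model identifier `accounts/fireworks/models/gemma-3-27b-it` and the `FIREWORKS_API_KEY` environment variable follow Fireworks' usual conventions but are assumptions here; check the docs linked above for the exact values.

```python
# Hedged sketch: query Gemma 3 27B Instruct via Fireworks' chat
# completions API. Model name and auth scheme are assumptions based on
# Fireworks' documented conventions, not taken from this page.
import json
import os
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": "accounts/fireworks/models/gemma-3-27b-it",  # assumed id
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__":
    payload = build_request("Summarize the Gemma 3 family in one sentence.")
    api_key = os.environ.get("FIREWORKS_API_KEY")
    if api_key:
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
    else:
        # No key set: just show the payload that would be sent.
        print(json.dumps(payload, indent=2))
```

The same payload works with any OpenAI-compatible client by pointing its base URL at the Fireworks inference endpoint.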

    Gemma 3 27B Instruct FAQs

    What is Gemma 3 27B Instruct and who developed it?

    Gemma 3 27B Instruct is an instruction-tuned, open-weight model developed by Google DeepMind. It is part of the Gemma 3 family and was released in 2025 as a lightweight, high-performance alternative to Gemini. This model supports text generation and is optimized for multi-turn dialogue, summarization, and reasoning tasks.

    What applications and use cases does Gemma 3 27B Instruct excel at?

    The model is designed for:

    • Chatbots and conversational AI
    • Code generation and reasoning (HumanEval: 48.8)
    • Document and image summarization (text only on Fireworks)
    • Question answering (MMLU: 78.6, GSM8K: 82.6)
    • Multilingual tasks in over 140 languages

    What is the maximum context length for Gemma 3 27B Instruct?

    The model supports a maximum context length of 131.1k tokens.
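A quick sketch for budgeting prompts against that limit. The 4-characters-per-token heuristic is an assumption for rough estimation only, not Gemma's actual tokenizer; use the real tokenizer when exact counts matter.

```python
# Hedged sketch: rough check that a prompt fits the ~131.1k-token
# context window. The chars/4 heuristic is an approximation, not the
# Gemma tokenizer.
MAX_CONTEXT_TOKENS = 131_072  # ~131.1k

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_for_output: int = 1024) -> bool:
    """Leave headroom for the model's response when budgeting input."""
    return estimate_tokens(prompt) + reserved_for_output <= MAX_CONTEXT_TOKENS
```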

    What is the usable context window for Gemma 3 27B Instruct?

    Gemma 3 models perform reliably across long inputs, but no specific usable token window is provided. Performance may vary near the 131k token upper bound.

    What are known failure modes of Gemma 3 27B Instruct?

    Limitations and risks include:

    • Limited spatial and common-sense reasoning
    • Occasional factual errors or outdated knowledge
    • May reflect bias from training data despite filtering
    • English-only evaluation for ethics and safety benchmarks

    Does Gemma 3 27B Instruct support function-calling schemas?

    No, function calling is not supported for this model.

    How many parameters does Gemma 3 27B Instruct have?

    The model has 28.4 billion parameters (not a MoE architecture).

    Is fine-tuning supported for Gemma 3 27B Instruct?

    Yes, Fireworks supports LoRA fine-tuning, serverless LoRA, and full fine-tuning for Gemma 3 27B Instruct.
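A minimal sketch of a chat-format JSONL training file, the common input format for LoRA fine-tuning. The `messages` schema mirrors widely used chat fine-tuning formats; treat the exact field names as assumptions and verify them against the Fireworks fine-tuning docs before uploading.

```python
# Hedged sketch: write a tiny chat-style JSONL dataset for LoRA
# fine-tuning. Field names follow common chat formats and are
# assumptions, not confirmed by this page.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "What is LoRA?"},
            {"role": "assistant",
             "content": "LoRA is a parameter-efficient fine-tuning method."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Name one Gemma 3 model size."},
            {"role": "assistant", "content": "27B."},
        ]
    },
]

# One JSON object per line, one training example per object.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```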

    What rate limits apply on the shared endpoint?

    Gemma 3 27B Instruct is not available on the shared serverless endpoint. On-demand deployments are supported and have no rate limits.

    What license governs commercial use of Gemma 3 27B Instruct?

    Gemma 3 27B Instruct is governed by the Gemma license, which permits commercial use.

    Metadata

    State
    Ready
    Created on
    4/15/2025
    Kind
    Base model
    Provider
    Google
    Hugging Face
    gemma-3-27b-it

    Specification

    Calibrated
    No
    Mixture-of-Experts
    No
    Parameters
    28.4B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Supported
    Context Length
    131.1k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image input
    Not supported