

Llama 3.2 3B Instruct

fireworks/llama-v3p2-3b-instruct

    The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
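    As a rough sketch of querying the model: Fireworks exposes an OpenAI-compatible chat-completions API, and since serverless is not supported for this model, the example below assumes a live on-demand deployment. The endpoint URL and the fully qualified model name are assumptions to verify against your own account and deployment.

```python
import json
import os
import urllib.request

# Assumed endpoint and model identifier -- confirm against your deployment.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/llama-v3p2-3b-instruct"


def build_chat_request(user_message: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_message}],
    }


def send_chat_request(payload: dict) -> dict:
    """POST the payload; requires FIREWORKS_API_KEY and a live deployment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

    The payload shape follows the OpenAI chat-completions convention (a `model` string plus a `messages` list of role/content pairs), so existing OpenAI client code can typically be pointed at the Fireworks base URL unchanged.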

    Llama 3.2 3B Instruct API Features

    Fine-tuning

    Docs

    Llama 3.2 3B Instruct can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
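    LoRA keeps the base weights frozen and trains only small low-rank adapter matrices, which is why it is cheap to train and deploy. A back-of-the-envelope sketch of the savings for a single weight matrix (the 3072 hidden size and rank 8 are illustrative assumptions, not Fireworks' actual configuration):

```python
def lora_trainable_params(d_out: int, d_in: int, rank: int) -> int:
    # LoRA replaces a full d_out x d_in weight update with the product
    # B (d_out x r) @ A (r x d_in), so only (d_out + d_in) * r parameters
    # are trained per adapted matrix.
    return (d_out + d_in) * rank


full_layer = 3072 * 3072  # one square projection at an assumed 3072 hidden size
adapter = lora_trainable_params(3072, 3072, rank=8)
print(adapter, f"{adapter / full_layer:.2%}")  # the adapter is a tiny fraction of the layer
```

    Because only the adapter weights change, many fine-tuned variants can share one copy of the base model at serving time.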

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Llama 3.2 3B Instruct using Fireworks' reliable, high-performance system with no rate limits.

    Llama 3.2 3B Instruct FAQs

    What is Llama 3.2 3B Instruct and who developed it?

    Llama 3.2 3B Instruct is an instruction-tuned, multilingual large language model developed by Meta. It belongs to the Llama 3.2 family of models optimized for assistant-style dialogue, summarization, and retrieval use cases. The 3B variant includes approximately 3.21 billion parameters.

    What applications and use cases does Llama 3.2 3B Instruct excel at?

    This model is optimized for:

    • Multilingual assistant-style dialogue
    • Agentic retrieval and summarization
    • Mobile AI-powered writing tools
    • Prompt rewriting and code generation in supported languages

    What is the maximum context length for Llama 3.2 3B Instruct?

    The maximum context length is 131,072 tokens (131.1k).
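    Prompt tokens and generated tokens share this window, so a request's prompt length plus its `max_tokens` budget must stay within 131,072. A minimal pre-flight check (the helper name is ours, not part of any Fireworks SDK):

```python
MAX_CONTEXT = 131_072  # Llama 3.2 3B Instruct context window, in tokens


def fits_in_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    # Prompt and completion tokens are drawn from the same window.
    return prompt_tokens + max_new_tokens <= MAX_CONTEXT


print(fits_in_context(100_000, 30_000))  # True: 130,000 <= 131,072
print(fits_in_context(120_000, 20_000))  # False: 140,000 > 131,072
```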

    Does Llama 3.2 3B Instruct support quantized formats (4-bit/8-bit)?

    Yes. Quantized variants are available in 4-bit and 8-bit formats.

    What are known failure modes of Llama 3.2 3B Instruct?

    Known risks include:

    • Potential refusals to benign prompts
    • Limited alignment in unsupported languages
    • Risk of biased, inaccurate, or objectionable content

    Meta recommends pairing the model with system-level safeguards like Llama Guard or Prompt Guard. Safety red-teaming covered areas such as CBRNE threats, child safety, and cyberattacks.

    Does Llama 3.2 3B Instruct support streaming responses and function-calling schemas?

    Streaming and function calling are not supported for this model.

    How many parameters does Llama 3.2 3B Instruct have?

    The model has 3.21 billion parameters.

    Is fine-tuning supported for Llama 3.2 3B Instruct?

    Yes. Fireworks supports fine-tuning with LoRA for this model.

    What rate limits apply on the shared endpoint?

    There is no shared (serverless) endpoint for this model, so no shared rate limits apply. On-demand deployments have no rate limits.

    What license governs commercial use of Llama 3.2 3B Instruct?

    Use of the model is governed by the Llama 3.2 Community License, which permits commercial use under specific terms set by Meta.

    Metadata

    State
    Ready
    Created on
    9/18/2024
    Kind
    Base model
    Provider
    Meta
    Hugging Face
    Llama-3.2-3B-Instruct

    Specification

    Calibrated
    No
    Mixture-of-Experts
    No
    Parameters
    3.2B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Supported
    Context Length
    131.1k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Support image input
    Not supported