

Llama 3.1 405B Instruct

fireworks/llama-v3p1-405b-instruct

    The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes. The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many available open-source and closed chat models on common industry benchmarks. The 405B model is the most capable in the Llama 3.1 family. On Fireworks, it is served in FP8, closely matching the reference implementation.

    Llama 3.1 405B Instruct API Features

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Llama 3.1 405B Instruct using Fireworks' reliable, high-performance system with no rate limits.
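    As a concrete starting point, the sketch below assembles a request for Fireworks' OpenAI-compatible chat completions endpoint. The endpoint URL and model identifier follow the naming shown on this page; the API key, prompt, and sampling parameters are placeholders you would replace for a real deployment.

```python
import json

# Minimal sketch of a chat completions request against a Fireworks
# deployment of Llama 3.1 405B Instruct. The endpoint path and model ID
# follow Fireworks' OpenAI-compatible API; the API key is a placeholder.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/llama-v3p1-405b-instruct"

def build_chat_request(user_message, max_tokens=256, temperature=0.6):
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("Summarize the Llama 3.1 family in one sentence.")
print(json.dumps(payload, indent=2))
# To send: POST this payload to API_URL with an
# "Authorization: Bearer <your-api-key>" header.
```

    The same payload shape works with any OpenAI-compatible client library pointed at the Fireworks base URL.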

    Llama 3.1 405B Instruct FAQs

    What is Llama 3.1 405B Instruct and who developed it?

    Llama 3.1 405B Instruct is a multilingual instruction-tuned large language model developed by Meta. It is the largest and most capable model in the Llama 3.1 family, which includes 8B, 70B, and 405B variants.

    What applications and use cases does Llama 3.1 405B Instruct excel at?

    The model is optimized for:

    • Multilingual chat-based applications
    • Tool use and function calling
    • Complex reasoning and math
    • Code generation (HumanEval, MBPP)
    • Multilingual instruction following

    What is the maximum context length for Llama 3.1 405B Instruct?

    The model supports a maximum context length of 131.1k tokens.

    What is the usable context window for Llama 3.1 405B Instruct?

    The usable context window matches the 131.1k-token maximum, though developers should monitor latency and accuracy as prompts approach that limit.
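    One practical way to stay inside the window is a coarse pre-flight budget check. The sketch below assumes 131,072 tokens (131.1k rounded) and a rough four-characters-per-token heuristic, since the model's tokenizer is not exposed here; for exact counts, use the actual Llama 3.1 tokenizer.

```python
# Rough sketch of keeping a prompt within the 131.1k-token context window.
# Both constants are assumptions: 131_072 rounds the published 131.1k
# figure, and the 4-chars-per-token estimate is only a heuristic.
MAX_CONTEXT_TOKENS = 131_072
RESERVED_FOR_OUTPUT = 4_096  # leave room for the completion

def estimate_tokens(text):
    """Crude token estimate: ~4 characters per token, minimum 1."""
    return max(1, len(text) // 4)

def fits_in_context(prompt, reserved=RESERVED_FOR_OUTPUT):
    """True if the estimated prompt size plus output budget fits."""
    return estimate_tokens(prompt) + reserved <= MAX_CONTEXT_TOKENS

print(fits_in_context("Hello, world"))   # small prompt fits
print(fits_in_context("x" * 600_000))    # ~150k estimated tokens: too large
```

    A real implementation would tokenize with the model's tokenizer and truncate or summarize older turns instead of rejecting the prompt outright.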

    What are known failure modes of Llama 3.1 405B Instruct?

    Meta outlines the following:

    • May generate biased or objectionable outputs in edge cases
    • Refusal behavior may be inconsistent depending on tone/context
    • Performance in non-supported languages (beyond the 8 officially supported) is not guaranteed
    • Integration with third-party tools requires additional safeguards

    Does Llama 3.1 405B Instruct support function-calling schemas?

    Function calling is supported, including tool use via Transformers templates.
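    For illustration, the sketch below defines a tool in the OpenAI-style `tools` schema that OpenAI-compatible APIs accept. The `get_weather` function and its parameters are invented for this example; only the overall schema shape is the convention being demonstrated.

```python
import json

# Hedged sketch of an OpenAI-style tool definition for a chat
# completions request. The function name and parameters are hypothetical;
# the nested "type"/"function"/"parameters" structure is the schema shape.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

request = {
    "model": "accounts/fireworks/models/llama-v3p1-405b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [get_weather_tool],
}
print(json.dumps(request["tools"], indent=2))
```

    When the model decides to call the tool, the response carries the function name and JSON arguments for your code to execute and return in a follow-up message.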

    How many parameters does Llama 3.1 405B Instruct have?

    The model has 410.1 billion parameters.

    Is fine-tuning supported for Llama 3.1 405B Instruct?

    Full fine-tuning is not supported, but LoRA fine-tuning is available on Fireworks via on-demand deployment.
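    LoRA fine-tuning jobs typically consume chat-style training data as JSONL, one JSON object per line. The sketch below shows one such record in the common `messages` format; verify the exact schema Fireworks expects against its fine-tuning documentation.

```python
import json

# Sketch of one training record in the chat-style JSONL format commonly
# used for LoRA fine-tuning (one JSON object per line). The field layout
# is the widespread "messages" convention, not a confirmed Fireworks spec.
record = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What sizes does Llama 3.1 come in?"},
        {"role": "assistant", "content": "8B, 70B, and 405B."},
    ]
}

line = json.dumps(record)      # one line of the dataset file
round_trip = json.loads(line)  # confirm the record is valid JSON
print(round_trip["messages"][-1]["content"])
```

    A full dataset is simply many such lines in one file, each ending with the assistant turn you want the adapter to learn.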

    What rate limits apply on the shared endpoint?

    Llama 3.1 405B Instruct is only available via on-demand deployment on Fireworks, which has no rate limits.

    What license governs commercial use of Llama 3.1 405B Instruct?

    The model is released under the Llama 3.1 Community License, which permits commercial use under specific conditions; the full terms are available on GitHub.

    Metadata

    State
    Ready
    Created on
    7/19/2024
    Kind
    Base model
    Provider
    Meta
    Hugging Face
    Llama-3.1-405B-Instruct

    Specification

    Calibrated
    No
    Mixture-of-Experts
    No
    Parameters
    410.1B

    Supported Functionality

    Fine-tuning
    Not supported
    Serverless
    Not supported
    Serverless LoRA
    Supported
    Context Length
    131.1k tokens
    Function Calling
    Supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Support image input
    Not supported