The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes. The Llama 3.1 instruction-tuned, text-only models (8B, 70B, and 405B) are optimized for multilingual dialogue use cases and outperform many available open-source and closed chat models on common industry benchmarks. The 405B model is the most capable in the Llama 3.1 family. This model is served in FP8, closely matching the reference implementation.
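Serving in FP8 means the model's weights are stored in an 8-bit floating-point format (typically E4M3, whose largest finite value is 448) with a per-tensor scale. The sketch below illustrates only the scaling-and-clamping step of that idea in plain Python; it is not Fireworks' implementation, and it omits the rounding to the actual E4M3 value grid that real FP8 quantization performs.

```python
# Illustrative sketch of per-tensor FP8 (E4M3) scaling, not a real quantizer.
# E4M3's largest finite value is 448; a per-tensor scale maps weights into range.
# Real FP8 additionally rounds each value to the nearest representable E4M3 number.

E4M3_MAX = 448.0

def quantize_fp8(weights, max_abs=None):
    """Scale weights so the largest magnitude maps to E4M3_MAX, then clamp.
    Returns (scaled_values, scale); dequantize with value * scale."""
    if max_abs is None:
        max_abs = max(abs(w) for w in weights)
    scale = max_abs / E4M3_MAX
    quantized = [max(-E4M3_MAX, min(E4M3_MAX, w / scale)) for w in weights]
    return quantized, scale

def dequantize_fp8(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.5, -1.25, 3.0, -0.125]
q, s = quantize_fp8(weights)
restored = dequantize_fp8(q, s)
```

Because the sketch skips grid rounding, dequantizing recovers the inputs almost exactly; in true FP8 each value would also incur a small rounding error.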
On-demand deployment: on-demand deployments give you dedicated GPUs for Llama 3.1 405B Instruct using Fireworks' reliable, high-performance system with no rate limits.
Llama 3.1 405B Instruct is a multilingual instruction-tuned large language model developed by Meta. It is the largest and most capable model in the Llama 3.1 family, which includes 8B, 70B, and 405B variants.
The model is optimized for multilingual dialogue use cases.
The model supports a maximum context length of 131,072 tokens (~131.1k). Developers should monitor latency and accuracy across longer prompts.
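When working near the context limit, it helps to budget tokens before sending a request. The sketch below uses a crude 4-characters-per-token heuristic, which is an assumption for illustration, not the model's actual tokenizer; use a real tokenizer for accurate counts.

```python
# Rough guard against exceeding the 131,072-token context window.
# The 4-chars-per-token heuristic is an approximation, not the model's
# actual tokenizer; substitute a real tokenizer in production.

CONTEXT_LIMIT = 131_072

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_new_tokens: int = 1024) -> bool:
    # Reserve room for the completion as well as the prompt.
    return estimate_tokens(prompt) + max_new_tokens <= CONTEXT_LIMIT
```

Reserving `max_new_tokens` up front avoids requests that fit the prompt but leave no room for the model's reply.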
Meta outlines the following:
Function calling is supported, including tool use via Transformers templates.
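Tool use is typically expressed with an OpenAI-style tool schema passed alongside the chat messages. The sketch below builds such a request body as plain data; the `get_weather` function and its parameters are invented for illustration, and the model identifier shown is an assumption about the Fireworks naming scheme.

```python
import json

# Hypothetical tool definition in the OpenAI-style schema commonly used
# for Llama 3.1 tool use; `get_weather` and its parameters are invented
# for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# A chat request would pair the tool definitions with the conversation
# messages; the model id below is assumed, check the Fireworks model page.
request_body = {
    "model": "accounts/fireworks/models/llama-v3p1-405b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
payload = json.dumps(request_body)
```

When the model decides a tool is needed, the response contains a tool call with JSON arguments matching the declared `parameters` schema, which the caller executes and feeds back as a follow-up message.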
The model has approximately 405 billion parameters.
Full fine-tuning is not supported, but LoRA fine-tuning is available on Fireworks via on-demand deployment.
The model is released under the Llama 3.1 Community License, which permits commercial use under specific conditions. The full terms are available in the license file on GitHub.