
Llama 2 70B Chat

accounts/fireworks/models/llama-v2-70b-chat

Tags: LLM, Tunable, Chat

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the 70B fine-tuned model, optimized for dialogue use cases.
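Below is a minimal sketch of querying this model for chat completions. It assumes the OpenAI-compatible endpoint at https://api.fireworks.ai/inference/v1, the `openai` Python package, and an API key in the FIREWORKS_API_KEY environment variable; the model identifier is the one listed above.

```python
# Minimal sketch: chat completion against Llama 2 70B Chat via the
# OpenAI-compatible Fireworks endpoint (assumed base URL and env var).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed Fireworks endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v2-70b-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Llama 2 70B Chat is in one sentence."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```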

Fine-tuning

Llama 2 70B Chat can be fine-tuned on your data to create a model with better response quality. Fireworks uses low-rank adaptation (LoRA) to train a model that can be served efficiently at inference time.

See the Fine-tuning guide for details.
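As a rough illustration, fine-tuning data is typically supplied as a JSONL file with one training example per line. The conversational "messages" schema shown here is an assumption; the Fine-tuning guide is authoritative for the exact format.

```python
# Sketch: writing a conversational fine-tuning dataset as JSONL,
# one record per line (schema assumed, not confirmed by this page).
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support agent."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```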


On-demand deployments

On-demand deployments allow you to run Llama 2 70B Chat on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.

See the On-demand deployments guide for details.
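Requests to a dedicated deployment follow the same chat completions API as above. The deployment-scoped model identifier below is hypothetical; the actual value is provided when you create the deployment (see the On-demand deployments guide).

```python
# Sketch: querying a dedicated (on-demand) deployment through the same
# OpenAI-compatible endpoint; the deployment model ID below is hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed Fireworks endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/my-account/deployedModels/llama-v2-70b-chat-abc123",  # hypothetical ID
    messages=[{"role": "user", "content": "Hello from a dedicated deployment."}],
)
print(response.choices[0].message.content)
```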
