Mistral 7B Instruct v0.2

Mistral 7B Instruct v0.2 is an instruction fine-tuned version of the Mistral 7B v0.2 language model. It introduces key improvements over v0.1, including an expanded context window of 32k tokens (up from 8k), a RoPE theta value of 1e6, and the removal of sliding-window attention. Tuned to produce coherent, contextually rich responses, it is suited to a wide range of natural language processing tasks.
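
For orientation, below is a minimal sketch of querying the model through Fireworks' OpenAI-compatible chat completions endpoint. The base URL and the model slug shown are assumptions; confirm the exact identifier and your API key in the Fireworks console.

```python
# Minimal sketch: calling Mistral 7B Instruct v0.2 via Fireworks'
# OpenAI-compatible API. The model slug below is an assumption;
# verify the exact identifier in the Fireworks model library.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible endpoint
    api_key="<FIREWORKS_API_KEY>",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/mistral-7b-instruct-v0p2",  # assumed slug
    messages=[
        {"role": "user", "content": "Summarize the key changes introduced in Mistral 7B v0.2."}
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```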

Fireworks Features

Fine-tuning

Mistral 7B Instruct v0.2 can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
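
As a rough illustration, the sketch below prepares a small chat-formatted JSONL training file. The "messages" schema is an assumption about the expected upload format; verify the dataset requirements in the Fireworks fine-tuning documentation before uploading.

```python
# Sketch: writing a chat-formatted JSONL dataset for a LoRA fine-tune.
# The "messages" schema is an assumed upload format; confirm the exact
# requirements in the Fireworks fine-tuning docs.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "What is our refund policy?"},
            {"role": "assistant", "content": "Refunds are available within 30 days of purchase."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Use the 'Forgot password' link on the sign-in page."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```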

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mistral 7B Instruct v0.2 using Fireworks' reliable, high-performance system with no rate limits.

Info & Pricing

Provider: Mistral
Model Type: LLM
Context Length: 32,768 tokens
Fine-Tuning: Available
Pricing Per 1M Tokens: $0.20
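
To put the listed rate in concrete terms, here is a small back-of-the-envelope cost estimate at $0.20 per 1M tokens; the token counts used are illustrative placeholders, not measurements.

```python
# Back-of-the-envelope serverless cost estimate at $0.20 per 1M tokens.
# Token counts are illustrative placeholders, not measurements.
PRICE_PER_MILLION_TOKENS = 0.20  # USD, from the pricing table above

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost for a single request."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: 1,500 prompt tokens + 500 completion tokens per request.
per_request = estimate_cost(1_500, 500)
print(f"Per request: ${per_request:.6f}")                   # $0.000400
print(f"Per 10,000 requests: ${per_request * 10_000:.2f}")  # $4.00
```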