Mistral 7B Instruct v0.2 is an instruction fine-tuned version of the Mistral 7B v0.2 language model. This update introduces key improvements over v0.1, including an expanded context window of 32k tokens (up from 8k), a Rope-theta value of 1e6, and the removal of Sliding-Window Attention. Designed to excel in generating coherent and contextually rich responses, it is suitable for a wide range of natural language processing tasks.
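Mistral 7B Instruct v0.2 follows the Mistral Instruct prompt convention, wrapping each user turn in `[INST] ... [/INST]` markers. The helper below is a minimal sketch of that template (exact whitespace and special-token handling can vary by tokenizer version, so treat this as illustrative rather than authoritative):

```python
def format_prompt(messages):
    """Build a Mistral-Instruct-style prompt string.

    `messages` is a list of (role, content) tuples, where role is
    "user" or "assistant". User turns are wrapped in [INST] ... [/INST];
    assistant turns are appended followed by the </s> end-of-sequence token.
    """
    parts = ["<s>"]
    for role, content in messages:
        if role == "user":
            parts.append(f"[INST] {content} [/INST]")
        else:
            parts.append(f"{content}</s>")
    return "".join(parts)


prompt = format_prompt([("user", "What is the capital of France?")])
print(prompt)  # <s>[INST] What is the capital of France? [/INST]
```

In practice, chat-style APIs apply this template for you from a structured messages list; building it manually is mainly useful when calling a raw text-completion endpoint.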
Mistral 7B Instruct v0.2 can be customized with your own data to improve responses. Fireworks uses LoRA (Low-Rank Adaptation) to train and deploy your personalized model efficiently.
On-demand deployments give you dedicated GPUs for Mistral 7B Instruct v0.2 using Fireworks' reliable, high-performance system with no rate limits.
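Deployments of this kind are typically queried through an OpenAI-compatible chat-completions API. The snippet below sketches the request body such a call would carry; the model identifier shown is an assumed example, not a confirmed value, and the endpoint URL and authentication details should be taken from your provider's documentation:

```python
import json

# Sketch of a chat-completions request body for an OpenAI-compatible
# endpoint. The "model" value below is a hypothetical identifier used
# for illustration only.
payload = {
    "model": "accounts/fireworks/models/mistral-7b-instruct-v0p2",  # assumed id
    "messages": [
        {"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

# This body would be POSTed as JSON, with an API key in the
# Authorization header, to the provider's chat/completions route.
print(json.dumps(payload, indent=2))
```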
Provider: Mistral
Context length: 32,768 tokens
Status: Available
Price: $0.2