

Mistral 7B Instruct v0.3

Mistral 7B Instruct v0.3 is an instruction fine-tuned version of the Mistral 7B v0.3 language model. It features an extended vocabulary of 32,768 tokens, supports the v3 tokenizer, and adds function calling capabilities. Optimized for instruction following, it performs well at chat-style response generation, code generation, and interpreting structured prompts. The model is designed to work efficiently with the Mistral inference library, making it straightforward for developers to deploy across a range of natural language processing applications.
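As a minimal sketch, the model can also be loaded locally through the Hugging Face transformers library; the checkpoint name mistralai/Mistral-7B-Instruct-v0.3 and the prompt are illustrative assumptions, and the same weights can be served with the Mistral inference library instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face checkpoint name for this model.
model_id = "mistralai/Mistral-7B-Instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python one-liner that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```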


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mistral 7B Instruct v0.3 using Fireworks' reliable, high-performance system with no rate limits.
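As a sketch of how a deployment is queried, Fireworks exposes an OpenAI-compatible inference endpoint; the model identifier below is an assumption, and for an on-demand deployment you would substitute the model name from your own deployment details.

```python
from openai import OpenAI

# Fireworks' OpenAI-compatible inference endpoint.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # replace with your API key
)

response = client.chat.completions.create(
    # Assumed model identifier; use your deployment's model name for on-demand deployments.
    model="accounts/fireworks/models/mistral-7b-instruct-v3",
    messages=[{"role": "user", "content": "Explain function calling in one paragraph."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```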


Info

Provider: Mistral

Model Type: LLM

Context Length: 32,768 tokens

Pricing Per 1M Tokens: $0.20