Mistral 7B Instruct v0.3

fireworks/mistral-7b-instruct-v3

    Mistral 7B Instruct v0.3 is an instruction fine-tuned version of the Mistral 7B v0.3 language model. It features an extended vocabulary of 32,768 tokens, supports the v3 tokenizer, and includes function calling capabilities. Optimized for instruction following, it performs well at chat, code generation, and interpreting structured prompts. The model is designed to work efficiently with the Mistral inference library, enabling developers to deploy it across a variety of natural language processing applications.
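    The request below is a minimal sketch of querying this model through Fireworks' OpenAI-compatible chat completions endpoint. The full model path ("accounts/fireworks/models/mistral-7b-instruct-v3") is assumed from the identifier above, so confirm it and the endpoint against the Fireworks docs before relying on it.

```python
import os
import requests

# Sketch: OpenAI-compatible chat completion against Fireworks.
# The model path below is assumed from "fireworks/mistral-7b-instruct-v3";
# verify the exact path in the Fireworks console or docs.
URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL = "accounts/fireworks/models/mistral-7b-instruct-v3"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize LoRA fine-tuning in two sentences."}
    ],
    "max_tokens": 256,
}
headers = {
    "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
    "Content-Type": "application/json",
}

resp = requests.post(URL, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```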

    Fireworks Features

    Fine-tuning

    Docs

    Mistral 7B Instruct v0.3 can be customized with your data to improve its responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
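    As a point of reference for what LoRA customization involves, the sketch below attaches low-rank adapters to the Hugging Face checkpoint listed in the metadata using the peft library. It is illustrative only and is not the Fireworks fine-tuning workflow (which is managed through the platform; see the docs above); the rank and target modules are assumptions you would tune for your own data.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative local LoRA setup, not the Fireworks managed service.
# Loading the full 7.2B-parameter model requires a GPU with ample memory.
base = "mistralai/Mistral-7B-Instruct-v0.3"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Only the small low-rank adapter matrices are trained, which is what makes
# LoRA cheap to fine-tune and to deploy alongside the frozen base weights.
config = LoraConfig(
    r=8,                                  # adapter rank (assumed; tune for your data)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in the Mistral architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # a small fraction of the 7.2B base parameters
```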

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Mistral 7B Instruct v0.3 using Fireworks' reliable, high-performance system with no rate limits.
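    Since the specification below lists function calling as supported, the sketch that follows sends an OpenAI-style tools request. The tool definition and model identifier are placeholders; an on-demand deployment may be addressed by its own deployment-specific path shown in the Fireworks console, so check the deployment docs for the exact value.

```python
import json
import os
import requests

# Sketch of a function-calling (tools) request. MODEL is a placeholder: for an
# on-demand deployment, substitute the identifier shown for your deployment.
MODEL = "accounts/fireworks/models/mistral-7b-instruct-v3"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool defined by the caller
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "What's the weather in Paris right now?"}],
    "tools": tools,
    "max_tokens": 256,
}
resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model decided to call the tool, its arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```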

    Metadata

    State: Ready
    Created on: 5/29/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mistral-7B-Instruct-v0.3

    Specification

    Calibrated: No
    Mixture-of-Experts: No
    Parameters: 7.2B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens
    Function Calling: Supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported