

Mistral Nemo Instruct 2407

fireworks/mistral-nemo-instruct-2407

    The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is the instruction-tuned version of Mistral-Nemo-Base-2407, and it has the chat completions API enabled. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of smaller or similar size.
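    Since the chat completions API is enabled for this model, it can be queried through Fireworks' OpenAI-compatible endpoint. The sketch below assumes the openai Python client and the usual accounts/fireworks/models/... model path; both are assumptions here and should be checked against your account and the API docs.

```python
# Minimal sketch: querying Mistral Nemo Instruct 2407 through Fireworks'
# OpenAI-compatible chat completions endpoint. The full model path below
# follows Fireworks' usual "accounts/fireworks/models/..." convention and
# should be verified against the model page before use.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # replace with your own key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/mistral-nemo-instruct-2407",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what Mistral Nemo is in one sentence."},
    ],
    max_tokens=256,
    temperature=0.3,
)

print(response.choices[0].message.content)
```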

    Mistral Nemo Instruct 2407 API Features

    Fine-tuning

    Docs

    Mistral Nemo Instruct 2407 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
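    For illustration, fine-tuning data is typically prepared as JSONL chat records. The exact schema Fireworks expects is covered in the fine-tuning docs linked above, so treat the "messages" layout below as an assumption rather than the confirmed format.

```python
# Rough sketch of a chat-format JSONL fine-tuning dataset.
# ASSUMPTION: the OpenAI-style "messages" layout is the common convention;
# check the Fireworks fine-tuning docs for the exact schema required.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Classify the sentiment: 'Great battery life.'"},
            {"role": "assistant", "content": "positive"},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Classify the sentiment: 'The screen cracked in a week.'"},
            {"role": "assistant", "content": "negative"},
        ]
    },
]

# Write one JSON object per line, the usual JSONL convention for training data.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

    Once a fine-tuning job completes, the resulting LoRA can be served serverlessly (Serverless LoRA is listed as supported for this model below) or on a dedicated deployment, and queried with the same chat completions call by passing your fine-tuned model's own identifier.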

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Mistral Nemo Instruct 2407 using Fireworks' reliable, high-performance system with no rate limits.
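    Requests to a dedicated deployment typically go through the same chat completions interface; only the model identifier changes. The identifier for your deployment comes from the Fireworks console or the deployment docs linked above, so the placeholder in this sketch is not a real value.

```python
# Minimal sketch: reusing the chat completions client against a dedicated
# on-demand deployment. The deployment's model identifier is assigned when
# you create the deployment (see the deployment docs/console); the string
# below is a placeholder, not a real identifier.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

DEPLOYED_MODEL_ID = "<your-deployment-model-identifier>"  # placeholder from your console

response = client.chat.completions.create(
    model=DEPLOYED_MODEL_ID,
    messages=[{"role": "user", "content": "Hello from a dedicated deployment."}],
)
print(response.choices[0].message.content)
```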

    Metadata

    State: Ready
    Created on: 7/22/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mistral-Nemo-Instruct-2407

    Specification

    Calibrated: Yes
    Mixture-of-Experts: No
    Parameters: 12.2B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 128k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image Input: Not supported