Mistral Small 24B Instruct 2501

fireworks/mistral-small-24b-instruct-2501

    Mistral Small 3 (2501) sets a new benchmark in the "small" large language model category (below 70B): with 24B parameters, it achieves state-of-the-art capabilities comparable to much larger models.

    Fireworks Features

    Fine-tuning

    Mistral Small 24B Instruct 2501 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model (see the data-format sketch below).
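    The exact dataset schema isn't specified on this page; as a rough sketch, a chat-format JSONL file (one conversation per line) is a common convention for instruct-model fine-tuning data. The field names below are assumptions, not the documented Fireworks format, so check the fine-tuning docs before uploading.

```python
import json

# Hypothetical example rows: a chat-style JSONL file (one conversation per
# line) is a common convention for instruct-model fine-tuning data. The
# field names ("messages", "role", "content") are assumptions here; check
# the Fireworks fine-tuning docs for the exact schema they expect.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
]

# Write one JSON object per line (JSONL), ready to use as a fine-tuning dataset.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```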

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for Mistral Small 24B Instruct 2501 using Fireworks' reliable, high-performance system with no rate limits.
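    A minimal sketch of querying the model once a deployment is up, assuming Fireworks' OpenAI-compatible chat completions endpoint and the fully qualified model name accounts/fireworks/models/mistral-small-24b-instruct-2501 (the page lists the model as fireworks/mistral-small-24b-instruct-2501); adjust the base URL and model identifier to match your deployment.

```python
import os

from openai import OpenAI

# Assumptions: Fireworks exposes an OpenAI-compatible endpoint at
# api.fireworks.ai/inference/v1 and the model is addressed as
# "accounts/fireworks/models/mistral-small-24b-instruct-2501".
# Adjust both to match your account and deployment.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/mistral-small-24b-instruct-2501",
    messages=[
        {"role": "user", "content": "Explain LoRA fine-tuning in one sentence."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```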

    Metadata

    State: Ready
    Created on: 1/30/2025
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mistral-Small-24B-Instruct-2501

    Specification

    Calibrated: No
    Mixture-of-Experts: No
    Parameters: 23.6B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported