

Mixtral 8x7B v0.1

fireworks/mixtral-8x7b

    Mixtral 8x7B v0.1 is a sparse mixture-of-experts (SMoE) large language model developed by Mistral AI. With 46.7 billion total parameters and 12.9 billion active parameters per token, it outperforms Llama 2 70B and matches GPT-3.5 on many benchmarks while offering efficient inference. The model handles context lengths up to 32k tokens, supports multiple languages including English, French, Italian, German, and Spanish, and excels in code generation tasks. Licensed under Apache 2.0, Mixtral provides a powerful and efficient solution for diverse NLP applications.
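
    The model can be queried over Fireworks' OpenAI-compatible REST API. The snippet below is a minimal sketch, assuming the /completions endpoint and the fully qualified model path accounts/fireworks/models/mixtral-8x7b (the long form of fireworks/mixtral-8x7b above); because this is a base model rather than an instruct variant, it uses plain text completion instead of the chat endpoint. Confirm the exact model path and endpoint in the Fireworks docs.

```python
import os

import requests

# Minimal sketch: text completion against Mixtral 8x7B v0.1 via Fireworks'
# OpenAI-compatible API. The base URL and fully qualified model path are
# assumptions -- confirm both in the Fireworks docs for your account.
API_KEY = os.environ["FIREWORKS_API_KEY"]
BASE_URL = "https://api.fireworks.ai/inference/v1"

response = requests.post(
    f"{BASE_URL}/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "accounts/fireworks/models/mixtral-8x7b",
        "prompt": "def fibonacci(n):",  # base model: raw prompt, no chat template
        "max_tokens": 128,
        "temperature": 0.2,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```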

    Mixtral 8x7B v0.1 API Features

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for Mixtral 8x7B v0.1 using Fireworks' reliable, high-performance system with no rate limits.
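
    Once a dedicated deployment exists (created from the Fireworks console or with the firectl CLI), requests go through the same OpenAI-compatible endpoint. The sketch below is an assumption: it uses the official openai Python client pointed at Fireworks, and the deployment-scoped model string is a placeholder to be replaced with the identifier shown for your deployment.

```python
import os

from openai import OpenAI  # pip install openai

# Minimal sketch: query a dedicated on-demand deployment of Mixtral 8x7B v0.1
# through Fireworks' OpenAI-compatible endpoint. The model string below is a
# placeholder; use the exact deployment identifier from your Fireworks console.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

completion = client.completions.create(
    model="accounts/fireworks/models/mixtral-8x7b#<your-deployment-id>",  # placeholder
    prompt="Translate to French: The weather is nice today.\n",
    max_tokens=64,
    temperature=0.3,
)
print(completion.choices[0].text)
```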

    Metadata

    State: Ready
    Created on: 12/9/2023
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x7B-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 46.7B total (12.9B active per token)

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported