

Mixtral MoE 8x22B Instruct

fireworks/mixtral-8x22b-instruct

    Mixtral MoE 8x22B Instruct v0.1 is the instruction-tuned version of Mixtral MoE 8x22B v0.1 and supports the chat completions API.
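    Since the chat completions API is enabled, the model can be queried with the Fireworks Python client. A minimal sketch follows, assuming the fireworks-ai package is installed, a FIREWORKS_API_KEY environment variable is set, and the full model path is accounts/fireworks/models/mixtral-8x22b-instruct (inferred from the short id above); adapt these to your account.

```python
# Minimal chat completions sketch for Mixtral MoE 8x22B Instruct.
# Assumptions: the `fireworks-ai` package is installed, FIREWORKS_API_KEY
# is set, and the full model path below matches the short id shown above.
import os

from fireworks.client import Fireworks

client = Fireworks(api_key=os.environ["FIREWORKS_API_KEY"])

response = client.chat.completions.create(
    model="accounts/fireworks/models/mixtral-8x22b-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."},
    ],
    max_tokens=256,
    temperature=0.6,
)

print(response.choices[0].message.content)
```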

    Mixtral MoE 8x22B Instruct API Features

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B Instruct using Fireworks' reliable, high-performance system with no rate limits.
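    Below is a sketch of querying a dedicated deployment, assuming the deployment already exists (created, for example, with the firectl CLI or the web console) and assuming it is addressed by appending "#" plus the deployment name to the model id; the account and deployment names are placeholders.

```python
# Sketch of querying an on-demand (dedicated) deployment of this model.
# Assumption: a dedicated deployment is addressed by suffixing the model id
# with "#" and the deployment resource name; both names below are placeholders.
import os

from fireworks.client import Fireworks

client = Fireworks(api_key=os.environ["FIREWORKS_API_KEY"])

response = client.chat.completions.create(
    model=(
        "accounts/fireworks/models/mixtral-8x22b-instruct"
        "#accounts/my-account/deployments/my-deployment"
    ),
    messages=[{"role": "user", "content": "Hello from a dedicated deployment."}],
)

print(response.choices[0].message.content)
```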

    Metadata

    State: Ready
    Created on: 4/17/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x22B-Instruct-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 140.6B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 65.5k tokens
    Function Calling: Supported (see the sketch after this list)
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported