
Mixtral MoE 8x22B Instruct

fireworks/mixtral-8x22b-instruct

    Mixtral MoE 8x22B Instruct v0.1 is the instruction-tuned version of Mixtral MoE 8x22B v0.1 and has the chat completions API enabled.
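
    Because the chat completions API is enabled, the model can be called with a standard OpenAI-style request. Below is a minimal sketch assuming Fireworks' OpenAI-compatible REST endpoint and the serverless model path "accounts/fireworks/models/mixtral-8x22b-instruct"; verify the exact path and parameters against the Fireworks docs.

    ```python
    import os
    import requests

    # Minimal chat completions call against the Fireworks serverless endpoint.
    # Assumptions: the OpenAI-compatible endpoint URL and the model path below;
    # both should be checked against the Fireworks documentation.
    response = requests.post(
        "https://api.fireworks.ai/inference/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
        json={
            "model": "accounts/fireworks/models/mixtral-8x22b-instruct",
            "messages": [
                {"role": "user", "content": "Explain mixture-of-experts in one paragraph."}
            ],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])
    ```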

    Fireworks Features

    Serverless

    Run the model immediately on pre-configured GPUs and pay per token.

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B Instruct using Fireworks' reliable, high-performance system with no rate limits.

    Available Serverless

    Run queries immediately, pay only for usage

    $1.20 input / $1.20 output per 1M tokens
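
    At these rates, per-request cost is simple arithmetic: tokens in each direction divided by one million, times the listed price. A quick sketch (the token counts are illustrative, not measured):

    ```python
    # Cost estimate from the listed serverless rates:
    # $1.20 per 1M input tokens and $1.20 per 1M output tokens.
    INPUT_RATE = 1.20   # USD per 1M input tokens
    OUTPUT_RATE = 1.20  # USD per 1M output tokens

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

    # e.g. a 10,000-token prompt with a 2,000-token completion:
    print(f"${estimate_cost(10_000, 2_000):.4f}")  # -> $0.0144
    ```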

    Metadata

    State: Unknown
    Created on: N/A
    Kind: Unknown
    Provider: Mistral

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: N/A

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Supported
    Serverless LoRA: Not supported
    Context Length: 65.5k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image Input: Not supported