

Mixtral MoE 8x22B

fireworks/mixtral-8x22b

    The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained, generative, sparse Mixture-of-Experts model. It is fluent in English, French, Italian, German, and Spanish, and is strong at mathematics and coding tasks.

    Mixtral MoE 8x22B API Features

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B using Fireworks' reliable, high-performance system with no rate limits.
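    You can query a deployment through Fireworks' OpenAI-compatible inference endpoint. The sketch below is illustrative rather than an official example: it assumes the full model path accounts/fireworks/models/mixtral-8x22b (derived from the model ID shown above) and an API key in the FIREWORKS_API_KEY environment variable, and it uses the plain completions API because this is a base (non-instruct) model.

```python
import os
from openai import OpenAI

# Point the OpenAI client at Fireworks' OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

# Base model, so send a raw prompt to the completions endpoint rather than chat.
# The model path is an assumption based on the ID fireworks/mixtral-8x22b above.
response = client.completions.create(
    model="accounts/fireworks/models/mixtral-8x22b",
    prompt="def fibonacci(n: int) -> int:",
    max_tokens=128,
    temperature=0.2,
)

print(response.choices[0].text)
```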

    Metadata

    State: Ready
    Created on: 4/10/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x22B-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 176B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 65.5k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
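    Because the context window is 65.5k tokens (65,536), long prompts should be checked against that budget before a request is sent. A minimal sketch, assuming access to the tokenizer of the linked checkpoint (mistralai/Mixtral-8x22B-v0.1 on Hugging Face):

```python
from transformers import AutoTokenizer

# Tokenizer repo name is assumed from the "Mixtral-8x22B-v0.1" Hugging Face link above.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-v0.1")

CONTEXT_LENGTH = 65_536   # 65.5k-token context window
MAX_NEW_TOKENS = 512      # budget reserved for the completion

def fits_in_context(prompt: str) -> bool:
    """Return True if the prompt plus the reserved completion budget fits."""
    n_prompt_tokens = len(tokenizer.encode(prompt))
    return n_prompt_tokens + MAX_NEW_TOKENS <= CONTEXT_LENGTH

print(fits_in_context("Translate to French: Hello, world."))
```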