

Mixtral MoE 8x7B Instruct (HF version)

fireworks/mixtral-8x7b-instruct-hf

    Mixtral MoE 8x7B Instruct (HF version) is the original FP16 version of Mixtral MoE 8x7B Instruct; its results should be consistent with the official Hugging Face implementation.
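
    Below is a minimal sketch of querying this model over Fireworks' OpenAI-compatible chat completions API. The endpoint URL, the full "accounts/fireworks/models/mixtral-8x7b-instruct-hf" model string, and the FIREWORKS_API_KEY environment variable are assumptions inferred from the model ID above, not values stated on this page.

    # Minimal sketch (assumptions noted above): send one chat request to
    # Mixtral 8x7B Instruct (HF version) and print the reply.
    import os
    import requests

    API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"  # assumed endpoint
    MODEL_ID = "accounts/fireworks/models/mixtral-8x7b-instruct-hf"     # assumed full model string

    resp = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": MODEL_ID,
            "messages": [
                {"role": "user", "content": "In one sentence, what is a mixture-of-experts model?"}
            ],
            "max_tokens": 128,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])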

    Mixtral MoE 8x7B Instruct (HF version) API Features

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x7B Instruct (HF version) using Fireworks' reliable, high-performance system with no rate limits.
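
    As a rough sketch under stated assumptions, a dedicated deployment can be called through the same OpenAI-compatible API once it is live. The base URL, the "#accounts/.../deployments/..." suffix used to address a specific deployment, and the placeholder account and deployment names below are illustrative assumptions; confirm the exact model string for your deployment in the Fireworks console or docs.

    # Rough sketch: call an on-demand (dedicated) deployment of this model with
    # the openai client pointed at Fireworks. The deployment-scoped model string
    # and the placeholder names are ASSUMPTIONS for illustration only.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible base URL
        api_key=os.environ["FIREWORKS_API_KEY"],
    )

    completion = client.chat.completions.create(
        # Hypothetical deployment-scoped identifier; replace with your own.
        model=(
            "accounts/fireworks/models/mixtral-8x7b-instruct-hf"
            "#accounts/my-account/deployments/my-deployment"
        ),
        messages=[{"role": "user", "content": "Hello from a dedicated deployment."}],
        max_tokens=64,
    )
    print(completion.choices[0].message.content)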

    Metadata

    State: Ready
    Created on: 2/6/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x7B-Instruct-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 46.7B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported