
Mixtral MoE 8x7B Instruct (HF version)

fireworks/mixtral-8x7b-instruct-hf

    Mixtral MoE 8x7B Instruct (HF version) is the original FP16 release of Mixtral MoE 8x7B Instruct; its results should be consistent with the official Hugging Face implementation.
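
    For reference, here is a minimal sketch of querying this model through Fireworks' OpenAI-compatible chat completions endpoint. The fully qualified model path and the FIREWORKS_API_KEY environment variable are illustrative assumptions; check the API docs for the exact values for your account.

```python
import os

import requests

# Minimal sketch: query Mixtral MoE 8x7B Instruct (HF version) via the
# OpenAI-compatible chat completions endpoint. FIREWORKS_API_KEY is a
# placeholder for your own API key.
url = "https://api.fireworks.ai/inference/v1/chat/completions"
payload = {
    "model": "accounts/fireworks/models/mixtral-8x7b-instruct-hf",
    "messages": [
        {"role": "user", "content": "Explain mixture-of-experts in one paragraph."}
    ],
    "max_tokens": 256,
}
headers = {"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"}

response = requests.post(url, json=payload, headers=headers, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```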

    Mixtral MoE 8x7B Instruct (HF version) API Features

    Fine-tuning (Docs)

    Mixtral MoE 8x7B Instruct (HF version) can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
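
    As a sketch of what "customized with your data" involves, the example below writes a small training file, assuming the chat-style JSONL dataset format (one "messages" transcript per line); the file name and example rows are hypothetical, and the upload and job-creation steps are described in the fine-tuning docs.

```python
import json

# Hypothetical training rows: each JSONL line is one chat transcript with
# "messages" (role/content pairs). Replace with your own data.
examples = [
    {"messages": [
        {"role": "user", "content": "What is our refund window?"},
        {"role": "assistant", "content": "Refunds are accepted within 30 days of purchase."},
    ]},
    {"messages": [
        {"role": "user", "content": "Do you ship internationally?"},
        {"role": "assistant", "content": "Yes, we ship to most countries worldwide."},
    ]},
]

# Write one JSON object per line, the JSONL layout expected for dataset uploads.
with open("train.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")
```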

    On-demand Deployment (Docs)

    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x7B Instruct (HF version) using Fireworks' reliable, high-performance system with no rate limits.
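
    Once a deployment is created, requests can target it instead of shared capacity. The sketch below assumes Fireworks' "<model>#<deployment>" addressing convention; the account name and deployment ID are placeholders, so consult the on-demand deployment docs for the exact form.

```python
import os

import requests

# Sketch: route a request to a dedicated on-demand deployment. The
# "<model>#<deployment>" syntax and the IDs below are placeholders/assumptions;
# see the on-demand deployment docs for the exact addressing.
url = "https://api.fireworks.ai/inference/v1/chat/completions"
payload = {
    "model": (
        "accounts/fireworks/models/mixtral-8x7b-instruct-hf"
        "#accounts/my-account/deployments/my-deployment-id"
    ),
    "messages": [{"role": "user", "content": "Hello from a dedicated GPU."}],
    "max_tokens": 64,
}
headers = {"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"}
print(requests.post(url, json=payload, headers=headers, timeout=60).json())
```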

    Metadata

    State: Ready
    Created on: 2/6/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x7B-Instruct-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 46.7B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image Input: Not supported