Mixtral MoE 8x7B Instruct (HF version)

fireworks/mixtral-8x7b-instruct-hf

    Mixtral MoE 8x7B Instruct (HF version) is the original FP16 release of Mixtral MoE 8x7B Instruct; its results should be consistent with the official Hugging Face implementation.
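
    Because this is the FP16 checkpoint mirrored from Hugging Face, outputs can be cross-checked against the reference implementation. The sketch below uses the transformers library and assumes the repository listed under Metadata below lives under the mistralai organization on Hugging Face, and that enough GPU memory is available for the ~46.7B-parameter model in FP16.

```python
# Minimal sketch: run a reference generation with the Hugging Face
# implementation (assumes the mistralai/Mixtral-8x7B-Instruct-v0.1 repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed full repo path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # FP16, matching this model's precision
    device_map="auto",          # shard across available GPUs
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts in one sentence."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```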

    Fireworks Features

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for Mixtral MoE 8x7B Instruct (HF version) using Fireworks' reliable, high-performance system with no rate limits.
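
    Once a deployment is running, it can be queried through Fireworks' OpenAI-compatible chat completions endpoint. This is a minimal sketch, assuming the https://api.fireworks.ai/inference/v1/chat/completions endpoint, an API key in the FIREWORKS_API_KEY environment variable, and a placeholder model identifier derived from the library id above; substitute the identifier reported for your own deployment.

```python
# Minimal sketch: query an on-demand deployment of Mixtral 8x7B Instruct (HF)
# through the OpenAI-compatible REST API.
import os
import requests

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Placeholder model path; point this at your own deployment's identifier.
MODEL = "accounts/fireworks/models/mixtral-8x7b-instruct-hf"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```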

    Metadata

    State: Ready
    Created on: 2/6/2024
    Kind: Base model
    Provider: Mistral
    Hugging Face: Mixtral-8x7B-Instruct-v0.1

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 46.7B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens (see the budgeting sketch below)
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
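
    To stay within the 32.8k-token context window, long chat histories need to be trimmed before each request. The sketch below is an assumption-heavy illustration: it treats the 32.8k figure as 32,768 tokens and uses a rough 4-characters-per-token estimate rather than a real tokenizer.

```python
# Minimal sketch: drop the oldest chat turns until the estimated prompt
# fits the context window, reserving room for the completion.
CONTEXT_LIMIT = 32_768       # assumed exact value behind the "32.8k" figure
RESERVED_FOR_OUTPUT = 1_024  # leave room for the generated response
CHARS_PER_TOKEN = 4          # rough heuristic, not a tokenizer


def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the most recent messages whose estimated size fits the budget."""
    budget = CONTEXT_LIMIT - RESERVED_FOR_OUTPUT
    kept: list[dict] = []
    used = 0
    # Walk from the newest message backwards so recent turns are preserved.
    for message in reversed(messages):
        cost = estimate_tokens(message["content"])
        if used + cost > budget:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))
```

    For exact accounting, replace the character heuristic with the model's own tokenizer so the count matches what the server sees.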