Mixtral MoE 8x7B Instruct (HF version)

Mixtral MoE 8x7B Instruct (HF version) is the original FP16 version of Mixtral MoE 8x7B Instruct; its results should be consistent with the official Hugging Face implementation.

Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x7B Instruct (HF version) using Fireworks' reliable, high-performance system with no rate limits.
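As a rough illustration, once a deployment is live the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. The following is a minimal sketch; the model identifier and the environment variable name used here are assumptions, so confirm the exact model ID on the model page before use.

```python
import os
import requests

# Minimal sketch of a chat completion request against Fireworks'
# OpenAI-compatible API. The model ID and env var name are assumptions.
API_KEY = os.environ["FIREWORKS_API_KEY"]
MODEL_ID = "accounts/fireworks/models/mixtral-8x7b-instruct-hf"

response = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": "Summarize mixture-of-experts models in one sentence."}
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```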

Info

Provider: Mistral
Model Type: LLM
Context Length: 32,768 tokens
Pricing Per 1M Tokens: $0.50
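For a back-of-envelope token cost estimate, the sketch below assumes the $0.50 per 1M token rate applies to prompt and completion tokens alike; on-demand deployments are instead billed by GPU time, so this applies to per-token usage only.

```python
# Rough cost estimate, assuming $0.50 per 1M tokens covers both
# prompt and completion tokens (an assumption for this sketch).
PRICE_PER_MILLION_TOKENS = 0.50  # USD

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: a 1,500-token prompt with a 500-token reply is about $0.001.
print(f"${estimate_cost(1_500, 500):.4f}")
```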