Mixtral MoE 8x7B Instruct (HF version) is the original FP16 version of Mixtral MoE 8x7B Instruct; its results should be consistent with the official Hugging Face implementation.
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | Mixtral MoE 8x7B Instruct (HF version) can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for Mixtral MoE 8x7B Instruct (HF version) using Fireworks' reliable, high-performance system with no rate limits. |
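Whether served on-demand or serverlessly, the model can be queried through Fireworks' OpenAI-compatible chat completions API. Below is a minimal sketch in Python; the model identifier `accounts/fireworks/models/mixtral-8x7b-instruct-hf` and the `FIREWORKS_API_KEY` environment variable are assumptions, so substitute the exact name shown on the model page and your own key.

```python
import os
import requests

# Assumed model identifier; confirm the exact name on the model page.
MODEL = "accounts/fireworks/models/mixtral-8x7b-instruct-hf"
# Assumes your Fireworks API key is exported as FIREWORKS_API_KEY.
API_KEY = os.environ["FIREWORKS_API_KEY"]

response = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Briefly explain mixture-of-experts models."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
# The response follows the OpenAI chat completions schema.
print(response.json()["choices"][0]["message"]["content"])
```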