The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained, generative sparse Mixture-of-Experts model. It is fluent in English, French, Italian, German, and Spanish, and has strong capabilities in mathematics and coding.
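In a sparse Mixture-of-Experts model, each transformer layer's feed-forward block is replaced by several expert networks plus a router that activates only a small subset of them per token (Mixtral 8x22B routes each token to 2 of 8 experts, so only a fraction of the parameters are active per forward pass). The sketch below illustrates the idea; the class name, dimensions, and simplified expert MLP are illustrative, not Mixtral's actual implementation.

```python
# Illustrative sketch of a sparse MoE feed-forward layer with top-2 routing,
# as used in Mixtral (8 experts, 2 active per token). All names and sizes
# are made up for this example; this is not Mixtral's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)                                # score every expert
        weights, idx = torch.topk(logits, self.top_k, dim=-1)  # keep the top-k experts
        weights = F.softmax(weights, dim=-1)                   # normalize over those k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```

Because only the selected experts run for each token, inference cost scales with the active parameter count rather than the full model size, which is what makes 8x22B-scale models practical to serve.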
On-demand Deployment (Docs): On-demand deployments let you run Mixtral MoE 8x22B on dedicated GPUs with Fireworks' high-performance serving stack, offering reliable service and no rate limits.
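As a sketch of what querying such a deployment might look like, the snippet below uses Fireworks' OpenAI-compatible inference endpoint via the `openai` Python client. The model identifier, environment variable, prompt, and sampling parameters are assumptions for illustration; consult the Docs linked above for the exact values for your deployment.

```python
# Minimal sketch of querying Mixtral 8x22B through Fireworks'
# OpenAI-compatible API. The model ID below follows Fireworks' usual
# naming convention but is an assumption; on-demand deployments may
# use a deployment-specific identifier instead.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],           # your Fireworks API key
)

# Mixtral 8x22B v0.1 is a pretrained base model, so the plain
# completions endpoint fits better than chat completions.
response = client.completions.create(
    model="accounts/fireworks/models/mixtral-8x22b",  # assumed model ID
    prompt="def fibonacci(n):",
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].text)
```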