The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model fluent in English, French, Italian, German, and Spanish, with strong capabilities in mathematics and coding.
On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B using Fireworks' reliable, high-performance serving system with no rate limits.
Provider: Mistral
Context length: 65,536 tokens
Pricing: $1.20 per 1M tokens
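Once a deployment is live, the model can be queried through Fireworks' OpenAI-compatible API. The sketch below uses the openai Python SDK against the Fireworks inference endpoint; the exact model identifier and the FIREWORKS_API_KEY environment variable name are assumptions and should be confirmed against your Fireworks account and this model's page.

```python
# Minimal sketch of querying this model via Fireworks' OpenAI-compatible
# completions endpoint. Since Mixtral 8x22B v0.1 is a pretrained base model,
# plain text completion (rather than a chat template) is used here.
import os

from openai import OpenAI

# Assumptions: the model identifier and the FIREWORKS_API_KEY environment
# variable name below; verify both before running.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.completions.create(
    model="accounts/fireworks/models/mixtral-8x22b",  # assumed model ID
    prompt='def is_prime(n: int) -> bool:\n    """Return True if n is prime."""\n',
    max_tokens=256,
    temperature=0.2,
)

print(response.choices[0].text)
```

The same endpoint accepts the usual sampling parameters (temperature, top_p, max_tokens); dedicated on-demand deployments are addressed the same way, with the deployment's own model identifier substituted in.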