
Mixtral MoE 8x22B

The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model fluent in English, French, Italian, German, and Spanish, with a focus on mathematics and coding tasks.

Try Model
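
As a rough sketch, the model can be queried over Fireworks' OpenAI-compatible REST API. The endpoint path, the model slug (accounts/fireworks/models/mixtral-8x22b), and the parameter names below are assumptions based on that convention; check them against the current Fireworks API reference before use.

```python
# Minimal sketch of a completion request to Mixtral MoE 8x22B on Fireworks.
# The endpoint path and model slug are assumptions; confirm them in the
# Fireworks API docs before relying on this.
import os
import requests

API_KEY = os.environ["FIREWORKS_API_KEY"]  # assumed environment variable
URL = "https://api.fireworks.ai/inference/v1/completions"  # assumed OpenAI-compatible endpoint

payload = {
    "model": "accounts/fireworks/models/mixtral-8x22b",  # assumed model slug
    "prompt": "Write a Python function that returns the nth Fibonacci number.\n",
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(
    URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```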

Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B on Fireworks' reliable, high-performance infrastructure with no rate limits.

Learn More

Info

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Pricing: $1.20 per 1M tokens
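
As a quick cost sketch (assuming the flat $1.20 per 1M tokens rate applies to prompt and completion tokens alike), token usage maps to dollars as follows:

```python
# Rough cost estimate at $1.20 per 1M tokens; assumes prompt and completion
# tokens are billed at the same flat rate.
PRICE_PER_MILLION_TOKENS = 1.20

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A request that fills the full 65,536-token context costs about $0.079.
print(f"${estimate_cost(60_000, 5_536):.4f}")
```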