

Mixtral MoE 8x22B

The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model fluent in English, French, Italian, German, and Spanish, with a focus on mathematics and coding tasks.
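As a quick illustration, the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. This is a minimal sketch, not an official snippet: the model identifier and the math prompt below are assumptions, so check the model page for the exact ID before running it.

```python
import os
import requests

# Hypothetical example: query Mixtral MoE 8x22B via Fireworks'
# OpenAI-compatible chat completions endpoint.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/mixtral-8x22b"  # assumed identifier

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "Solve 2x + 6 = 14 and show your steps."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```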


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B on Fireworks' reliable, high-performance serving stack, with no rate limits.


Info

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Pricing: $1.20 per 1M tokens
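As a rough illustration of the pricing and context figures above, the sketch below estimates the cost of one request. The token counts are made-up example values, and it assumes the $1.20 per 1M tokens rate applies uniformly to prompt and completion tokens.

```python
# Rough cost estimate for a single Mixtral MoE 8x22B request on Fireworks.
# Assumes the $1.20 per 1M tokens rate covers prompt and completion alike;
# the token counts below are hypothetical example values.
PRICE_PER_MILLION_TOKENS = 1.20  # USD
CONTEXT_LENGTH = 65_536          # max tokens per request (prompt + completion)

prompt_tokens = 12_000      # hypothetical prompt size
completion_tokens = 1_500   # hypothetical completion size
total_tokens = prompt_tokens + completion_tokens

assert total_tokens <= CONTEXT_LENGTH, "request would exceed the context window"

cost = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"{total_tokens} tokens -> ${cost:.4f}")  # 13500 tokens -> $0.0162
```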