

Mixtral MoE 8x22B

The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model fluent in English, French, Italian, German, and Spanish, with a focus on mathematics and coding tasks.
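As a rough sketch of how querying this model on Fireworks' serverless OpenAI-compatible API might look, the snippet below calls the text completions endpoint (the base v0.1 model is pretrained, not instruction-tuned). The model identifier used here is an assumption; confirm the exact ID on the model page.

```python
import os
import requests

# Hedged sketch: Fireworks exposes an OpenAI-compatible REST API.
# The model ID below is assumed for illustration; verify it before use.
API_KEY = os.environ["FIREWORKS_API_KEY"]
MODEL_ID = "accounts/fireworks/models/mixtral-8x22b"  # assumed identifier

response = requests.post(
    "https://api.fireworks.ai/inference/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL_ID,
        "prompt": "Write a Python function that computes the nth Fibonacci number.",
        "max_tokens": 256,
        "temperature": 0.2,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```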


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B on Fireworks' reliable, high-performance serving infrastructure, with no rate limits.


Info & Pricing

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Pricing: $1.20 per 1M tokens
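For quick budgeting at this rate, a minimal sketch of the cost arithmetic (assuming prompt and completion tokens are billed at the same $1.20 per 1M rate) might look like:

```python
# Rough cost estimate at the listed rate of $1.20 per 1M tokens.
# Assumption: prompt and completion tokens are billed identically.
PRICE_PER_MILLION_TOKENS = 1.20

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: a 2,000-token prompt with a 500-token completion
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0030
```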