

Mixtral MoE 8x22B

The Mixtral MoE 8x22B v0.1 Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model fluent in English, French, Italian, German, and Spanish, with strong mathematics and coding capabilities.


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B on Fireworks' reliable, high-performance serving infrastructure, with no rate limits.

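Because this listing is for the pretrained base model, the example below uses the plain completions endpoint of Fireworks' OpenAI-compatible REST API rather than chat completions. It is a minimal sketch: the model identifier accounts/fireworks/models/mixtral-8x22b, the environment variable name, and the sampling parameters are assumptions, so verify the exact id and settings in the Fireworks console before use.

```python
# Minimal sketch of querying the model through Fireworks' OpenAI-compatible
# completions endpoint. The model id below is an assumption -- check the
# model page or console for the exact identifier.
import os
import requests

API_URL = "https://api.fireworks.ai/inference/v1/completions"

response = requests.post(
    API_URL,
    headers={
        # API key read from an environment variable (variable name is an assumption)
        "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "accounts/fireworks/models/mixtral-8x22b",  # assumed model id
        "prompt": "Write a Python function that computes a factorial.",
        "max_tokens": 256,
        "temperature": 0.6,
    },
    timeout=60,
)
response.raise_for_status()
# Base-model completions return generated text under choices[0]["text"]
print(response.json()["choices"][0]["text"])
```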

Info

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Pricing: $1.20 per 1M tokens
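For quick budgeting, the listed rate works out to a simple per-token multiplication. The sketch below assumes prompt and completion tokens are billed at the same $1.20 per 1M token rate; check the Fireworks pricing page for the exact billing rules.

```python
# Rough cost estimate at the listed rate of $1.20 per 1M tokens, assuming
# prompt and completion tokens are billed identically (a simplification --
# consult the Fireworks pricing page for exact billing rules).
PRICE_PER_MILLION_TOKENS = 1.20

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# e.g. a 4,000-token prompt with a 1,000-token completion costs about $0.006
print(f"${estimate_cost(4_000, 1_000):.4f}")
```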