
Mixtral MoE 8x22B Instruct

Mixtral MoE 8x22B Instruct v0.1 is the instruction-tuned version of Mixtral MoE 8x22B v0.1 and has the chat completions API enabled.
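
The model is served behind Fireworks' OpenAI-compatible chat completions endpoint, so any OpenAI client can call it by switching the base URL. Below is a minimal sketch in Python; the model ID accounts/fireworks/models/mixtral-8x22b-instruct and the FIREWORKS_API_KEY environment variable are assumptions, so check the console for your exact values.

```python
# Minimal sketch: chat completion against Fireworks' OpenAI-compatible
# endpoint. Model ID and env var name are assumptions; verify in console.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/mixtral-8x22b-instruct",
    messages=[
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```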

Fireworks Features

Serverless

Immediately run the model on pre-configured GPUs and pay per token.

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B Instruct using Fireworks' reliable, high-performance system with no rate limits.
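
A dedicated deployment is queried with the same client by pointing the model field at the deployment. The sketch below assumes Fireworks' model#deployment addressing; the account and deployment identifiers are hypothetical placeholders to replace with the values your deployment reports.

```python
# Sketch of querying a dedicated (on-demand) deployment. The
# "#accounts/<account>/deployments/<id>" suffix is an assumed
# convention; substitute the identifier your deployment reports.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model=(
        "accounts/fireworks/models/mixtral-8x22b-instruct"
        "#accounts/your-account/deployments/your-deployment-id"  # placeholders
    ),
    messages=[{"role": "user", "content": "Hello from a dedicated deployment."}],
)
print(response.choices[0].message.content)
```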

Info

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Serverless: Available
Pricing: $1.20 per 1M tokens