
Mixtral MoE 8x22B Instruct

Mixtral MoE 8x22B Instruct v0.1 is the instruction-tuned version of Mixtral MoE 8x22B v0.1 and supports the chat completions API.
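Because the chat completions API is enabled, the model can be queried through Fireworks' OpenAI-compatible endpoint. A minimal sketch using only the standard library; the model ID `accounts/fireworks/models/mixtral-8x22b-instruct` and the `FIREWORKS_API_KEY` environment variable are assumptions, not confirmed by this page:

```python
import json
import os
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumed model ID for Mixtral MoE 8x22B Instruct on Fireworks.
MODEL_ID = "accounts/fireworks/models/mixtral-8x22b-instruct"


def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build a chat-completions POST request for the instruct model."""
    payload = {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # API key is read from the environment; set FIREWORKS_API_KEY first.
            "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_request("Summarize mixture-of-experts models in one sentence.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same payload shape works with any OpenAI-compatible client library by pointing its base URL at the Fireworks inference endpoint.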


Fireworks Features

Serverless

Run the model immediately on pre-configured GPUs and pay per token.


On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral MoE 8x22B Instruct using Fireworks' reliable, high-performance system with no rate limits.


Info & Pricing

Provider: Mistral
Model Type: LLM
Context Length: 65,536 tokens
Serverless: Available
Pricing: $1.20 per 1M tokens
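At $1.20 per 1M tokens, serverless cost scales linearly with usage and is easy to estimate. A small sketch, assuming the rate applies to the total token count (this page does not break out input vs. output pricing):

```python
# Serverless rate for Mixtral MoE 8x22B Instruct, in USD per 1M tokens.
PRICE_PER_M_TOKENS = 1.20


def cost_usd(total_tokens: int) -> float:
    """Estimated serverless cost for a given total token count,
    assuming input and output tokens are billed at the same rate."""
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS


if __name__ == "__main__":
    # A request filling the full 65,536-token context:
    print(f"${cost_usd(65_536):.4f}")  # about eight cents
```

For example, a workload of 100M tokens per month would come to roughly $120 at this rate.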