

Mixtral 8x7B v0.1

Mixtral 8x7B v0.1 is a sparse mixture-of-experts (SMoE) large language model developed by Mistral AI. With 46.7 billion total parameters and 12.9 billion active parameters per token, it outperforms Llama 2 70B and matches GPT-3.5 on many benchmarks while offering efficient inference. The model handles context lengths up to 32k tokens, supports multiple languages including English, French, Italian, German, and Spanish, and excels in code generation tasks. Licensed under Apache 2.0, Mixtral provides a powerful and efficient solution for diverse NLP applications.
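The total/active parameter split above follows from Mixtral's sparse routing: each token is processed by 2 of the 8 expert feed-forward blocks, while attention and embedding parameters are shared across all tokens. A back-of-envelope sketch deriving the per-expert and shared sizes from the two published totals (the derived figures are estimates, not official numbers):

```python
# Published figures from the model card above.
total_params = 46.7e9   # all 8 experts + shared parameters
active_params = 12.9e9  # 2 routed experts + shared parameters

num_experts = 8
experts_per_token = 2   # Mixtral uses top-2 routing

# total  = shared + 8 * expert
# active = shared + 2 * expert
# Subtracting: total - active = 6 * expert
expert_size = (total_params - active_params) / (num_experts - experts_per_token)
shared_size = total_params - num_experts * expert_size

print(f"per-expert params ≈ {expert_size / 1e9:.2f}B")  # ≈ 5.63B
print(f"shared params    ≈ {shared_size / 1e9:.2f}B")   # ≈ 1.63B
```

This is why inference is efficient: per token, the model does roughly the compute of a ~13B dense model while drawing on 46.7B parameters of capacity.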



Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Mixtral 8x7B v0.1 using Fireworks' reliable, high-performance system with no rate limits.


Info & Pricing

Provider

Mistral

Model Type

LLM

Context Length

32,768 tokens

Pricing Per 1M Tokens

$0.50
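At $0.50 per million tokens, estimating serverless usage cost is a one-line calculation. A sketch, assuming input and output tokens are billed at the same rate (check Fireworks' pricing page for any input/output split):

```python
PRICE_PER_1M_TOKENS = 0.50  # USD, from the pricing above

def cost_usd(total_tokens: int) -> float:
    """Estimated cost for a given number of tokens at a flat per-token rate."""
    return total_tokens / 1_000_000 * PRICE_PER_1M_TOKENS

print(cost_usd(1_000_000))  # 0.5 -- one million tokens
print(cost_usd(32_768))     # cost of one full 32k-token context window
```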