MiniMax-M1-80k

We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. The model builds on our previous MiniMax-Text-01 model and contains a total of 456 billion parameters, of which 45.9 billion are activated per token.

Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for MiniMax-M1-80k using Fireworks' reliable, high-performance system with no rate limits.
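
A minimal sketch of querying a deployed MiniMax-M1-80k endpoint through Fireworks' OpenAI-compatible chat completions API. The model identifier below is an assumption for illustration; check the model page for the canonical ID.

```python
# Sketch: query MiniMax-M1-80k via Fireworks' OpenAI-compatible API.
# The model ID below is assumed; confirm it on the model page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # read from an env var in practice
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/minimax-m1-80k",  # assumed identifier
    messages=[
        {"role": "user", "content": "Summarize the MiniMax-M1 architecture."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```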

Info

Model Type: LLM
Pricing Per 1M Tokens: $0.10