We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. The model is built on our previous MiniMax-Text-01 model and contains 456 billion total parameters, of which 45.9 billion are activated per token.
On-demand deployments give you dedicated GPUs for MiniMax-M1-80k using Fireworks' reliable, high-performance system with no rate limits.
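Once a deployment is running, requests can be sent through Fireworks' OpenAI-compatible chat completions API. The sketch below shows one way to do this with the `openai` Python client; the base URL, the `FIREWORKS_API_KEY` environment variable, and the model identifier `accounts/fireworks/models/minimax-m1-80k` are assumptions here, so substitute the exact values from your own deployment.

```python
# Minimal sketch: querying a deployed MiniMax-M1-80k instance via
# Fireworks' OpenAI-compatible endpoint (endpoint URL, env var name,
# and model identifier are assumptions, not confirmed values).
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],           # assumed env var holding your API key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/minimax-m1-80k",  # hypothetical model identifier
    messages=[
        {"role": "user", "content": "Summarize the key idea behind hybrid attention."}
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```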