MiniMax-M1-80k is a powerful open-weight LLM available on Fireworks, built for extreme long-context reasoning with support for up to 1 million tokens. Its hybrid Mixture-of-Experts architecture and lightning attention enable up to 75% lower FLOPs on long generations versus DeepSeek R1, making it well suited to tasks like multi-file code refactoring, legal document analysis, and agent workflows. Trained via large-scale reinforcement learning, it outperforms other open models on benchmarks such as SWE-bench, GPQA, ZebraLogic, TAU-bench, and MRCR. It is live on Fireworks with OpenAI-compatible APIs, function calling, and vLLM-optimized serving, ready for production use.
Provider: MiniMax
Context length: 1,000,000 tokens
Pricing: not available
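Because the model is served through an OpenAI-compatible API, calling it is a matter of POSTing a standard chat-completions payload to the Fireworks endpoint. The sketch below builds such a payload, including a function-calling tool declaration; the model identifier `accounts/fireworks/models/minimax-m1-80k` and the tool name `search_code` are illustrative assumptions, so check the Fireworks model catalog for the exact id.

```python
import json

# Assumed model id -- verify against the Fireworks model catalog.
MODEL = "accounts/fireworks/models/minimax-m1-80k"

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The result can be POSTed to
    https://api.fireworks.ai/inference/v1/chat/completions
    with an "Authorization: Bearer <API_KEY>" header.
    """
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
        # Function calling: declare a tool the model may choose to invoke.
        # "search_code" is a hypothetical tool for illustration only.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "search_code",
                    "description": "Search a repository for a symbol.",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("Where is parse_config defined?")
print(json.dumps(payload, indent=2))
```

The same payload works with the official `openai` Python client by setting `base_url="https://api.fireworks.ai/inference/v1"` and passing a Fireworks API key.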