

MiniMax-M1-80K API

MiniMax-M1-80K is a powerful open-weight LLM available on Fireworks, built for extreme long-context reasoning with a context window of up to 1 million tokens. Its hybrid Mixture-of-Experts architecture and lightning attention mechanism enable up to 75% fewer FLOPs on long generations than DeepSeek R1, making it well suited to tasks like multi-file code refactoring, legal document analysis, and agent workflows. Trained via large-scale reinforcement learning, it outperforms other open models on benchmarks such as SWE-bench, GPQA, ZebraLogic, TAU-bench, and MRCR. It is now live on Fireworks with OpenAI-compatible APIs, function calling, and vLLM-optimized serving, ready for production.
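Because the endpoint is OpenAI-compatible, you can call the model with the standard openai Python client pointed at the Fireworks base URL. The snippet below is a minimal sketch: it assumes a FIREWORKS_API_KEY environment variable and uses accounts/fireworks/models/minimax-m1-80k as the model identifier, which you should confirm on this page before use.

```python
# Minimal sketch: chat completion against MiniMax-M1-80K on Fireworks
# via the OpenAI-compatible API. Model slug and env var are assumptions;
# verify the exact identifier on the model page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # Fireworks OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/minimax-m1-80k",  # assumed slug
    messages=[
        {
            "role": "user",
            "content": "Summarize the key architectural differences between "
                       "MiniMax-M1 and a dense transformer in three bullet points.",
        }
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```

The same client also supports function calling by passing a tools list to chat.completions.create, as with any OpenAI-compatible server.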


Info

Provider: MiniMax
Model Type: LLM
Context Length: 1,000,000 tokens
Pricing Per 1M Tokens: Not available