

MiniMax-M1-80k

fireworks/minimax-m1-80k

    We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. It is built on our previous MiniMax-Text-01 model and contains a total of 456 billion parameters, of which 45.9 billion are activated per token.
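    To put the activation numbers in context, the short sketch below (illustrative arithmetic only; both figures come from the description above) computes the share of the 456 billion total parameters that any single token actually uses.

        # Illustrative arithmetic only: both figures are taken from the model
        # description above, not from the runtime metadata on this page.
        TOTAL_PARAMS_B = 456.0    # total parameters, in billions
        ACTIVE_PARAMS_B = 45.9    # parameters activated per token, in billions

        active_fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
        print(f"Parameters active per token: {active_fraction:.1%} of the total")
        # Roughly 10% of the weights participate in each token's forward pass,
        # which is the efficiency the MoE routing is designed to provide.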

    Fireworks Features

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for MiniMax-M1-80k using Fireworks' reliable, high-performance system with no rate limits.
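    Once a dedicated deployment is live, the model can be queried like any other Fireworks model. The sketch below is a minimal, unofficial example, assuming Fireworks' OpenAI-compatible chat completions endpoint and a model path derived from the identifier shown above; the exact path, authentication details, and any deployment-specific model name are covered in the Fireworks docs linked above.

        # Minimal sketch, not an official example. Assumes an active on-demand
        # deployment, Fireworks' OpenAI-compatible chat completions endpoint, and
        # FIREWORKS_API_KEY set in the environment.
        import os
        import requests

        resp = requests.post(
            "https://api.fireworks.ai/inference/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
            json={
                # Model path is inferred from the identifier on this page; an
                # on-demand deployment may expose a deployment-specific name instead.
                "model": "accounts/fireworks/models/minimax-m1-80k",
                "messages": [
                    {"role": "user", "content": "Summarize the MiniMax-M1 architecture."}
                ],
                "max_tokens": 256,
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])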

    Metadata

    State: Ready
    Created on: 6/19/2025
    Kind: Base model
    Provider: MiniMax
    Hugging Face: main

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 456B total (45.9B activated per token)

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: N/A
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported