
MiniMax M2.7
fireworks/minimax-m2p7

    A Mixture-of-Experts language model. M2.7 can build complex agent harnesses and complete elaborate productivity tasks, leveraging Agent Teams, complex Skills, and dynamic tool search.

    MiniMax M2.7 API Features

    Fine-tuning

    Docs

    MiniMax M2.7 can be customized with your data to improve responses. Fireworks uses LoRA (low-rank adaptation) to train and deploy your personalized model efficiently.
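Chat fine-tuning data is conventionally supplied as JSONL, one conversation per line with a `messages` array. The schema below follows that common convention as a sketch; consult the linked fine-tuning docs for the exact format Fireworks accepts.

```python
import json

# Illustrative chat-style fine-tuning examples. The "messages" schema is the
# common convention for chat fine-tuning datasets; the contents are assumptions.
EXAMPLES = [
    {
        "messages": [
            {"role": "user", "content": "What is LoRA?"},
            {"role": "assistant", "content": "LoRA is a parameter-efficient fine-tuning method."},
        ]
    },
]

def to_jsonl(examples: list) -> str:
    """Serialize examples as one JSON object per line (JSONL)."""
    return "\n".join(json.dumps(e) for e in examples)

# Write the dataset file you would upload for a fine-tuning job.
with open("train.jsonl", "w") as f:
    f.write(to_jsonl(EXAMPLES))
```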

    Serverless

    Docs

    Run the model immediately on pre-configured GPUs and pay per token
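A serverless query can be sketched as a plain HTTP call. Fireworks exposes an OpenAI-compatible chat-completions endpoint; the exact model path below is an assumption, so check the model page for the canonical identifier.

```python
import json
import os
import urllib.request

# Assumed endpoint and model path, following Fireworks' OpenAI-compatible
# "accounts/<provider>/models/<model>" convention.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL = "accounts/fireworks/models/minimax-m2p7"  # assumed path

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completions payload for a serverless query."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def run(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("FIREWORKS_API_KEY"):
    print(run("Summarize the benefits of serverless inference."))
```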

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for MiniMax M2.7 using Fireworks' reliable, high-performance system with no rate limits.

    Available Serverless

    Run queries immediately, pay only for usage

    $0.30 input / $0.06 cached input / $1.20 output
    Per 1M tokens
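The per-token pricing above makes cost estimation simple arithmetic: multiply each token count by its per-1M rate and sum. A minimal sketch, using the listed rates as illustrative constants:

```python
# Serverless rates in USD per 1M tokens, mirroring the pricing listed above.
RATES = {
    "input": 0.30,
    "cached_input": 0.06,
    "output": 1.20,
}

def estimate_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (
        input_tokens * RATES["input"]
        + cached_input_tokens * RATES["cached_input"]
        + output_tokens * RATES["output"]
    ) / 1_000_000

# Example: 10k fresh input + 50k cached input + 2k output tokens
# = (10_000*0.30 + 50_000*0.06 + 2_000*1.20) / 1e6 = $0.0084
```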

    Metadata

    State
    Ready
    Created on
    4/12/2026
    Kind
    Base model
    Provider
    MiniMax

    Specification

    Calibrated
    No
    Mixture-of-Experts
    Yes
    Parameters
    228B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Supported
    Context Length
    204.8k tokens
    Function Calling
    Supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image input
    Supported
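Since function calling is supported, a request can carry tool definitions for the model to invoke. The sketch below uses the common OpenAI-style `tools` schema; the weather function and its parameters are illustrative assumptions, not part of the Fireworks API itself.

```python
# An OpenAI-style tool definition. The get_weather function, its description,
# and its parameters are hypothetical, for illustration only.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def with_tools(payload: dict, tools: list) -> dict:
    """Return a copy of a chat-completions payload with tool definitions attached."""
    return {**payload, "tools": tools}
```

The model can then respond with a `tool_calls` entry naming `get_weather` and its arguments, which your harness executes before returning the result in a follow-up message.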