

NVIDIA Nemotron 3 Super 120B A12B FP8

accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-fp8

    Nemotron-3-Super-120B-A12B-FP8 is a large language model trained by NVIDIA for agentic, reasoning, and conversational tasks. It uses a hybrid Latent MoE architecture with interleaved Mamba-2 and MoE layers, plus Multi-Token Prediction (MTP) for faster generation. The model has 120B total parameters, with 12B active per token. It supports English, French, German, Italian, Japanese, Spanish, and Chinese.

    NVIDIA Nemotron 3 Super 120B A12B FP8 API Features

    On-demand Deployment


    On-demand deployments let you run NVIDIA Nemotron 3 Super 120B A12B FP8 on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
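    Deployments are reachable through Fireworks' OpenAI-compatible chat completions API. Below is a minimal sketch using the openai Python client; the base URL, the model ID from this page, and the FIREWORKS_API_KEY environment variable follow standard Fireworks conventions, but confirm the endpoint for your own deployment.

```python
# Minimal sketch: querying the model through Fireworks' OpenAI-compatible
# chat completions endpoint. Assumes the `openai` package is installed and
# a valid key is exported as FIREWORKS_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-fp8",
    messages=[
        {"role": "user", "content": "Summarize the tradeoffs of FP8 inference."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```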

    Metadata

    State: Ready
    Created on: 3/11/2026
    Kind: Base model
    Provider: NVIDIA

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 120B total (12B active)

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Context Length: 262k tokens
    Function Calling: Supported (see the sketch after this list)
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
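    Since function calling is supported, tool use goes through the same OpenAI-compatible API. The sketch below reuses the client from the earlier example; the get_weather tool is hypothetical, included only to illustrate the request and response shape.

```python
# Minimal function-calling sketch, reusing `client` from the previous example.
# The `get_weather` tool is hypothetical, for illustration only; the schema
# follows the OpenAI-compatible tools format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-3-super-120b-a12b-fp8",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```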