
Nemotron-3-Super-120B-A12B-FP8 is a large language model trained by NVIDIA for agentic, reasoning, and conversational tasks. It uses a hybrid Latent MoE architecture that interleaves Mamba-2 and MoE layers, plus Multi-Token Prediction (MTP) for faster generation. The model has 120B total parameters, of which 12B are active per token, and supports English, French, German, Italian, Japanese, Spanish, and Chinese.
On-demand Deployment: On-demand deployments allow you to use NVIDIA Nemotron 3 Super 120B A12B FP8 on dedicated GPUs with Fireworks' high-performance serving stack, offering high reliability and no rate limits.
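
As a minimal sketch of querying a deployed model, the snippet below uses Fireworks' OpenAI-compatible chat completions endpoint via the `openai` Python client. The model slug (`accounts/fireworks/models/nemotron-3-super-120b-a12b-fp8`) and the `FIREWORKS_API_KEY` environment variable are assumptions for illustration; check your deployment page for the exact identifier.

```python
import os

from openai import OpenAI

# Fireworks exposes an OpenAI-compatible API; point the client at its base URL.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],  # assumed env var holding your key
)

# Hypothetical model slug -- confirm the exact identifier on your deployment page.
response = client.chat.completions.create(
    model="accounts/fireworks/models/nemotron-3-super-120b-a12b-fp8",
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of MoE architectures."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```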