

NVIDIA Nemotron Nano 12B v2

fireworks/nvidia-nemotron-nano-12b-v2

    NVIDIA-Nemotron-Nano-12B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. To disable the reasoning trace, include /no_think in your system prompt.
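As a minimal sketch of the /no_think toggle, the Python snippet below sends a chat completion through Fireworks' OpenAI-compatible endpoint. The full model path accounts/fireworks/models/nvidia-nemotron-nano-12b-v2 is inferred from the short id above and should be treated as an assumption.

```python
from openai import OpenAI

# Sketch only: the full model path below is assumed from the short id
# fireworks/nvidia-nemotron-nano-12b-v2 shown on this page.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-nano-12b-v2",
    messages=[
        # /no_think in the system prompt disables the reasoning trace.
        {"role": "system", "content": "You are a concise assistant. /no_think"},
        {"role": "user", "content": "Explain what a unified reasoning model is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```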

    Fireworks Features

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for NVIDIA Nemotron Nano 12B v2 using Fireworks' reliable, high-performance system with no rate limits.

    Metadata

    State: Ready
    Created on: 10/15/2025
    Kind: Base model
    Provider: NVIDIA
    Hugging Face: NVIDIA-Nemotron-Nano-12B-v2

    Specification

    Calibrated: No
    Mixture-of-Experts: No
    Parameters: 12B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: 131.1k tokens
    Function Calling: Supported (see the sketch after this list)
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
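
Since function calling is listed as supported, a hedged sketch of tool use through the same OpenAI-compatible interface follows. The get_weather tool, the model path, and the exact response shape are illustrative assumptions rather than details confirmed by this page.

```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)

# get_weather is a hypothetical tool used only to illustrate the request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="accounts/fireworks/models/nvidia-nemotron-nano-12b-v2",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)

# When the model decides to call a tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```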