
Model Library
/DeepSeek/DeepSeek-V4-Pro
accounts/fireworks/models/deepseek-v4-pro

    DeepSeek-V4-Pro is a flagship open-source Mixture-of-Experts model designed for frontier reasoning, advanced coding, and long-context intelligence at scale (up to 1M tokens). It introduces a hybrid attention architecture that dramatically improves long-context efficiency while reducing KV-cache and compute overhead, along with stability and training enhancements for deep multi-step reasoning. It is a top-tier open-source system for complex agentic workflows, high-precision reasoning, and demanding production workloads.

    DeepSeek-V4-Pro API Features

    Serverless

    Docs

    DeepSeek-V4-Pro is available via Fireworks' serverless API, where you pay per token. You can call the Fireworks API in several ways, including Fireworks' Python client, the REST API, and OpenAI's Python client.
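As a rough illustration of the REST path above, here is a minimal sketch that builds a chat-completions request using only the Python standard library. The endpoint URL and the `FIREWORKS_API_KEY` environment-variable name are assumptions based on Fireworks' OpenAI-compatible API; check the docs for your account's exact values.

```python
import json
import os
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"  # assumed endpoint
MODEL = "accounts/fireworks/models/deepseek-v4-pro"


def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build a chat-completions request without sending it."""
    body = json.dumps({
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Assumed env-var name; set it to your Fireworks API key.
            "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
        },
    )


req = build_request("Summarize mixture-of-experts routing in one sentence.")
# To actually send it (needs a valid API key):
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["choices"][0]["message"]["content"])
print(json.loads(req.data)["model"])  # → accounts/fireworks/models/deepseek-v4-pro
```

The same request shape works through OpenAI's Python client by pointing its `base_url` at the Fireworks endpoint and passing the model string above.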

    On-demand Deployment

    Docs

    On-demand deployments let you run DeepSeek-V4-Pro on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.

    Available Serverless

    Run queries immediately, pay only for usage

    $1.74 input / $0.14 cached input / $3.48 output
    Per 1M tokens
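Serverless cost scales linearly with token counts at the three rates above. A small worked example (the token counts are hypothetical):

```python
# USD per 1M tokens, from the serverless pricing above.
INPUT_RATE, CACHED_RATE, OUTPUT_RATE = 1.74, 0.14, 3.48


def request_cost(input_toks: int, cached_toks: int, output_toks: int) -> float:
    """Return the cost in USD of one request at the per-1M-token rates."""
    return (input_toks * INPUT_RATE
            + cached_toks * CACHED_RATE
            + output_toks * OUTPUT_RATE) / 1_000_000


# e.g. 200k fresh input, 800k cached input, 50k output:
# 0.2*1.74 + 0.8*0.14 + 0.05*3.48 = 0.348 + 0.112 + 0.174 = 0.634 USD
print(round(request_cost(200_000, 800_000, 50_000), 3))  # → 0.634
```

Note how cached input is roughly 12x cheaper than fresh input, so prompt-caching long shared prefixes dominates the savings for repeated long-context calls.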

    Metadata

    State
    Ready
    Created on
    4/24/2026
    Kind
    Base model
    Provider
    DeepSeek

    Specification

    Calibrated
    No
    Mixture-of-Experts
    Yes
    Parameters
    1.6T

    Supported Functionality

    Fine-tuning
    Not supported
    Serverless
    Supported
    Context Length
    1040k tokens
    Function Calling
    Supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Support image input
    Not supported
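Since function calling is supported, a request can include an OpenAI-style `tools` array describing callable functions. The sketch below builds such a request body; the `get_weather` tool is purely illustrative, not part of any real API.

```python
MODEL = "accounts/fireworks/models/deepseek-v4-pro"


def function_call_body(question: str) -> dict:
    """Build a chat-completions body with one hypothetical tool attached."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up the current weather for a city.",
                "parameters": {  # JSON Schema for the tool's arguments
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }


body = function_call_body("What's the weather in Paris?")
```

When the model decides to call the tool, the response carries the function name and JSON arguments for your code to execute, after which you append the result as a `tool` message and call the API again.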