
Model Library
DeepSeek V3.1
fireworks/deepseek-v3p1

    DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

    Fireworks Features

    Fine-tuning

    Docs

    DeepSeek V3.1 can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
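Fine-tuning services of this kind typically ingest chat-formatted JSONL training data. A minimal sketch of preparing such a file, assuming the common `messages` schema (the exact field names should be confirmed against the linked fine-tuning docs):

```python
import json

def to_jsonl_record(user_msg: str, assistant_msg: str) -> str:
    """Serialize one training example as a chat-style JSONL line
    (the "messages" schema is an assumption; verify against the docs)."""
    record = {
        "messages": [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
    }
    return json.dumps(record)

# Write a tiny dataset file, one JSON object per line.
examples = [
    ("What is the capital of France?", "Paris."),
    ("What is 2 + 2?", "4."),
]
with open("train.jsonl", "w") as f:
    for user, assistant in examples:
        f.write(to_jsonl_record(user, assistant) + "\n")
```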

    Serverless

    Docs

    Run the model immediately on pre-configured GPUs and pay per token.
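A minimal sketch of a serverless chat-completions request, assuming an OpenAI-compatible endpoint and the `accounts/fireworks/models/deepseek-v3p1` model ID (both should be confirmed in the docs). The request body is only constructed here, not sent:

```python
import json

# Assumed endpoint and model ID; confirm both against the Fireworks docs.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/deepseek-v3p1"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Construct the JSON body for an OpenAI-style chat completion."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Summarize the DeepSeek-V3.1 context extension.")
# To send: requests.post(API_URL, json=body,
#     headers={"Authorization": f"Bearer {api_key}"})
print(json.dumps(body, indent=2))
```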

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for DeepSeek V3.1 using Fireworks' reliable, high-performance system with no rate limits.

    Available Serverless

    Run queries immediately, pay only for usage

    $0.56 / $1.68
    Per 1M Tokens (input/output)
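At the listed rates, serverless cost scales linearly with token counts; a quick sketch of the arithmetic:

```python
# Listed serverless rates, in dollars per 1M tokens.
INPUT_RATE = 0.56
OUTPUT_RATE = 1.68

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 200k-token prompt producing a 50k-token response.
cost = estimate_cost(200_000, 50_000)
print(f"${cost:.3f}")  # 0.2 * 0.56 + 0.05 * 1.68 = $0.196
```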


    Metadata

    State: Ready
    Created on: 8/21/2025
    Kind: Base model
    Provider: DeepSeek
    Hugging Face: DeepSeek-V3.1

    Specification

    Calibrated: Yes
    Mixture-of-Experts: Yes
    Parameters: 671B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Supported
    Serverless LoRA: Not supported
    Context Length: 163.8k tokens
    Function Calling: Supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
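Since function calling is supported, a request can carry an OpenAI-style `tools` array; a minimal sketch, where the tool name and schema are invented for illustration and the model ID is an assumption:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # invented example tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

body = {
    "model": "accounts/fireworks/models/deepseek-v3p1",  # assumed model ID
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
print(json.dumps(body)[:80])
```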