

DeepSeek V3 03-24

fireworks/deepseek-v3-0324

    A strong Mixture-of-Experts (MoE) language model from DeepSeek with 671B total parameters, of which 37B are activated for each token. This is the updated 03-24 checkpoint. Note that fine-tuning for this model is only available upon request; contact Fireworks at https://fireworks.ai/company/contact-us.
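
    The examples on this page use Fireworks' OpenAI-compatible API. Below is a minimal sketch of a chat completion against this model; the full model path "accounts/fireworks/models/deepseek-v3-0324" is inferred from the model ID above and Fireworks' usual naming conventions, so verify it against the docs before relying on it.

```python
# Minimal sketch: query DeepSeek V3 03-24 through Fireworks'
# OpenAI-compatible chat completions endpoint.
# Assumption: the full model path below is derived from the page's
# model ID (fireworks/deepseek-v3-0324); confirm it in the docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # Fireworks OpenAI-compatible API
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3-0324",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a Mixture-of-Experts model is."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```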

    DeepSeek V3 03-24 API Features

    Fine-tuning

    Docs

    DeepSeek V3 03-24 can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
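
    Before starting a fine-tuning job, training data is typically uploaded as a JSONL file of chat transcripts. The sketch below assumes the common OpenAI-style "messages" schema; confirm the exact dataset format and the upload/job-creation steps in the fine-tuning docs linked above, and recall that fine-tuning this particular model is only available on request.

```python
# Minimal sketch: prepare a fine-tuning dataset as JSONL.
# Assumption: each line holds one chat example in the OpenAI-style
# "messages" layout; the support-agent content here is illustrative only.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for Acme."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
    # ...more examples, one JSON object per line
]

# Write one JSON object per line, as JSONL requires.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```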

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for DeepSeek V3 03-24 using Fireworks' reliable, high-performance system with no rate limits.
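
    Once a dedicated deployment exists, requests are routed to it rather than to shared capacity. The sketch below assumes Fireworks' "<model>#<deployment>" addressing convention; the account and deployment names are hypothetical placeholders, so take the exact identifier for your deployment from the console or the deployment docs linked above.

```python
# Minimal sketch: send traffic to a dedicated on-demand deployment.
# Assumptions: the "<model>#<deployment>" addressing shown here and the
# placeholder account/deployment names are based on Fireworks'
# conventions, not this page; substitute your real identifiers.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    # Hypothetical deployment reference: base model + your deployment ID.
    model="accounts/fireworks/models/deepseek-v3-0324#accounts/my-account/deployments/my-deployment",
    messages=[{"role": "user", "content": "Hello from a dedicated GPU deployment."}],
)
print(response.choices[0].message.content)
```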

    Metadata

    State: Ready
    Created on: 3/24/2025
    Kind: Base model
    Provider: DeepSeek
    Hugging Face: DeepSeek-V3-0324

    Specification

    Calibrated: Yes
    Mixture-of-Experts: Yes
    Parameters: 671B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Context Length: 163.8k tokens
    Function Calling: Supported (see the example below)
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported
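
    Since function calling is supported, tool definitions can be passed through the standard OpenAI-compatible "tools" parameter. The sketch below defines a hypothetical get_weather tool; the tool itself is illustrative and does not come from this page.

```python
# Minimal sketch: function calling with DeepSeek V3 03-24 on Fireworks
# via the OpenAI-compatible "tools" parameter.
# Assumption: the get_weather tool is hypothetical; the page only states
# that function calling is supported.
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3-0324",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# tool_calls is populated only if the model chose to call a tool.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```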