
DeepSeek Coder V2 Instruct

fireworks/deepseek-coder-v2-instruct

    DeepSeek Coder V2 Instruct is a 236-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 21 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it offers stronger coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K tokens while maintaining strong general language performance.
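
    For a quick trial, the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. The sketch below assumes the public base URL https://api.fireworks.ai/inference/v1 and the full resource path accounts/fireworks/models/deepseek-coder-v2-instruct; check the model page for the exact identifier.

```python
import os
from openai import OpenAI

# Fireworks exposes an OpenAI-compatible API; the base URL and model path
# below are assumptions taken from the public docs, not from this page.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-coder-v2-instruct",
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response.choices[0].message.content)
```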

    Fireworks Features

    Fine-tuning

    DeepSeek Coder V2 Instruct can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
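
    As a minimal sketch of what a fine-tuning dataset might look like, the snippet below writes chat-style examples to a JSONL file. The one-object-per-line "messages" schema and the file name are assumptions; the fine-tuning docs define the exact format Fireworks expects.

```python
import json

# Hypothetical chat-format examples; the "messages" JSONL schema is an
# assumption about the expected fine-tuning dataset layout.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
            {"role": "assistant", "content": "result = [f(x) for x in items if x is not None]"},
        ]
    },
]

# One JSON object per line, as is conventional for JSONL training data.
with open("finetune_dataset.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```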

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Instruct using Fireworks' reliable, high-performance system with no rate limits.
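
    Because a dedicated deployment is not rate-limited, requests can be issued against it concurrently. The sketch below reuses the OpenAI-compatible client; the deployment identifier is a placeholder and its exact form is an assumption, so copy the real value from your deployment's details.

```python
import os
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

# Placeholder: substitute the identifier shown for your own on-demand
# deployment of DeepSeek Coder V2 Instruct.
MODEL = "accounts/<your-account>/deployedModels/<deployment-id>"

def complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return response.choices[0].message.content

# Fan out several prompts in parallel against the dedicated GPUs.
prompts = [f"Summarize the purpose of module_{i}.py in one sentence." for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    for answer in pool.map(complete, prompts):
        print(answer)
```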

    Metadata

    State: Ready
    Created on: 7/11/2024
    Kind: Base model
    Provider: N/A
    Hugging Face: deepseek-coder-v2-instruct

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 236B (21B active)

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: 131.1k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported