

DeepSeek Coder V2 Lite Instruct

Ready
fireworks/deepseek-coder-v2-lite-instruct

    DeepSeek Coder V2 Lite Instruct is a 16-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 2.4 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Further pre-trained on an additional 6 trillion tokens, it strengthens coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K tokens while maintaining strong general language performance.
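    As an illustration, a request to this model through Fireworks' OpenAI-compatible chat completions endpoint could be assembled as in the sketch below. The model identifier follows Fireworks' `accounts/fireworks/models/...` convention; treat the exact payload fields and parameter values as assumptions to verify against the current API docs.

```python
import json

# Hypothetical sketch: build a chat-completions request payload for
# DeepSeek Coder V2 Lite Instruct on Fireworks. The endpoint and payload
# shape follow Fireworks' OpenAI-compatible API; check the current docs.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the JSON body for a code-generation request."""
    return {
        "model": "accounts/fireworks/models/deepseek-coder-v2-lite-instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits deterministic code tasks
    }

payload = build_request("Write a Python function that reverses a linked list.")
print(json.dumps(payload, indent=2))
```

    The payload would then be POSTed to `API_URL` with an `Authorization: Bearer <API_KEY>` header using any HTTP client.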

    DeepSeek Coder V2 Lite Instruct API Features

    Fine-tuning

    Docs

    DeepSeek Coder V2 Lite Instruct can be customized with your data to improve responses. Fireworks uses LoRA (low-rank adaptation) to efficiently train and deploy your personalized model.
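    Fine-tuning data is typically supplied as a JSONL file of chat-format examples, one per line. A minimal sketch of preparing and validating such a file follows; the `messages` field names mirror the common chat format, but confirm the exact schema against Fireworks' fine-tuning docs.

```python
import json

# Hypothetical sketch: write and re-validate a chat-format JSONL training
# file for LoRA fine-tuning. Each line is one training example.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Sort a list in Python without sort()."},
            {"role": "assistant", "content": "def my_sort(xs):\n    return sorted(xs)"},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Re-read the file and check each row has the expected shape.
with open("train.jsonl") as f:
    rows = [json.loads(line) for line in f]
for row in rows:
    assert all("role" in m and "content" in m for m in row["messages"])
print(f"{len(rows)} training example(s) validated")
```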

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Instruct using Fireworks' reliable, high-performance system with no rate limits.

    DeepSeek Coder V2 Lite Instruct FAQs

    What is DeepSeek Coder V2 Lite Instruct and who developed it?

    DeepSeek Coder V2 Lite Instruct is a 16B parameter open-source Mixture-of-Experts (MoE) code language model developed by DeepSeek AI. It uses 2.4B active parameters and is fine-tuned for instruction following. It was built upon DeepSeek V2 and further pre-trained with an additional 6 trillion tokens for enhanced coding and math reasoning capabilities.

    What applications and use cases does DeepSeek Coder V2 Lite Instruct excel at?

    The model is optimized for:

    • Code generation and completion
    • Mathematical reasoning
    • Instruction following

    It supports 338 programming languages and performs competitively with GPT-4 Turbo on code-specific benchmarks.

    What is the maximum context length for DeepSeek Coder V2 Lite Instruct?

    The maximum context length for this model is 128K tokens.
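    Even a 128K-token window needs budgeting when prompts include large codebases. The sketch below checks a prompt against the limit using a crude characters-per-token heuristic; the ~4 chars/token ratio is an assumption, not the model's real tokenizer, which should be used for exact counts.

```python
# Hypothetical sketch: budget a prompt against the 128K-token context
# window using a rough ~4 characters/token heuristic. For exact counts,
# use the model's actual tokenizer.
CONTEXT_LIMIT = 128_000
CHARS_PER_TOKEN = 4  # crude average for English text and code

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_for_output: int = 2048) -> bool:
    """Leave headroom for the completion itself."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_LIMIT

print(fits_in_context("def add(a, b):\n    return a + b"))  # small prompt: True
print(fits_in_context("x" * 1_000_000))  # ~250K estimated tokens: False
```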

    Does DeepSeek Coder V2 Lite Instruct support streaming responses and function-calling schemas?

    Streaming responses are supported through the standard Fireworks API, but function-calling schemas are not supported for this model.
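    Streamed responses arrive as server-sent events, where each `data:` line carries a JSON chunk containing a content delta. A minimal reassembly sketch over example chunk lines is shown below; the chunk shape mirrors the OpenAI-compatible streaming format, and the sample lines are fabricated for illustration.

```python
import json

# Hypothetical sketch: reassemble streamed text from OpenAI-style SSE
# chunks. The sample "data:" lines are fabricated; real chunks come from
# the streaming HTTP response body.
sample_stream = [
    'data: {"choices": [{"delta": {"content": "def "}}]}',
    'data: {"choices": [{"delta": {"content": "add(a, b):"}}]}',
    "data: [DONE]",
]

def collect_stream(lines):
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        body = line[len("data: "):]
        if body == "[DONE]":  # sentinel marking end of stream
            break
        chunk = json.loads(body)
        delta = chunk["choices"][0]["delta"].get("content", "")
        parts.append(delta)
    return "".join(parts)

print(collect_stream(sample_stream))  # → def add(a, b):
```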

    How many parameters does DeepSeek Coder V2 Lite Instruct have?

    The model has 16B total parameters with 2.4B active parameters (Mixture-of-Experts).

    Is fine-tuning supported for DeepSeek Coder V2 Lite Instruct?

    Yes. Fireworks supports LoRA-based fine-tuning for this model.

    What rate limits apply on the shared endpoint?

    This model is not available on Fireworks' serverless (shared) endpoints; it runs on on-demand deployments, which have no rate limits.

    What license governs commercial use of DeepSeek Coder V2 Lite Instruct?

    The model weights are licensed under the DeepSeek Model License and the codebase under the MIT License; commercial use is permitted under the Model License.

    Metadata

    State
    Ready
    Created on
    7/4/2024
    Kind
    Base model
    Provider
    Fireworks AI
    Hugging Face
    deepseek-coder-v2-lite-instruct

    Specification

    Calibrated
    Yes
    Mixture-of-Experts
    Yes
    Parameters
    15.7B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Not supported
    Context Length
    163.8k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Support image input
    Not supported