

DeepSeek Coder V2 Lite Instruct

fireworks/deepseek-coder-v2-lite-instruct

    DeepSeek Coder V2 Lite Instruct is a 16-billion-parameter open-source Mixture-of-Experts (MoE) code language model with 2.4 billion active parameters, developed by DeepSeek AI. Fine-tuned for instruction following, it achieves performance comparable to GPT-4 Turbo on code-specific tasks. Pre-trained on an additional 6 trillion tokens, it improves coding and mathematical reasoning, supports 338 programming languages, and extends the context length from 16K to 128K while maintaining strong general language performance.
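    As a quick illustration of how the model can be queried, the sketch below sends a chat completion request to Fireworks' OpenAI-compatible inference endpoint. The endpoint URL and full model path follow Fireworks' published API conventions, but the prompt and sampling parameters here are placeholders; check the current API docs before relying on them.

```python
# Minimal sketch: querying DeepSeek Coder V2 Lite Instruct through Fireworks'
# OpenAI-compatible chat completions endpoint. Assumes a FIREWORKS_API_KEY
# environment variable and the `requests` package.
import os
import requests

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

payload = {
    # Full model path on Fireworks for the model ID shown above.
    "model": "accounts/fireworks/models/deepseek-coder-v2-lite-instruct",
    "messages": [
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```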

    DeepSeek Coder V2 Lite Instruct API Features

    Fine-tuning


    DeepSeek Coder V2 Lite Instruct can be customized with your own data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
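    To make the fine-tuning workflow concrete, here is a minimal sketch of preparing training data as a JSONL file of chat-style records, one example per line. The "messages" schema mirrors the chat format used at inference time; the exact field names and dataset requirements are assumptions to verify against Fireworks' fine-tuning docs.

```python
# Minimal sketch: preparing a LoRA fine-tuning dataset as JSONL, one chat-style
# record per line. The "messages" schema is an assumption based on the common
# chat fine-tuning format; confirm the expected layout in Fireworks' docs.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
            {"role": "assistant", "content": "Here is the refactored version: ..."},
        ]
    },
    # ... more examples covering the coding tasks you care about
]

with open("coder_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

    The resulting file can then be uploaded as a dataset and referenced when launching a LoRA fine-tuning job from the Fireworks console or CLI.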

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Instruct using Fireworks' reliable, high-performance system with no rate limits.
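    Once a dedicated deployment exists, requests target it through the same chat completions endpoint but with a deployment-specific model identifier. The identifier format and names below are placeholders (assumptions), so look up the exact value for your deployment in the Fireworks console.

```python
# Minimal sketch: sending a chat request to a dedicated on-demand deployment of
# DeepSeek Coder V2 Lite Instruct. The deployment-style model identifier below
# is a hypothetical placeholder; replace it with your deployment's identifier.
import os
import requests

response = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        # Hypothetical deployment identifier; confirm the real one in the console.
        "model": "accounts/your-account/deployedModels/your-deployment-id",
        "messages": [{"role": "user", "content": "Explain what this regex does: ^a.*z$"}],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```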


    Metadata

    State: Ready
    Created on: 7/4/2024
    Kind: Base model
    Provider: N/A
    Hugging Face: deepseek-coder-v2-lite-instruct

    Specification

    Calibrated: Yes
    Mixture-of-Experts: Yes
    Parameters: 15.7B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: 163.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported