
DeepSeek Coder V2 Lite Base

fireworks/deepseek-coder-v2-lite-base

    DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. It is further pre-trained from an intermediate checkpoint of DeepSeek-V2 on an additional 6 trillion tokens, expanding programming-language support from 86 to 338 languages and extending the context length from 16K to 128K tokens.
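    As a quick illustration, here is a minimal sketch of prompting the model through Fireworks' OpenAI-compatible completions endpoint. The model identifier accounts/fireworks/models/deepseek-coder-v2-lite-base and the FIREWORKS_API_KEY environment variable follow Fireworks' usual conventions and are assumptions here, not confirmed values.

```python
# Minimal sketch: plain-text completion against Fireworks' OpenAI-compatible API.
# The model id and the FIREWORKS_API_KEY environment variable are assumptions.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/deepseek-coder-v2-lite-base",
        "prompt": "# Python function that checks whether a number is prime\ndef is_prime(n):",
        "max_tokens": 128,
        "temperature": 0.2,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```

    Because this is a base (non-chat) model, the plain completions endpoint with a code-style prompt is the natural interface rather than a chat endpoint.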

    DeepSeek Coder V2 Lite Base API Features

    Fine-tuning


    DeepSeek Coder V2 Lite Base can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently.
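    The exact dataset schema is defined in the fine-tuning docs; the prompt/completion JSONL layout below is only an assumption, used to illustrate how a small training file might be prepared before uploading it and creating a LoRA fine-tuning job.

```python
# Illustrative sketch only: build a small JSONL dataset for LoRA fine-tuning.
# The prompt/completion field names are an assumption; consult the Fireworks
# fine-tuning docs for the exact schema expected for base models.
import json

examples = [
    {
        "prompt": "# Write a function that reverses a string\ndef reverse(s):",
        "completion": "\n    return s[::-1]\n",
    },
    {
        "prompt": "# Write a function that sums a list of ints\ndef total(xs):",
        "completion": "\n    return sum(xs)\n",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```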

    On-demand Deployment


    On-demand deployments give you dedicated GPUs for DeepSeek Coder V2 Lite Base using Fireworks' reliable, high-performance system with no rate limits.
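    Once a dedicated deployment is live, it can be queried through the same OpenAI-compatible interface. The sketch below uses the openai Python SDK pointed at Fireworks' base URL; the deployment-qualified model name is hypothetical and should be replaced with the identifier shown for your deployment.

```python
# Sketch: querying an on-demand deployment via the OpenAI-compatible SDK.
# Assumes the `openai` Python package and a FIREWORKS_API_KEY environment
# variable; the deployment-qualified model name below is hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

completion = client.completions.create(
    model="accounts/your-account/deployedModels/deepseek-coder-v2-lite-base-example",  # hypothetical
    prompt="# Python: merge two sorted lists\ndef merge(a, b):",
    max_tokens=128,
)
print(completion.choices[0].text)
```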

    Metadata

    State: Ready
    Created on: 7/6/2024
    Kind: Base model
    Provider: N/A
    Hugging Face: deepseek-coder-v2-lite-base

    Specification

    Calibrated: Yes
    Mixture-of-Experts: Yes
    Parameters: 15.7B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: 163.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported