

DBRX Instruct

fireworks/dbrx-instruct

    DBRX Instruct is a 132B-parameter mixture-of-experts (MoE) large language model developed by Databricks. It is an instruction fine-tuned version of DBRX Base and specializes in few-turn interactions. The transformer-based, decoder-only model was trained on 12 trillion tokens of text and code. Its fine-grained MoE architecture uses 16 experts with 4 active per token, so 36B parameters are activated on any given input, which improves model quality relative to coarser MoE designs. It supports a context length of up to 32K tokens and incorporates rotary position embeddings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
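
    The model is served through Fireworks' OpenAI-compatible chat completions API. The sketch below shows one way to query it with the openai Python client; the account-scoped model name accounts/fireworks/models/dbrx-instruct and the FIREWORKS_API_KEY environment variable follow Fireworks' usual conventions and are assumptions for illustration, so adjust them to your account.

```python
# Minimal sketch: query DBRX Instruct via Fireworks' OpenAI-compatible API.
# Assumes an API key in the FIREWORKS_API_KEY environment variable and the
# conventional account-scoped model name; adjust both for your account.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/dbrx-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."},
    ],
    max_tokens=256,
    temperature=0.2,
)

print(response.choices[0].message.content)
```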

    DBRX Instruct API Features

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for DBRX Instruct using Fireworks' reliable, high-performance system with no rate limits.
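
    Once an on-demand deployment has been created (see the Docs link above), requests go to the same chat completions endpoint. The sketch below is an assumption-laden illustration: the model#deployment addressing and the account and deployment identifiers are placeholders, so check the deployment docs for the exact form shown in your console.

```python
# Minimal sketch: send a request to a dedicated (on-demand) DBRX Instruct deployment.
# The "<model>#<deployment>" addressing and the IDs below are placeholders;
# substitute the values shown for your deployment in the Fireworks console.
import os

import requests

API_KEY = os.environ["FIREWORKS_API_KEY"]  # assumed env var for illustration
MODEL = (
    "accounts/fireworks/models/dbrx-instruct"
    "#accounts/your-account/deployments/your-deployment-id"  # placeholder deployment
)

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hello from a dedicated deployment."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```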

    Metadata

    State: Ready
    Created on: 3/29/2024
    Kind: Base model
    Provider: Databricks
    Hugging Face: dbrx-instruct

    Specification

    Calibrated: No
    Mixture-of-Experts: Yes
    Parameters: 132B

    Supported Functionality

    Fine-tuning: Not supported
    Serverless: Not supported
    Serverless LoRA: Not supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported