
DBRX Instruct

DBRX Instruct is a 132B-parameter mixture-of-experts (MoE) large language model developed by Databricks. An instruction-fine-tuned version of DBRX Base, it specializes in few-turn interactions. The transformer-based, decoder-only model was pretrained on 12 trillion tokens of text and code. Its fine-grained MoE architecture activates only 36B of the 132B parameters for any given input, so the model gains quality from its large total capacity while keeping per-token compute modest. It supports a context length of up to 32K tokens and uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
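To make the fine-grained MoE idea concrete, here is a minimal, illustrative PyTorch sketch of top-k expert routing. The toy dimensions, the 16-expert/top-4 split, and all names below are assumptions for illustration rather than DBRX's actual implementation; only the principle carries over, namely that each token activates a small subset of experts, so only a fraction of the total parameters is live per input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedMoE(nn.Module):
    """Toy sketch of a fine-grained mixture-of-experts layer.

    Illustrative only: each token is routed to top_k of n_experts
    feed-forward experts, analogous to how DBRX activates 36B of its
    132B parameters per input. Dimensions here are toy values.
    """

    def __init__(self, d_model: int = 64, d_ff: int = 128,
                 n_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Pick the top-k experts per token.
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)            # 8 toy token embeddings
print(FineGrainedMoE()(tokens).shape)  # torch.Size([8, 64])
```

Note that only the routed experts run a forward pass for each token; the dense looping here is for readability, while production MoE kernels batch tokens per expert instead.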


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for DBRX Instruct using Fireworks' reliable, high-performance system with no rate limits.
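Below is a hedged sketch of what querying DBRX Instruct might look like through an OpenAI-compatible chat completions client. The base URL and model identifier are assumptions based on Fireworks' public naming conventions, not values confirmed by this page; substitute the ones from your own deployment.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_FIREWORKS_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/dbrx-instruct",   # assumed model id
    messages=[
        {"role": "user",
         "content": "Summarize what a mixture-of-experts model is."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```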


Info

Provider: Databricks
Model Type: LLM
Context Length: 32,768 tokens
Pricing: $1.20 per 1M tokens
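For a rough sense of the pricing, a short hypothetical calculation: at $1.20 per 1M tokens, a request consuming 4,000 tokens costs about $0.0048.

```python
# Hypothetical back-of-the-envelope cost estimate at $1.20 per 1M tokens.
PRICE_PER_MILLION_TOKENS = 1.20  # USD, from the table above

def estimate_cost(total_tokens: int) -> float:
    """Return the approximate USD cost for a given token count."""
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# e.g. a request using 3,000 prompt + 1,000 completion tokens:
print(f"${estimate_cost(4_000):.4f}")  # -> $0.0048
```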