DeepSeek V3.1

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. The dataset was expanded by collecting additional long documents, and both training phases were substantially extended: the 32K extension phase was increased 10-fold to 630B tokens, and the 128K extension phase was extended 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

Try Model
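For a quick programmatic test, here is a minimal sketch against Fireworks' OpenAI-compatible chat completions endpoint. The model identifier shown is an assumption; confirm the exact id on the model page.

```python
# A minimal sketch of calling DeepSeek V3.1 serverless on Fireworks via the
# OpenAI-compatible API. The model id "accounts/fireworks/models/deepseek-v3p1"
# is an assumption; check the model page for the exact identifier.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # replace with your key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3p1",  # assumed id
    messages=[
        {"role": "user", "content": "Summarize the two-phase long-context extension used for DeepSeek-V3.1."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```

The same client works for streaming by passing stream=True, since the endpoint follows the OpenAI chat completions interface.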

Fireworks Features

Fine-tuning

DeepSeek V3.1 can be customized with your data to improve responses. Fireworks uses LoRA to train and deploy your personalized model efficiently; a sketch of the underlying idea follows below.

Learn More
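As a rough illustration of why LoRA fine-tuning is cheap, here is a minimal sketch of the general technique (not Fireworks' internal implementation): the pretrained weight matrix stays frozen, and only two small low-rank factors are trained.

```python
# A minimal sketch of the LoRA idea: instead of updating a full weight matrix W,
# train low-rank factors A and B and use W + (alpha / r) * (B @ A).
# All shapes and names here are illustrative.
import numpy as np

d_out, d_in, r, alpha = 4096, 4096, 16, 32

W = np.random.randn(d_out, d_in)      # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank update merged into the frozen weight."""
    return x @ (W + (alpha / r) * (B @ A)).T

# Trainable parameters: r * (d_in + d_out) = 131,072,
# versus d_out * d_in = 16,777,216 for full fine-tuning of W.
print(r * (d_in + d_out), d_out * d_in)
```

Because only the small A and B matrices are trained and stored per customer, many personalized adapters can be served on top of one shared base model.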

Serverless

Immediately run the model on pre-configured GPUs and pay per token.

Learn More

On-demand Deployment

On-demand deployments give you dedicated GPUs for DeepSeek V3.1 using Fireworks' reliable, high-performance system with no rate limits.

Learn More

Info

Provider: DeepSeek

Model Type: LLM

Context Length: 163,840 tokens

Serverless: Available

Fine-Tuning: Available

Pricing Per 1M Tokens: $1.20
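At the listed rate, per-request cost is simple arithmetic. The sketch below assumes input and output tokens are billed at the same $1.20 per 1M rate; check the pricing page for the current input/output split.

```python
# Back-of-the-envelope cost estimate at the listed serverless rate of
# $1.20 per 1M tokens. A uniform rate for prompt and completion tokens
# is an assumption for illustration.
PRICE_PER_1M = 1.20

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated charge in USD for one request."""
    return (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_1M

# e.g. a 100K-token document plus a 2K-token summary is roughly $0.12
print(f"${estimate_cost(100_000, 2_000):.4f}")
```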