Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
Kimi K2 Instruct can be fine-tuned on your own data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
Run the model immediately on pre-configured GPUs with serverless, pay-per-token pricing.
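The serverless endpoint is OpenAI-compatible, so a request is a plain JSON chat-completions call. The sketch below is a minimal example; the endpoint URL, model id (`accounts/fireworks/models/kimi-k2-instruct`), and the `FIREWORKS_API_KEY` environment variable are assumptions to verify against your Fireworks account.

```python
import json
import os
import urllib.request

# Assumed endpoint and model id for Kimi K2 Instruct on Fireworks;
# confirm both in your Fireworks dashboard before relying on them.
ENDPOINT = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/kimi-k2-instruct"

payload = {
    "model": MODEL_ID,
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."}
    ],
}

api_key = os.environ.get("FIREWORKS_API_KEY")
if api_key:
    # Send the request only when an API key is configured.
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    # Dry run: show the request body that would be sent.
    print(json.dumps(payload, indent=2))
```

Because billing is per token, `max_tokens` is the main knob for bounding the cost of any single completion.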
On-demand deployments give you dedicated GPUs for Kimi K2 Instruct on Fireworks' reliable, high-performance infrastructure with no rate limits.
Context length: 131,072 tokens
Fine-tuning: Available
Serverless: Available
Price (per 1M tokens, input / output): $0.6 / $2.5
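Assuming the listed prices are USD per 1M tokens ($0.6 for input, $2.5 for output), a quick sketch of how a serverless request's cost works out:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price: float = 0.6,
                      output_price: float = 2.5) -> float:
    """Estimate serverless cost in USD, assuming prices are per 1M tokens
    ($0.6 input / $2.5 output, per the pricing above)."""
    return (input_tokens / 1_000_000 * input_price
            + output_tokens / 1_000_000 * output_price)

# Example: a 2,000-token prompt with an 800-token completion.
print(round(estimate_cost_usd(2_000, 800), 6))  # → 0.0032
```

At these rates, output tokens cost roughly 4x as much as input tokens, so long completions dominate the bill for most workloads.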