KAT-Dev-32B is an open-source 32B-parameter model for software engineering tasks. It is optimized through several training stages: a mid-training stage, a supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT) stage, and a large-scale agentic reinforcement learning (RL) stage.
| Option | Description |
| --- | --- |
| Fine-tuning | KAT-Dev-32B can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment | On-demand deployments let you run KAT-Dev-32B on dedicated GPUs with Fireworks' high-performance serving stack, with high reliability and no rate limits. |
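Once deployed, the model can be queried over Fireworks' OpenAI-compatible chat-completions endpoint. The sketch below only constructs the request payload without sending it; the endpoint URL and the model path `accounts/fireworks/models/kat-dev-32b` are assumptions, so check your account's actual model identifier before use.

```python
import json

# Assumed OpenAI-compatible endpoint for Fireworks inference (verify for your account).
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_request(prompt: str,
                  model: str = "accounts/fireworks/models/kat-dev-32b") -> dict:
    """Build the JSON body for a single-turn coding request."""
    return {
        "model": model,  # assumed model path; confirm in your Fireworks dashboard
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.2,  # low temperature suits code-generation tasks
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

To send the request, POST this payload to `API_URL` with an `Authorization: Bearer <your API key>` header using any HTTP client.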