
Fine Tuning

Tune model quality, speed, and costs to your use case

Customize open models using the most powerful optimization and tuning techniques

Supervised Fine Tuning

Fine-tune with your own data

Customize model behavior by fine-tuning with your own data. Fireworks makes supervised fine-tuning fast, reliable, and cost-effective with an optimized training stack. Train large, state-of-the-art models using advanced methods like quantization-aware training to achieve ideal results.
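As a rough illustration, a supervised fine-tuning job typically starts from a small JSONL file of chat-style examples. The sketch below shows one way to prepare such a file; the field names, file path, and example content are assumptions for illustration, not the exact dataset schema Fireworks expects.

```python
# Minimal sketch: writing a JSONL chat dataset for supervised fine-tuning.
# Field names and records are illustrative; check the Fireworks docs for
# the exact schema your fine-tuning job requires.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my API key?"},
            {"role": "assistant", "content": "Go to Settings > API Keys and click Rotate."},
        ]
    },
    # ...more task-specific examples...
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```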

Multi-LoRA

Serve personalized models at scale

Deploy hundreds of fine-tuned models without added infrastructure or costs. With Multi-LoRA, you can deploy a fine-tuned model for every customer and task, perfect for personalizing quality in B2B interactions. One-click deployment. Fully orchestrated.
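For a hedged sketch of what this can look like in application code: because each fine-tuned LoRA is served as its own model ID, a per-customer lookup over an OpenAI-compatible chat completions endpoint is enough to route requests. The account path and model IDs below are hypothetical placeholders.

```python
# Hedged sketch: route each customer's request to their own fine-tuned
# LoRA model via an OpenAI-compatible chat completions endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

# One fine-tuned adapter per customer (illustrative model IDs).
CUSTOMER_MODELS = {
    "acme": "accounts/your-account/models/acme-support-lora",
    "globex": "accounts/your-account/models/globex-support-lora",
}

def answer(customer_id: str, question: str) -> str:
    response = client.chat.completions.create(
        model=CUSTOMER_MODELS[customer_id],
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```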

Reinforcement Fine Tuning

Improve model quality with reinforcement fine-tuning

Train open models with your own Python evaluator. Fireworks handles the rest, so you get the highest-quality model for your use case.
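For intuition, an evaluator is simply Python code that turns a model response into a score, with higher meaning better. The sketch below rewards well-formed JSON containing required fields; the function name and signature are illustrative only, not the exact interface the platform expects.

```python
# Illustrative sketch of the kind of Python evaluator RFT trains against:
# score each model response, higher is better.
import json

def evaluate(response_text: str) -> float:
    """Reward well-formed JSON objects containing the fields the task requires."""
    try:
        parsed = json.loads(response_text)
    except json.JSONDecodeError:
        return 0.0  # not valid JSON at all
    if not isinstance(parsed, dict):
        return 0.25  # valid JSON, but not the expected object shape
    required = {"answer", "confidence"}
    present = required & parsed.keys()
    return len(present) / len(required)  # partial credit for each required field
```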


Train expert open models with just a few examples

RFT lets open models match frontier quality up to 10× faster, using just an evaluator and a few examples.


Frontier model quality across key use cases

Use RFT to train models for accurate function calling, cleaner code generation, stronger creative writing, and 90%+ accuracy on math tasks.


Build custom reward functions

Reward-kit lets you score outputs your way with custom evaluators or ready-made examples.
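As a hedged example of the kind of scoring rule you might plug in, the snippet below blends keyword coverage with brevity into a single score. It is plain Python illustrating the scoring logic only; the actual reward-kit decorator, argument types, and result objects are defined by the library's own documentation.

```python
# Hedged sketch of a custom scoring rule you might register with reward-kit.
# Plain Python only; the real reward-kit wrapper types are an assumption here.
def length_and_keyword_score(response_text: str, keywords: list[str]) -> float:
    """Score keyword coverage plus brevity (illustrative heuristic, range [0, 1])."""
    hits = sum(1 for kw in keywords if kw.lower() in response_text.lower())
    coverage = hits / max(len(keywords), 1)
    brevity = 1.0 if len(response_text) <= 400 else 0.5
    return 0.7 * coverage + 0.3 * brevity  # weighted blend of the two signals
```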

Start building today

Instantly run popular and specialized models.