
Mistral 7B OpenOrca

A fine-tuned version of Mistral-7B trained on the OpenOrca dataset, an open reproduction of the dataset described in Microsoft Research's Orca paper. Fine-tuned using OpenChat packing and trained with Axolotl.
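As a sketch of how you might query this model, the snippet below builds an OpenAI-style chat completion payload. The endpoint URL and model id (`accounts/fireworks/models/mistral-7b-openorca`) are assumptions; confirm both against the current Fireworks model catalog before use.

```python
import json

# Hypothetical model id and endpoint -- verify against the Fireworks docs.
MODEL_ID = "accounts/fireworks/models/mistral-7b-openorca"
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat completion payload for this model."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize the Orca training approach in one sentence.")
print(json.dumps(payload, indent=2))
```

Send the payload to the endpoint with any HTTP client, passing your API key in an `Authorization: Bearer` header.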


Fireworks Features

Fine-tuning

Mistral 7B OpenOrca can be fine-tuned on your own data to improve responses. Fireworks uses low-rank adaptation (LoRA) to train and deploy your personalized model efficiently.
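To illustrate why LoRA makes fine-tuning efficient (this is generic LoRA arithmetic, not Fireworks' internal configuration): instead of updating a full d_out × d_in weight matrix, LoRA trains two low-rank factors B (d_out × r) and A (r × d_in) with r much smaller than either dimension.

```python
def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Parameters trained by a LoRA adapter for one weight matrix."""
    return d_out * r + r * d_in

# Example: one 4096 x 4096 projection, a typical size in a 7B-class model.
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, r=8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

At rank 8 the adapter trains 65,536 parameters against 16,777,216 in the full matrix, a 256x reduction, which is what makes per-customer fine-tunes cheap to train and serve.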


On-demand Deployment

On-demand deployments give you dedicated GPUs for Mistral 7B OpenOrca using Fireworks' reliable, high-performance system with no rate limits.


Info & Pricing

Provider: OpenOrca
Model Type: LLM
Context Length: 32,768 tokens
Fine-Tuning: Available
Pricing: $0.20 per 1M tokens
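At the listed rate, per-request cost is simple to estimate; the helper below is an illustrative calculation based only on the $0.20 per 1M tokens figure above.

```python
PRICE_PER_1M_TOKENS = 0.20  # USD, the serverless rate listed above

def cost_usd(tokens: int) -> float:
    """Cost of processing `tokens` tokens at the listed per-1M rate."""
    return tokens / 1_000_000 * PRICE_PER_1M_TOKENS

# e.g. filling the full 32,768-token context window once:
print(f"${cost_usd(32_768):.6f}")
```

Filling the entire context window once costs well under a cent at this rate.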