A fine-tuned version of Mistral-7B trained on the OpenOrca dataset, which is based on the data generated for Microsoft Research's Orca paper. It was developed using OpenChat packing and trained with Axolotl.
Mistral 7B OpenOrca can be customized with your own data to improve responses. Fireworks uses low-rank adaptation (LoRA) to efficiently train and deploy your personalized model.
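For illustration only, the sketch below shows what LoRA-style customization of this model looks like using Hugging Face Transformers and PEFT. It is not Fireworks' managed fine-tuning pipeline, and the model ID and hyperparameters are assumptions.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face Transformers + PEFT.
# NOTE: this is not Fireworks' managed pipeline; the model ID and
# hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "Open-Orca/Mistral-7B-OpenOrca"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small low-rank adapter matrices on selected projections
# instead of updating all 7B base weights.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (illustrative)
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the adapter weights are trained, the resulting personalized model can be deployed alongside the shared base weights, which is what makes LoRA-based serving efficient.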
On-demand deployments give you dedicated GPUs for Mistral 7B OpenOrca, backed by Fireworks' reliable, high-performance infrastructure with no rate limits.
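As a sketch of querying the model once it is deployed, the snippet below calls Fireworks' OpenAI-compatible chat completions endpoint; the model slug and environment variable name are assumptions, so substitute your deployment's actual identifier and credentials.

```python
# Sketch: querying Mistral 7B OpenOrca through Fireworks' OpenAI-compatible
# chat completions API. The model slug below is an assumption; check your
# deployment's actual identifier.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/mistral-7b-openorca",  # assumed slug
        "messages": [
            {"role": "user", "content": "Summarize the Orca paper in two sentences."}
        ],
        "max_tokens": 256,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```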
Dataset: OpenOrca
Context length: 32,768 tokens
Serverless: Available
Pricing: $0.20 per 1M tokens