LLM
A fine-tuned version of Mistral-7B trained on the OpenOrca dataset, an open reproduction of the dataset generated for Microsoft Research's Orca paper. It was fine-tuned with OpenChat packing and trained using Axolotl.
On-demand deployments let you run Mistral 7B OpenOrca on dedicated GPUs using Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.
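As a minimal sketch of querying a deployed model, the snippet below builds an OpenAI-compatible chat-completion request and sends it with the standard library. The endpoint URL and model identifier are assumptions here; check the Fireworks documentation for the exact values for your deployment.

```python
import json
import os
import urllib.request

# Assumed values -- verify the endpoint and model id in the Fireworks docs.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/mistral-7b-openorca"


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def query(prompt: str) -> str:
    """Send the request; expects FIREWORKS_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload follows the OpenAI chat-completions shape, the same deployment can typically also be reached with OpenAI-compatible client libraries by pointing their base URL at the Fireworks endpoint.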