A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.
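Task arithmetic merges models by adding weighted "task vectors" (the parameter deltas between each fine-tuned model and their shared base) back onto the base model. A minimal sketch of the idea, with plain Python lists standing in for weight tensors; mergekit's actual implementation operates on full model checkpoints and is not shown here:

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge models via task arithmetic.

    base: dict mapping parameter name -> list of floats (stand-in for a tensor)
    finetuned_models: list of dicts with the same keys/shapes as `base`
    weights: per-model scaling factors for each task vector
    """
    merged = {}
    for name, base_vals in base.items():
        # Task vector for each model: weight * (finetuned - base), elementwise.
        deltas = [
            [w * (fv - bv) for fv, bv in zip(model[name], base_vals)]
            for model, w in zip(finetuned_models, weights)
        ]
        # Add the summed task vectors back onto the base parameters.
        merged[name] = [bv + sum(col) for bv, col in zip(base_vals, zip(*deltas))]
    return merged


# Toy example: two fine-tunes of a two-parameter "model".
base = {"w": [1.0, 2.0]}
ft_a = {"w": [2.0, 2.0]}  # moved the first parameter
ft_b = {"w": [1.0, 3.0]}  # moved the second parameter
result = task_arithmetic_merge(base, [ft_a, ft_b], weights=[0.5, 1.0])
# result["w"] == [1.5, 3.0]
```

Scaling each task vector independently is what lets a merge emphasize some source models over others.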
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | Toppy M 7B can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for Toppy M 7B using Fireworks' reliable, high-performance system with no rate limits. |