An improved, potentially even perfected, variant of MythoMix: a merge of MythoLogic-L2 and Huginn built with a highly experimental tensor-type merge technique. This unique construction makes it proficient at both storytelling and roleplaying.
| Feature | Description |
| --- | --- |
| Fine-tuning (Docs) | MythoMax L2 13B can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model. |
| On-demand Deployment (Docs) | On-demand deployments give you dedicated GPUs for MythoMax L2 13B using Fireworks' reliable, high-performance system with no rate limits. |
MythoMax L2 13B is a 13 billion parameter language model developed by Gryphe. It is a merge of MythoLogic-L2 and Huginn, created using a custom tensor-ratio mixing technique that enhances coherence across the model. The result is a model optimized for storytelling, character roleplay, and long-form generation.
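Gryphe has not published the exact tensor-ratio recipe, but the general idea of a layer-wise weighted merge can be sketched as follows. This is a minimal illustration, not the actual MythoMax merge script; the layer names and per-layer ratios below are invented for the example.

```python
def merge_tensors(a, b, ratio):
    """Blend two weight tensors (flat lists) element-wise: ratio*a + (1-ratio)*b."""
    return [ratio * x + (1.0 - ratio) * y for x, y in zip(a, b)]

def merge_models(model_a, model_b, layer_ratios):
    """Merge two state dicts layer by layer, each layer with its own blend ratio."""
    return {
        name: merge_tensors(model_a[name], model_b[name], layer_ratios[name])
        for name in model_a
    }

# Toy example with two "layers" of four weights each (hypothetical values).
mythologic = {"layer.0": [1.0, 2.0, 3.0, 4.0], "layer.1": [0.0, 0.0, 0.0, 0.0]}
huginn     = {"layer.0": [3.0, 2.0, 1.0, 0.0], "layer.1": [4.0, 4.0, 4.0, 4.0]}
ratios     = {"layer.0": 0.5, "layer.1": 0.25}  # hypothetical per-layer ratios

merged = merge_models(mythologic, huginn, ratios)
```

Varying the ratio per layer (rather than using one global ratio) is what distinguishes a tensor-ratio merge from a plain uniform average of the two models.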
This model is optimized for:
- Storytelling
- Character roleplay
- Long-form generation
The model supports a context length of 4,096 tokens on Fireworks.
The full 4,096-token window is available on on-demand deployments with no rate limits.
Output length is constrained by the shared 4,096-token context window (input and output tokens combined).
The model has 13 billion parameters.
Yes. Fireworks supports LoRA-based fine-tuning on dedicated infrastructure for this model.
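LoRA freezes the base weight matrix W and trains a low-rank update, so the effective weight is W + (alpha / r) * B @ A, where B is d x r, A is r x k, and r is small. A minimal pure-Python sketch of that reconstruction (illustrative only; Fireworks handles the adapter math internally):

```python
def matmul(B, A):
    """Multiply a (d x r) matrix by an (r x k) matrix, both as lists of rows."""
    r, k = len(A), len(A[0])
    return [[sum(row[i] * A[i][j] for i in range(r)) for j in range(k)] for row in B]

def apply_lora(W, A, B, alpha, rank):
    """Effective weight: W + (alpha / rank) * (B @ A)."""
    scale = alpha / rank
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, delta)]

# Toy 2x2 base weight with a rank-1 adapter (hypothetical values).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]        # d x r = 2 x 1
A = [[0.5, 0.5]]          # r x k = 1 x 2
W_eff = apply_lora(W, A, B, alpha=2.0, rank=1)
```

Because only A and B are trained, the adapter is a tiny fraction of the 13B base weights, which is what makes LoRA fine-tuning and deployment efficient.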
Token usage is based on combined input and output tokens.