
MythoMax L2 13B

Ready
fireworks/mythomax-l2-13b

    An improved, potentially even perfected variant of MythoMix, a MythoLogic-L2 and Huginn merge using a highly experimental tensor type merge technique. Proficient at both storytelling and roleplaying due to its unique nature.

    MythoMax L2 13B API Features

    Fine-tuning

    Docs

    MythoMax L2 13B can be customized with your data to improve responses. Fireworks uses LoRA (low-rank adaptation) to efficiently train and deploy your personalized model.
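
    As a rough sketch of what LoRA training data can look like, the snippet below writes a small JSONL file of chat-style examples. The "messages"/"role"/"content" field names are an assumption based on common chat fine-tuning formats; check the fine-tuning docs linked above for the exact schema and upload flow.

```python
import json

# Hypothetical roleplay/storytelling training examples in a chat-message layout.
# The "messages" / "role" / "content" schema is an assumption; confirm the exact
# dataset format in the Fireworks fine-tuning documentation before uploading.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are the narrator of an interactive fantasy story."},
            {"role": "user", "content": "The party reaches the gates of the sunken city."},
            {"role": "assistant", "content": "Barnacled spires loom out of the tide as the gates groan open..."},
        ]
    },
]

with open("mythomax_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```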

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for MythoMax L2 13B using Fireworks' reliable, high-performance system with no rate limits.
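
    A minimal sketch of querying such a deployment through Fireworks' OpenAI-compatible endpoint. The model identifier and the use of the plain completions route are assumptions; on-demand deployments may require a deployment-qualified name, so follow the deployment docs linked above.

```python
from openai import OpenAI  # pip install openai

# Fireworks exposes an OpenAI-compatible inference API; the model identifier
# below is an assumption and may need a deployment-qualified name for
# on-demand deployments (see the on-demand deployment docs).
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",  # placeholder
)

response = client.completions.create(
    model="accounts/fireworks/models/mythomax-l2-13b",
    prompt="Write the opening paragraph of a gothic mystery set in a lighthouse.",
    max_tokens=256,
    temperature=0.8,
)
print(response.choices[0].text)
```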

    MythoMax L2 13B FAQs

    What is MythoMax L2 13B and who developed it?

    MythoMax L2 13B is a 13 billion parameter language model developed by Gryphe. It is a merge of MythoLogic-L2 and Huginn, created using a custom tensor-ratio mixing technique that enhances coherence across the model. The result is a model optimized for storytelling, character roleplay, and long-form generation.

    What applications and use cases does MythoMax L2 13B excel at?

    This model is optimized for:

    • Roleplaying and character-based dialogue
    • Storywriting and creative generation
    • Conversational AI

    What is the maximum context length for MythoMax L2 13B?

    The model supports a context length of 4,096 tokens on Fireworks.

    What is the usable context window for MythoMax L2 13B?

    The full 4,096-token context window is usable on on-demand deployments, which have no rate limits.

    What is the maximum output length Fireworks allows for MythoMax L2 13B?

    Output length is limited by the 4,096-token context window, which is shared between input and output tokens.
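
    In practice the completion budget shrinks as the prompt grows. A minimal sketch of that arithmetic, using a rough 4-characters-per-token heuristic (exact counts require the model's own Llama-2 tokenizer):

```python
CONTEXT_WINDOW = 4096  # total tokens shared by prompt and completion

def estimate_tokens(text: str) -> int:
    """Crude estimate (~4 characters per token); use the real tokenizer for exact counts."""
    return max(1, len(text) // 4)

def max_completion_tokens(prompt: str, reserve: int = 16) -> int:
    """Largest max_tokens value that keeps prompt + output inside the window."""
    return max(0, CONTEXT_WINDOW - estimate_tokens(prompt) - reserve)

prompt = "Continue this story: the caravan crossed the salt flats at dusk..."
print(max_completion_tokens(prompt))  # roughly 4060 for this short prompt
```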

    What are known failure modes of MythoMax L2 13B?

    • Not safety-aligned: no moderation or filtering is applied, so it may generate unsafe or unfiltered content
    • No support for image input, embeddings, or rerankers
    • Function/tool use is not supported
    • Not suitable for applications requiring extended context beyond 4K tokens

    How many parameters does MythoMax L2 13B have?

    The model has 13 billion parameters.

    Is fine-tuning supported for MythoMax L2 13B?

    Yes. Fireworks supports LoRA-based fine-tuning on dedicated infrastructure for this model.

    How are tokens counted (prompt vs completion)?

    Token usage is counted as the sum of prompt (input) and completion (output) tokens.
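
    With the OpenAI-compatible API, that breakdown is reported on each response in a usage block. The field names below follow the standard OpenAI-compatible schema; the model identifier is the same assumption as in the deployment sketch above.

```python
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="https://api.fireworks.ai/inference/v1", api_key="YOUR_FIREWORKS_API_KEY")

resp = client.completions.create(
    model="accounts/fireworks/models/mythomax-l2-13b",  # assumed identifier
    prompt="Name three moons of Jupiter.",
    max_tokens=64,
)

# Per-request token accounting in the OpenAI-compatible schema:
print(resp.usage.prompt_tokens)      # input tokens
print(resp.usage.completion_tokens)  # output tokens
print(resp.usage.total_tokens)       # prompt + completion combined
```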

    What rate limits apply to MythoMax L2 13B?

    • Serverless: not available for this model
    • On-demand: available with no rate limits

    Metadata

    State
    Ready
    Created on
    2/13/2024
    Kind
    Base model
    Provider
    Gryphe
    Hugging Face
    MythoMax-L2-13b

    Specification

    Calibrated
    Yes
    Mixture-of-Experts
    No
    Parameters
    13B

    Supported Functionality

    Fine-tuning
    Supported
    Serverless
    Not supported
    Serverless LoRA
    Supported
    Context Length
    4.1k tokens
    Function Calling
    Not supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image input
    Not supported