Zephyr 7B Beta

fireworks/zephyr-7b-beta

    Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series and is a fine-tuned version of mistralai/Mistral-7B-v0.1, trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO).
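
    The description above refers to Direct Preference Optimization, which fine-tunes the model on pairs of preferred and rejected responses. A minimal PyTorch sketch of the DPO loss, to make the idea concrete (the function name, tensor layout, and beta value are illustrative, not taken from Zephyr's actual training code):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each tensor holds the summed log-probability of the chosen (preferred)
    or rejected completion under the trainable policy or the frozen
    reference model (here, the original Mistral-7B weights).
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Implicit reward margin between chosen and rejected, scaled by beta.
    logits = beta * (chosen_logratio - rejected_logratio)
    # Maximizing log-sigmoid of the margin pushes the policy to prefer
    # the chosen completion more strongly than the reference model does.
    return -F.logsigmoid(logits).mean()
```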

    Fireworks Features

    Fine-tuning

    Zephyr 7B Beta can be customized with your data to improve responses. Fireworks uses LoRA to efficiently train and deploy your personalized model.
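
    Fine-tuning jobs typically consume a JSONL file of chat-formatted conversations. A minimal sketch of preparing such a file, assuming the common "messages" schema (the exact schema, file name, and upload flow should be checked against the fine-tuning docs; all records below are placeholders):

```python
import json

# Hypothetical training conversations in the chat "messages" format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I rotate my API key?"},
            {"role": "assistant", "content": "Revoke the old key under Settings > API Keys, then create a new one."},
        ]
    },
    # ... more conversations ...
]

# Write one JSON object per line (JSONL), the usual fine-tuning upload format.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```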

    On-demand Deployment

    On-demand deployments give you dedicated GPUs for Zephyr 7B Beta using Fireworks' reliable, high-performance system with no rate limits.
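
    Once a deployment is live, the model can be queried through Fireworks' OpenAI-compatible chat completions endpoint. A minimal sketch using the openai Python client; the API key and the model/deployment identifier are placeholders, so check your deployment details for the exact string (serverless access is not available for this model, per the table below):

```python
from openai import OpenAI

# Fireworks exposes an OpenAI-compatible endpoint; the key and the exact
# model/deployment identifier below are placeholders, not confirmed values.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/zephyr-7b-beta",  # assumed identifier
    messages=[
        {"role": "system", "content": "You are a friendly chatbot."},
        {"role": "user", "content": "Summarize what DPO fine-tuning does."},
    ],
    max_tokens=128,
    temperature=0.7,
)
print(response.choices[0].message.content)
```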

    Metadata

    State: Ready
    Created on: 11/1/2023
    Kind: Base model
    Provider: Hugging Face
    Hugging Face: zephyr-7b-beta

    Specification

    Calibrated: No
    Mixture-of-Experts: No
    Parameters: 7.2B

    Supported Functionality

    Fine-tuning: Supported
    Serverless: Not supported
    Serverless LoRA: Supported
    Context Length: 32.8k tokens
    Function Calling: Not supported
    Embeddings: Not supported
    Rerankers: Not supported
    Image input: Not supported