

Mistral Large 3 675B Instruct 2512

fireworks/mistral-large-3-fp8

    Mistral Large 3 is a state-of-the-art, general-purpose multimodal granular Mixture-of-Experts model with 41B active parameters and 675B total parameters, trained from the ground up on 3,000 H200 GPUs. This is the instruct post-trained version in FP8, fine-tuned for instruction following, making it ideal for chat, agentic, and instruction-based use cases. Designed for reliability and long-context comprehension, it is engineered for production-grade assistants, retrieval-augmented systems, scientific workloads, and complex enterprise workflows.
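    A chat request for this model can be sketched as follows, assuming Fireworks' OpenAI-compatible chat completions endpoint (`https://api.fireworks.ai/inference/v1/chat/completions`); the model id is the one listed on this page, and the prompt text is illustrative. The payload is built but not sent, so no API key is needed:

    ```python
    # Minimal sketch of a chat completions request body for this model,
    # assuming Fireworks' OpenAI-compatible inference API. The payload is
    # constructed locally; sending it requires an API key and an HTTP client.
    import json

    payload = {
        # Model id as listed on this page
        "model": "fireworks/mistral-large-3-fp8",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."},
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    }

    # Serialize to the JSON body that would be POSTed to the endpoint
    body = json.dumps(payload)
    print(len(body) > 0)
    ```

    With an API key, the same body can be POSTed with any HTTP client (for example `requests`) using an `Authorization: Bearer <key>` header.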

    Mistral Large 3 675B Instruct 2512 API Features

    On-demand Deployment

    Docs

    On-demand deployments give you dedicated GPUs for Mistral Large 3 675B Instruct 2512 using Fireworks' reliable, high-performance system with no rate limits.

    Metadata

    State
    Ready
    Created on
    12/2/2025
    Kind
    Base model
    Provider
    Mistral
    Hugging Face
    Mistral-Large-3-675B-Instruct-2512

    Specification

    Calibrated
    No
    Mixture-of-Experts
    Yes
    Parameters
    675B

    Supported Functionality

    Fine-tuning
    Not supported
    Serverless
    Not supported
    Serverless LoRA
    Not supported
    Context Length
    256k tokens
    Function Calling
    Supported
    Embeddings
    Not supported
    Rerankers
    Not supported
    Image Input
    Supported
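    Since the page lists both function calling and image input as supported, the two can be combined in one OpenAI-style request body. This is a hedged sketch: the tool name (`get_weather`) and the image URL are illustrative placeholders, not part of the Fireworks API itself:

    ```python
    # Sketch of a request body combining function calling (tools) with an
    # image input message part, in the OpenAI-compatible format. The tool
    # definition and image URL below are hypothetical examples.
    import json

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    request_body = {
        "model": "fireworks/mistral-large-3-fp8",
        "messages": [{
            "role": "user",
            # Multimodal content: a text part plus an image_url part
            "content": [
                {"type": "text", "text": "What city is this, and what's the weather there?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/skyline.jpg"}},
            ],
        }],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

    print(json.dumps(request_body, indent=2)[:40])
    ```

    If the model decides to call the tool, the response's message will carry a `tool_calls` entry with the function name and JSON-encoded arguments, which the client executes and feeds back as a `tool` role message.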