The Qwen3 Embedding 8B model is the latest text embedding model of the Qwen family, designed specifically for text embedding tasks. It inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of the dense foundation models of the Qwen3 series, and delivers significant advances across text embedding tasks including text retrieval, code retrieval, text classification, and text clustering.
| Deployment option | Description |
| --- | --- |
| Serverless (Docs) | Immediately run the model on pre-configured GPUs with pay-per-token pricing. |
| On-demand deployment (Docs) | Dedicated GPUs for Qwen3 Embedding 8B on Fireworks' reliable, high-performance system with no rate limits. |
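The serverless option above is typically reached through an OpenAI-compatible embeddings endpoint. A minimal sketch of building and sending such a request, using only the standard library (the base URL and model id below are illustrative assumptions, not confirmed by this page; check the Fireworks docs for the real values):

```python
import json
import urllib.request

# Assumed endpoint and model id for illustration only.
BASE_URL = "https://api.fireworks.ai/inference/v1/embeddings"
MODEL_ID = "accounts/fireworks/models/qwen3-embedding-8b"

def build_embedding_request(texts, api_key):
    """Build an OpenAI-style embeddings request: (JSON payload, headers)."""
    payload = {"model": MODEL_ID, "input": texts}
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return payload, headers

def embed(texts, api_key):
    """Send the request and return one embedding vector per input text."""
    payload, headers = build_embedding_request(texts, api_key)
    req = urllib.request.Request(
        BASE_URL, data=json.dumps(payload).encode(), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [item["embedding"] for item in body["data"]]
```

The payload shape follows the common OpenAI embeddings convention (`model` plus an `input` list), which is what pay-per-token serverless endpoints generally expect.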
Qwen3 Embedding 8B is a text embedding model developed by Qwen (a sub-brand of Alibaba Group). It is part of the Qwen3 Embedding series and is optimized for multilingual embedding, retrieval, and reranking tasks.
The model is designed for text retrieval, code retrieval, text classification, text clustering, and reranking.
The model supports a context length of 32,000 tokens, and the full 32K window is usable as supported by the model architecture.
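Inputs longer than the context window need to be split before embedding. A minimal chunking sketch, using a rough characters-per-token heuristic (the ratio of 4 characters per token is an assumption for illustration, not an exact tokenizer):

```python
def chunk_text(text, max_tokens=32000, chars_per_token=4):
    """Split text into chunks that should fit within the context window.

    chars_per_token is a coarse heuristic; a real pipeline would count
    tokens with the model's own tokenizer instead.
    """
    max_chars = max_tokens * chars_per_token
    return [text[i : i + max_chars] for i in range(0, len(text), max_chars)]
```

Each chunk can then be embedded independently, with the results aggregated (e.g. averaged) if a single document vector is needed.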
The model supports 4-bit and 8-bit quantized formats.
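To illustrate what an 8-bit format trades away, here is a sketch of symmetric int8 quantization of an embedding vector. This is a generic illustration of the technique, not the model's actual quantization scheme:

```python
import numpy as np

def quantize_int8(vec):
    """Symmetric per-vector int8 quantization (illustrative sketch)."""
    v = np.asarray(vec, dtype=np.float32)
    scale = np.abs(v).max() / 127.0 or 1.0  # guard against all-zero vectors
    q = np.round(v / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximate float vector from int8 values and a scale."""
    return q.astype(np.float32) * scale
```

The round trip loses at most about half a quantization step per component, which is why 8-bit embeddings usually retain retrieval quality while quartering storage relative to float32.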
The model supports embedding dimensions from 32 to 4096, configurable by the user.
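A common way to use a configurable output dimension is to take the leading components of the full vector and re-normalize, in the Matryoshka style. A minimal numpy sketch (assuming the model's vectors support this kind of truncation, as the configurable 32-4096 range suggests):

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length."""
    t = np.asarray(vec, dtype=np.float32)[:dim]
    n = np.linalg.norm(t)
    return t / n if n > 0 else t
```

Re-normalizing keeps cosine similarity meaningful after truncation, letting one index trade accuracy for storage by choosing a smaller dimension at query time.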
Qwen notes that transformers versions earlier than 4.51.0 may cause compatibility errors.

Streaming and function calling are not supported.
The model has 8 billion parameters.
Fine-tuning is not supported on Fireworks for this model.
The model is licensed under the Apache 2.0 license, which permits commercial use.