

Nous Hermes 2 Mixtral 8x7B DPO

Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model, trained on top of the Mixtral 8x7B mixture-of-experts (MoE) LLM. It was trained on over 1,000,000 entries of primarily GPT-4-generated data, supplemented by other high-quality data from open datasets across the AI landscape, and achieves state-of-the-art performance on a variety of tasks.
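Below is a minimal sketch of querying the model through Fireworks' OpenAI-compatible chat completions endpoint. The exact model identifier ("accounts/fireworks/models/nous-hermes-2-mixtral-8x7b-dpo") is an assumption here; check the model page for the canonical id.

```python
import os
from openai import OpenAI

# Fireworks exposes an OpenAI-compatible API; point the standard client at it.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],  # your Fireworks API key
)

response = client.chat.completions.create(
    # Assumed model id; confirm against the model page.
    model="accounts/fireworks/models/nous-hermes-2-mixtral-8x7b-dpo",
    messages=[
        {"role": "system", "content": "You are Hermes, a helpful assistant."},
        {"role": "user", "content": "Explain what a mixture-of-experts model is."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```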


Fireworks Features

On-demand Deployment

On-demand deployments give you dedicated GPUs for Nous Hermes 2 Mixtral 8x7B DPO using Fireworks' reliable, high-performance system with no rate limits.
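Once a dedicated deployment exists, queries go through the same chat completions API with a deployment-qualified model identifier. The "#my-deployment-id" suffix shown below is an assumption about that format; consult Fireworks' deployment docs for the exact identifier your deployment receives.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    # Hypothetical deployment-qualified id: base model plus deployment suffix.
    model="accounts/fireworks/models/nous-hermes-2-mixtral-8x7b-dpo#my-deployment-id",
    messages=[{"role": "user", "content": "Hello from a dedicated GPU."}],
)
print(response.choices[0].message.content)
```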


Info

Provider: Nous
Model Type: LLM
Context Length: 32,768 tokens
Pricing: $0.50 per 1M tokens
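A quick back-of-the-envelope cost estimate at the rate above, assuming prompt and completion tokens are billed at the same $0.50 per 1M rate (the pricing line does not distinguish them):

```python
# Estimate the cost of a workload at $0.50 per 1M tokens.
PRICE_PER_MILLION = 0.50
tokens = 750_000 + 250_000  # e.g., 750k prompt tokens + 250k completion tokens
print(f"${tokens / 1_000_000 * PRICE_PER_MILLION:.2f}")  # -> $0.50
```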