Hugging Face integration

This documentation is a concise guide to integrating and using Fireworks.ai inference via the Hugging Face ecosystem.

Authentication and Billing

When using Fireworks.ai through Hugging Face, you have two options for authentication:

  • Direct Requests: Add your Fireworks.ai API key to your Hugging Face user account settings. In this mode, inference requests are sent directly to Fireworks.ai, and billing is handled by your Fireworks.ai account.
  • Routed Requests: If you don’t configure a Fireworks.ai API key, your requests are routed through Hugging Face and authenticated with a Hugging Face token. Billing for routed requests is applied to your Hugging Face account at standard provider API rates. You don’t need a Fireworks.ai account for this, just your HF one! Both modes are sketched in code below.
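
As a minimal sketch of the two modes (using the huggingface_hub client installed in the Usage Examples section below; the environment variable names are illustrative):

import os
from huggingface_hub import InferenceClient

# Direct requests: pass your Fireworks.ai API key; billing goes to your Fireworks.ai account.
direct_client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

# Routed requests: pass a Hugging Face token instead; billing goes to your HF account.
routed_client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["HF_TOKEN"],
)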

To add a Fireworks.ai API key to your Hugging Face settings, follow these steps:

  1. Go to your Hugging Face user account settings.
  2. Locate the “Inference Providers” section.
  3. Add your API key for Fireworks.ai (you can store keys for other providers there as well).
  4. Optionally, set your preferred provider order; this influences the display order in model widgets and code snippets.

You can search for all Fireworks.ai models on the Hub and try them out directly via the Model Page widget too.

Usage Examples

The examples below demonstrate how to interact with various models in Python, via both the huggingface_hub library and the OpenAI client.

First, ensure you have the huggingface_hub library installed (version 0.29.0 or later). Note the quotes, which stop the shell from treating >= as a redirection:

pip install "huggingface_hub>=0.29.0"

  1. Chat Completion (LLMs) with the Hugging Face Hub Library
from huggingface_hub import InferenceClient

# Initialize the InferenceClient with Fireworks.ai as the provider
client = InferenceClient(
    provider="fireworks-ai", 
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx"  # Replace with your API key (HF or custom)
)

# Define the chat messages
messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

# Generate a chat completion
completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",  
    messages=messages, 
    max_tokens=500
)

# Print the response
print(completion.choices[0].message)
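
The returned message is a structured object; if you only want the generated text, read its content field:

# Extract just the generated text from the structured message object.
print(completion.choices[0].message.content)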

You can swap this for any compatible LLM from Fireworks.ai; the full list of supported models is available on the Hub.
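
If you prefer to query that list programmatically, newer releases of huggingface_hub expose an inference provider filter on list_models (an assumption to verify against your installed version, since it landed after the 0.29.0 minimum above):

from huggingface_hub import HfApi

api = HfApi()

# List models on the Hub that are served by Fireworks.ai.
# Requires a recent huggingface_hub release with the inference_provider filter.
for model in api.list_models(inference_provider="fireworks-ai", limit=10):
    print(model.id)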

  2. Vision Language Models (VLMs) with the Hugging Face Hub Library
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-32B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this image in one sentence."
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    }
                }
            ]
        }
    ],
)

print(completion.choices[0].message)

As with LLMs, you can use any compatible VLM from Fireworks.ai; the same Hub listing covers them.
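
If your image is a local file rather than a URL, one common approach is to inline it as a base64 data URL in the same message format (a sketch reusing the client above; the file path is illustrative, and data URL support can vary by provider):

import base64

# Read a local file and embed it as a base64 data URL.
with open("statue.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-32B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)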

You can also call inference providers via the OpenAI Python client. You will need to set the base_url parameter when constructing the client and pass the model name in each call.

The easiest way to get started is to open a model’s page on the Hub and copy the ready-made snippet.

from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/fireworks-ai/inference/v1",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx",  # Fireworks.ai or Hugging Face API key
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

completion = client.chat.completions.create(
    model="<provider_specific_model_name>",  # copy the exact model ID from the model page snippet
    messages=messages,
    max_tokens=500,
)

print(completion.choices[0].message)
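
The same client also supports the standard OpenAI streaming API, which is handy when you want tokens as they arrive (a sketch reusing the client and messages above; it assumes the router forwards streamed responses):

# Stream the response chunk by chunk instead of waiting for the full completion.
stream = client.chat.completions.create(
    model="<provider_specific_model_name>",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content can be None on some chunks.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
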
  3. Text-to-Image Generation
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key=os.environ["HF_TOKEN"],
)

generated_image = client.text_to_image(
    "Bob Marley in the style of a painting by Johannes Vermeer",
    model="black-forest-labs/FLUX.1-schnell",
)

# text_to_image returns a PIL.Image.Image, which you can save or display.
generated_image.save("generated_image.png")

We’ll continue to add more models and more ways to try them out!