
Enabling Function Calling in DeepSeek v3: Bridging the Gap Between Text and Action
By Fireworks AI|2/14/2025
Large language models (LLMs) have revolutionized natural language processing by generating impressive text based on massive pretraining and strategic alignment with user preferences during post-training. However, their inherent limitation is that, while they excel at generating human-like language, they lack the ability to access or update real-world information on demand. This is where function (or tool) calling comes into play. By enabling LLMs to invoke external functions or APIs, we can dynamically extend their capabilities, making them not only great conversationalists but also powerful, interactive agents.
We are thrilled to announce that Fireworks AI API now supports function calling on top of the latest generation DeepSeek V3 model.
Function calling refers to the process by which an LLM detects that a user request requires external data or action and then produces a structured output (typically in JSON) that specifies which function to call along with the necessary arguments. For example, instead of simply generating text to answer "What is the weather in London?", an LLM equipped with function calling can output a JSON object that triggers a weather API call. Once the external tool returns the relevant data, the LLM integrates this information into its final response.
This paradigm is sometimes also called tool calling, and it fundamentally transforms LLMs from static knowledge generators into dynamic, interactive agents capable of real-world tasks.
At its core, function calling involves the following key steps:
Tool Specification and Prompting:
Developers define a set of external functions, each with a name, description, and a JSON schema for its parameters. For example, a weather retrieval function might be specified with parameters such as location and temperature unit. The LLM is then prompted with both the user query and the tool definitions.
By passing in the tool definitions as part of the prompt context, the model learns to generate structured calls when it identifies that a user query requires external data.
Detecting and Generating Function Calls:
When the LLM processes a user query, it decides whether to answer directly or issue a function call. If the latter is chosen, the model outputs a JSON string with the name of the function and the relevant arguments. This output does not execute the function; it merely indicates what external call should be made.
The ability to output a function call in a structured format is critical; it lets developers safely and reliably integrate external APIs into the LLM's workflow.
Function Execution and Feedback Loop:
An external system or middleware detects the structured function call, executes the specified function (e.g., calls a weather API), and retrieves the result. This result is then fed back into the conversation context for the LLM to generate a comprehensive answer. In many implementations, a second round of prompting uses both the original query and the function's output to produce the final response.
This two-step process (first generating the function call, then using the result to refine the final output) forms the backbone of interactive LLM systems.
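The execution-and-feedback step can be sketched offline as follows. Here `get_current_weather` is a hypothetical local stand-in for a real weather API, and the tool-call dict mirrors the structured output the model emits (note that `arguments` arrives as a JSON string, not a parsed object):

```python
import json

# Hypothetical local tool: a stand-in for a real weather API call.
def get_current_weather(location, unit="celsius"):
    return {"location": location, "temperature": 15, "unit": unit}

# Map tool names to implementations so the model's choice drives dispatch.
TOOLS = {"get_current_weather": get_current_weather}

def handle_tool_call(tool_call):
    """Execute the function named in a model tool call and build the
    'tool' message to append to the conversation for the second round."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    result = TOOLS[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# A tool call in the structured format the model emits.
call = {
    "id": "call_123",
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "arguments": '{"location": "London", "unit": "celsius"}',
    },
}
print(handle_tool_call(call)["content"])
```

Appending this tool message to the conversation and re-prompting the model yields the final natural-language answer.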
The ability to call functions extends LLMs' applicability into numerous domains:
Real-Time Data Retrieval:
LLMs can fetch up-to-date information such as weather forecasts, stock prices, or news updates, overcoming the limitations of static pretraining data.
Task Automation and Workflow Integration:
By invoking functions, LLMs can perform tasks like scheduling meetings, managing databases, or even controlling IoT devices, effectively operating as autonomous agents.
We are excited to announce that the Fireworks AI API now offers function calling capabilities integrated with the latest DeepSeek v3 model. This enhancement enables developers to create applications where the model can interact with external functions or APIs, thereby extending its capabilities beyond static responses.
import json
import requests

# Define available tools/functions that the model can use
# In this case, we have a single weather function that can fetch weather data
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Fetch the current weather in a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., London",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# API configuration
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
FW_API_KEY = "get your key from https://fireworks.ai/"
MODEL = "accounts/fireworks/models/deepseek-v3"

# Set up request headers with API key authentication
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {FW_API_KEY}",
}

# Define the conversation messages - in this case a simple weather query
messages = [{"role": "user", "content": "What is the weather like in London?"}]

# Prepare the request payload with the model, messages, and available tools
payload = {
    "model": MODEL,
    "messages": messages,
    "tools": tools,
}

# Send a POST request to the API and parse the JSON response
response = requests.post(API_URL, headers=headers, data=json.dumps(payload))
parsed_response = response.json()

# Print the first choice's message from the response with nice formatting
print(json.dumps(parsed_response["choices"][0]["message"], indent=2))

# Output:
#
# {
#   "role": "assistant",
#   "tool_calls": [
#     {
#       "index": 0,
#       "id": "call_5ZPGrqI8bEs7dgccRnnFHEZr",
#       "type": "function",
#       "function": {
#         "name": "get_current_weather",
#         "arguments": "{\"location\": \"London\", \"unit\": \"celsius\"}"
#       }
#     }
#   ]
# }
import json
import requests

# Define available tools/functions that the model can use
# In this case, we have a weather function that fetches current weather data
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Fetch the current weather in a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., London",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# API configuration
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
FW_API_KEY = "get your key from https://fireworks.ai/"
MODEL = "accounts/fireworks/models/deepseek-v3"

# Set up request headers with API key authentication
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {FW_API_KEY}",
}

# Define the conversation messages - in this case a simple weather query
messages = [{"role": "user", "content": "What is the weather like in London?"}]

# Prepare the request payload with streaming enabled
payload = {
    "model": MODEL,
    "messages": messages,
    "tools": tools,
    "stream": True,
}

# Send a POST request to the API with streaming enabled
response = requests.post(
    API_URL, headers=headers, data=json.dumps(payload), stream=True
)

# Dictionary to store accumulated function calls from the stream
function_calls = {}

# Process the streaming response line by line
for line in response.iter_lines():
    if line:
        # Decode and clean up the line
        line = line.decode("utf-8").strip()
        if line == "data: [DONE]":
            break
        if line.startswith("data: "):
            line = line[len("data: "):]
        # Parse the JSON chunk
        chunk = json.loads(line)
        if "choices" in chunk:
            delta = chunk["choices"][0].get("delta", {})
            # Handle tool/function calls in the response
            if "tool_calls" in delta and delta["tool_calls"]:
                print(f"Chunk: {delta['tool_calls']}")
                for tool_call in delta["tool_calls"]:
                    tool_id = tool_call["id"]
                    # Initialize a new function call entry if needed
                    if tool_id not in function_calls:
                        function_calls[tool_id] = {"name": "", "arguments": ""}
                    # Update function call information from the chunk
                    if "function" in tool_call:
                        if "name" in tool_call["function"]:
                            function_calls[tool_id]["name"] = tool_call["function"]["name"]
                        if "arguments" in tool_call["function"]:
                            function_calls[tool_id]["arguments"] += tool_call["function"]["arguments"]

# Print the final accumulated function calls
print("\nAll function calls:")
for tool_id, call_info in function_calls.items():
    print(f"\nFunction ID: {tool_id}")
    print(f"Function name: {call_info['name']}")
    print(f"Function arguments: {call_info['arguments']}")
# Output:
#
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': ''}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': '{"'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': 'location'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': '":"'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': 'London'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': '","'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': 'unit'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': '":"'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': 'c'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': 'elsius'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': '"}\n'}}]
# Chunk: [{'index': 0, 'id': 'call_RWQqHun4DfN1VVTg6qZYfeJ1', 'type': 'function', 'function': {'arguments': ''}}]
#
# All function calls:
#
# Function ID: call_RWQqHun4DfN1VVTg6qZYfeJ1
# Function name: get_current_weather
# Function arguments: {"location":"London","unit":"celsius"}
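Once the stream finishes, each accumulated `arguments` string is itself plain JSON, so it can be parsed and dispatched exactly like the non-streaming result. A minimal sketch, using a hard-coded dict in the same shape the streaming loop builds:

```python
import json

# Accumulated result in the same shape the streaming loop produces.
function_calls = {
    "call_RWQqHun4DfN1VVTg6qZYfeJ1": {
        "name": "get_current_weather",
        "arguments": '{"location":"London","unit":"celsius"}',
    }
}

for tool_id, call_info in function_calls.items():
    # The concatenated argument fragments form valid JSON; parse before dispatching.
    args = json.loads(call_info["arguments"])
    print(call_info["name"], args)
```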
DeepSeek has not released a complete official template for tool formatting in the DeepSeek v3 model. We did our best to deduce it from information available in the community, and we will keep updating the template as new details emerge.
One issue we discovered during testing is that the model is not great at multi-turn function calling. It performs best in scenarios where a single user message triggers (potentially multiple) function call(s).
The function calling feature is currently available only on the Serverless offering. If you want it activated for your On-Demand or Reserved instances, please reach out to us.
Find the detailed documentation for function calling here.
💡 Try out DeepSeek models on the Fireworks AI Model Library