Serverless · LLM · Chat
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
Llama 3 70B Instruct is available via Fireworks' serverless API, where you pay per token. There are several ways to call the Fireworks API, including Fireworks' Python client, the REST API, or OpenAI's Python client.
The example below shows how to make an API request using the raw REST API. See the Querying text models docs for details.
Generate a model response using the chat endpoint of llama-v3-70b-instruct. See the API reference for the full list of request parameters.
import json

import requests

url = "https://api.fireworks.ai/inference/v1/chat/completions"

# Request body: the model to query, sampling parameters, and the chat messages.
payload = {
    "model": "accounts/fireworks/models/llama-v3-70b-instruct",
    "max_tokens": 1024,
    "top_p": 1,
    "top_k": 40,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "temperature": 0.6,
    "messages": [
        {
            "role": "user",
            "content": "Hello, how are you?"
        }
    ]
}

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer <API_KEY>"  # replace <API_KEY> with your Fireworks API key
}

response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
print(response.text)
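As noted above, the same endpoint can also be reached through OpenAI's Python client, since Fireworks exposes an OpenAI-compatible API. The snippet below is a minimal sketch assuming the openai package (v1 or later) is installed; replace <API_KEY> with your Fireworks API key.

# Minimal sketch: same request via OpenAI's Python client, pointed at
# Fireworks' OpenAI-compatible base URL.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<API_KEY>",  # your Fireworks API key
)

completion = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=1024,
    temperature=0.6,
)
print(completion.choices[0].message.content)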
On-demand deployments allow you to use Llama 3 70B Instruct on dedicated GPUs with Fireworks' high-performance serving stack, offering high reliability and no rate limits.
See the On-demand deployments guide for details.
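A dedicated deployment is queried through the same chat completions API as the serverless model; the sketch below assumes only the "model" field changes, and uses a placeholder identifier. The exact identifier for your deployment is covered in the On-demand deployments guide.

# Sketch: querying a dedicated (on-demand) deployment. The request shape matches
# the serverless call above; "<DEPLOYMENT_MODEL_ID>" is a placeholder for your
# deployment's model identifier (see the On-demand deployments guide).
import json

import requests

url = "https://api.fireworks.ai/inference/v1/chat/completions"

payload = {
    "model": "<DEPLOYMENT_MODEL_ID>",  # placeholder, not a real identifier
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer <API_KEY>",
}

response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
print(response.text)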