Llama 3.2 90B Vision Instruct
Instruction-tuned image reasoning model with 90B parameters from Meta, optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The model can understand visual data such as charts and graphs, and bridges the gap between vision and language by generating text that describes image details. Note: this model is served experimentally as a serverless model. If you're deploying in production, be aware that Fireworks may undeploy the model with short notice.
Llama 3.2 90B Vision Instruct is available via Fireworks' serverless API, where you pay per token. There are several ways to call the Fireworks API, including Fireworks' Python client, the REST API, or OpenAI's Python client.
See below for ready-made example calls and a description of the raw REST API for making requests. See the Querying text models docs for details.
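As one of the access paths mentioned above, the raw REST API can be called directly over HTTPS. The sketch below, which assumes Fireworks' OpenAI-compatible chat completions endpoint and a placeholder image URL, builds the request body and only sends it if a `FIREWORKS_API_KEY` environment variable is set:

```python
import json
import os

# Hypothetical sketch of a raw REST call to the serverless chat completions
# endpoint. The payload shape follows Fireworks' OpenAI-compatible API;
# the image URL below is a placeholder.
URL = "https://api.fireworks.ai/inference/v1/chat/completions"

payload = {
    "model": "accounts/fireworks/models/llama-v3p2-90b-vision-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}

# Only send the request if an API key is configured.
api_key = os.environ.get("FIREWORKS_API_KEY")
if api_key:
    import urllib.request

    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

Because the request is billed per token, `max_tokens` caps the cost of a single response.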
Generate a model response using the image endpoint of llama-v3p2-90b-vision-instruct (API reference).
# pip install 'fireworks-ai'
import fireworks.client
from fireworks.client.image import ImageInference, Answer

# Initialize the ImageInference client
fireworks.client.api_key = "$API_KEY"
inference_client = ImageInference(
    model="accounts/fireworks/models/llama-v3p2-90b-vision-instruct"
)

# Generate an image using the text_to_image method
answer: Answer = inference_client.text_to_image(
    prompt="A beautiful sunset over the ocean",
    cfg_scale=7,
    height=1024,
    width=1024,
    sampler=None,
    steps=30,
    seed=0,
    safety_check=False,
    output_image_format="JPG",
    # Add additional parameters here
)

if answer.image is None:
    raise RuntimeError(f"No return image, {answer.finish_reason}")
else:
    answer.image.save("output.jpg")
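Since Llama 3.2 90B Vision Instruct is a vision-language chat model, image understanding requests (captioning, chart reading, visual Q&A) are typically sent to the chat completions endpoint with the image supplied as an `image_url` content part. A minimal sketch using the OpenAI Python client mentioned above, with a placeholder image URL and the API key read from the environment:

```python
import os

# Hypothetical sketch: asking the vision-instruct model about an image
# through the OpenAI-compatible chat completions endpoint. The image URL
# is a placeholder.
IMAGE_URL = "https://example.com/chart.png"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url", "image_url": {"url": IMAGE_URL}},
        ],
    }
]

# Only send the request if an API key is configured.
api_key = os.environ.get("FIREWORKS_API_KEY")
if api_key:
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.fireworks.ai/inference/v1",
        api_key=api_key,
    )
    response = client.chat.completions.create(
        model="accounts/fireworks/models/llama-v3p2-90b-vision-instruct",
        messages=messages,
    )
    print(response.choices[0].message.content)
```

The same `messages` structure works with multiple images by adding further `image_url` parts to the content list.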
On-demand deployments allow you to use Llama 3.2 90B Vision Instruct on dedicated GPUs backed by Fireworks' high-performance serving stack, with high reliability and no rate limits.
See the On-demand deployments guide for details.