FireOptimizer: Customizing latency and quality for your production inference workload
By Fireworks.ai | 8/30/2024
At Fireworks, we've always believed that off-the-shelf models need to be adapted to reach production-grade performance. Today, we're excited to introduce FireOptimizer, our adaptation engine designed to customize and enhance AI model performance for your unique use cases and workloads. We're also launching a new FireOptimizer feature, adaptive speculative execution, which delivers up to 3x latency improvements by automatically tailoring speculative execution to your specific data and needs.
Why It Matters: In today’s world, every millisecond counts. Whether you’re powering real-time customer interactions, processing large-scale data for intelligent search, or using AI to generate code, FireOptimizer simplifies the complex tuning work of optimizing latency and quality, ensuring your models are not just fast but customized to perform at their best for your unique scenario.
The Benefits of Adaptation:
Many developers are surprised by how much results can vary when serving the same model on the same hardware. For example, Llama 70B on eight GPUs in a volume-optimized setup can be 4x cheaper per token than Llama 70B on the same eight GPUs optimized for single-request speed. This variation occurs because text distributions and inference requirements can vary dramatically across workloads, and a one-size-fits-all approach often fails to deliver optimal performance.
Adaptation is necessary because multiple parts of the serving stack can be adjusted based on specific use cases, enabling organizations to maximize efficiency, quality, and cost-effectiveness. It involves a holistic approach that considers every layer of the deployment stack—hardware, model, and software—to ensure that all components are optimized to work together seamlessly. Adaptation is especially critical when building compound AI systems, where multiple models and processes interact, requiring precise alignment and optimization across the entire system to achieve peak performance.
FireOptimizer allows users to customize their LLM inference setup for the ideal balance of cost, quality, and performance. FireOptimizer enables improvements across three key layers of the inference stack: hardware, model, and software.
FireOptimizer employs automatic capabilities that both enable adaptation and extend traditional approaches to inference performance improvement. These techniques include customizable hardware mapping, customizable quantization, and adaptive speculative execution.
We've previously shared how customizable quantization navigates the quality/latency tradeoff curve. In this post, we'll focus specifically on adaptive speculative execution. Stay tuned for more deep dives and future releases.
FireOptimizer employs a new technique called adaptive speculative execution, which improves performance by tailoring speculative decoding to your specific workload. For context, speculative decoding is a technique that accelerates inference by parallelizing token verification. Typically, a large language model (LLM) generates tokens one at a time; speculative decoding changes this by using a smaller "draft" model to cheaply propose candidate token sequences, which the main LLM then verifies in a single parallel pass. Correct predictions are kept; incorrect ones are discarded.
Speculative decoding works because of two key ideas:
- Many tokens are easy to predict: a much smaller draft model often proposes exactly the token the large model would have generated.
- Verification is cheap: because LLM decoding is memory-bandwidth bound, the large model can verify several proposed tokens in one forward pass for roughly the cost of generating a single token.
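To make the algorithm concrete, here is a minimal sketch of the speculative decoding loop with greedy verification. The `draft_next` and `target_next` callables are stand-ins for the two models; this illustrates the general technique, not Fireworks' production implementation.

```python
from typing import Callable, List

NextToken = Callable[[List[int]], int]  # model: token context -> next token

def speculative_decode(
    target_next: NextToken,   # large target model
    draft_next: NextToken,    # small draft model
    prompt: List[int],
    max_new_tokens: int = 64,
    k: int = 4,               # tokens the draft proposes per step
) -> List[int]:
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        # 1) The draft model cheaply proposes k tokens autoregressively.
        proposal, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) The target model verifies the proposals. In a real engine this
        #    is a single batched forward pass; here it is simulated per position.
        accepted = 0
        for i, t in enumerate(proposal):
            if target_next(tokens + proposal[:i]) != t:
                break
            accepted += 1
        tokens.extend(proposal[:accepted])
        produced += accepted
        # 3) The target also emits one token itself (the correction on a miss),
        #    so every step makes progress even when the draft is wrong.
        if produced < max_new_tokens:
            tokens.append(target_next(tokens))
            produced += 1
    return tokens
```

With a well-aligned draft model, most proposals are accepted and each step yields several tokens for roughly one target-model pass; with a poorly aligned one, the loop degrades toward one token per pass plus wasted draft work.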
This approach can significantly reduce latency without compromising the response quality, as the LLM ensures the final output remains accurate. However, speculative decoding that uses generic draft models has limitations.
Why Generic Draft Models Fall Short:
In traditional speculative decoding, draft models are typically trained on generic data, such as publicly available conversation datasets. While this works for general use cases, the hit rate, and with it the efficiency gain, drops significantly in more specialized scenarios.
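Hit rate here is simply the fraction of positions where the draft model's prediction matches the target model's, measured over contexts drawn from the real workload. A minimal sketch, reusing the stand-in callables from the loop above:

```python
from typing import List

def hit_rate(target_next, draft_next, contexts: List[List[int]]) -> float:
    """Fraction of workload contexts where draft and target agree on the
    next token. `contexts` would be token sequences sampled from real traffic."""
    hits = sum(draft_next(ctx) == target_next(ctx) for ctx in contexts)
    return hits / len(contexts)
```

A draft model trained on generic chat data may agree often on chat-like contexts but rarely on, say, code edits or domain-specific text, and its hit rate on that traffic collapses accordingly.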
Adaptive Speculative Execution:
FireOptimizer improves upon this technique, unlocking greater potential with ease. FireOptimizer’s adaptive speculative execution is built on two key concepts:
- Profile-driven customization: draft models are aligned to data representative of your actual workload, so their token distribution matches what your application really generates.
- Automated tuning: Fireworks’ infrastructure handles training, evaluating, and deploying the draft model, so no manual effort is required.
The advantages of this technique are:
- Higher hit rates: a draft model aligned to your workload has far more of its proposals accepted, translating directly into lower latency.
- No quality compromise: the main model still verifies every token, so output quality is unchanged.
- Effortless operation: training, evaluation, and deployment of the draft model are automated.
By focusing on your specific needs, FireOptimizer ensures that speculative decoding delivers the maximum possible performance improvements.
Let’s take an example. We tested speculative decoding with a generic draft model for a customer with a highly specialized use case. The generic draft model’s token distribution was so different from the target use case that speculative decoding actually increased latency by 1.5x, given the draft model’s poor hit rate of 29%. With FireOptimizer, we achieved a 76% hit rate and a 2x speed increase. Overall, FireOptimizer’s adaptive speculative execution can achieve a 3x speedup over a generic draft model.
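To see why hit rate dominates the outcome, consider a simplified back-of-the-envelope cost model. The draft length `k` and relative draft cost `c` below are assumed values, and the model ignores real-world overheads, so it illustrates the trend rather than reproducing the exact figures above.

```python
def expected_speedup(p: float, k: int = 4, c: float = 0.1) -> float:
    """Toy speedup estimate for speculative decoding.

    p: per-token hit rate of the draft model (assumed independent per position)
    k: number of tokens the draft proposes per step (assumption)
    c: cost of one draft pass relative to one target pass (assumption)
    """
    # Expected tokens per step: the accepted prefix of the k proposals,
    # plus the one token the target model always emits itself.
    tokens_per_step = sum(p ** i for i in range(k + 1))
    # Cost per step: k draft passes plus one target verification pass.
    cost_per_step = k * c + 1.0
    # Baseline (no speculation) produces 1 token per target pass.
    return tokens_per_step / cost_per_step

for p in (0.29, 0.76):
    print(f"hit rate {p:.0%}: ~{expected_speedup(p):.1f}x")
```

Even this toy model shows a 29% hit rate erasing the benefit entirely while 76% roughly doubles throughput; in practice, scheduling and memory overheads can push the marginal case into a net slowdown, as in the 1.5x regression we observed.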
You might be wondering, “How do I get profile-driven customization?” or “How do I measure hit rate and compare the effectiveness of speculative execution?” At Fireworks, we prioritize user experience, so instead of requiring developers to manually train and evaluate draft models for speculative decoding, we’ve built infrastructure that automates the process, ensuring you get the maximum benefit effortlessly.
To use adaptive speculative execution, users simply provide us with data representative of their use case. This can be done in two ways:
- Opt in to have a sample of production traffic traced on your dedicated deployment.
- Provide a generated or curated dataset of representative prompts and completions.
Fireworks takes care of the details, so you don’t need to evaluate model hit rates or manage hyperparameters. You’ll simply be presented with an improved draft model that can be deployed for latency improvements.
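As a hypothetical illustration of the second option, the sketch below assembles a random sample of logged prompt/completion pairs into a JSONL file. The file layout and field names are illustrative assumptions, not Fireworks’ documented schema; your Fireworks contact can confirm the expected format.

```python
import json
import random

def sample_workload(records, n=1000, path="workload_sample.jsonl"):
    """Write a random sample of (prompt, completion) records to JSONL.

    `records` is an iterable of dicts with "prompt" and "completion" keys,
    e.g. logged production requests collected with user consent.
    """
    sample = random.sample(list(records), k=min(n, len(records)))
    with open(path, "w") as f:
        for r in sample:
            f.write(json.dumps({"prompt": r["prompt"],
                                "completion": r["completion"]}) + "\n")
```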
The ease of deployment ensures performance stays optimal over time. Like any tuned model, if your production data distribution shifts, the draft model will become less accurate and will need to be re-aligned. We named the product “FireOptimizer” for exactly this reason: you can easily re-sample traced or generated data to re-align your model and maintain maximal latency improvements.
Earlier this year, we rolled out adaptive speculative execution to a few beta users and we’ve seen significant improvements in latency.
Cursor
Cursor, an AI code editor company, saw a ~2x speed improvement from FireOptimizer. This latency reduction delivers a crucial user experience win: users no longer have to wait for code edits to be incorporated.
“We leverage speculative decoding for our custom models deployed on Fireworks.ai, which power the Fast Apply and Cursor Tab features. Thanks to speculative decoding, we saw up to a 2x reduction in generation latency. Enabling speculation was “one-click” simple and the speculators are automatically tuned to the text distribution our models produce.” Sualeh Asif, Cursor’s Co-founder
Hume
Hume builds AI systems that measure and optimize for human emotional well-being. One of Hume’s products is their Empathic Voice Interface (EVI), which is an API to understand not just speech, but also tone and emotion.
“We deploy custom models on Fireworks.ai to enable low-latency tool use through our API. For many applications that use Hume, latency is essential because tools are being called in parallel during real-time voice interactions. Fireworks unlocks a lot of applications by reducing latency with speculative decoding." Janet Ho, Hume COO
At Fireworks, data privacy and security are our top priorities. Adaptive speculative decoding is offered through explicit customer opt-in, ensuring that your data is never traced or logged without your consent. The data used for adaptive speculative decoding is solely for optimizing your deployment and is automatically deleted afterward. These practices are contractually enforced—feel free to reach out to a Fireworks representative for more details.
With FireOptimizer, Fireworks has pioneered systematic customization for inference in the industry. We believe the future of production AI lies in adaptation, where models and compound AI systems are precisely tailored to your unique use case, with hardware and software optimized for maximum efficiency. Adaptive speculative execution is the latest addition to Fireworks, complementing our lightning-fast inference engine and automated model tuning to deliver the best serving stack for your needs.
Adaptive speculative execution is available for users of enterprise reserved deployments at no additional cost. We’re also exploring bringing this feature to on-demand deployments on Fireworks’ public platform in the future. Meanwhile, other FireOptimizer features like customizable hardware mapping and customizable quantization are already accessible via self-service.
Contact us to enable FireOptimizer’s adaptive speculative execution on your enterprise instance of Fireworks, or get started with Fireworks at www.fireworks.ai. We can’t wait to see what you disrupt.