Fireworks AI Raises $250M Series C to Power the Future of Enterprise AI

Three years ago, before AI had taken over the world, my co-founders and I made a bet. We believed the future of AI wouldn’t be controlled by a handful of powerful foundation model labs, but distributed across thousands of enterprises that want to own and customize their own AI products.

That founding thesis has paid off.

Today, we’re announcing a $250 million Series C at a $4 billion valuation, co-led by Lightspeed Venture Partners, Index Ventures, and Evantic, with continued support from Sequoia Capital. This round, which includes primary and secondary funding, brings our total funding to over $327 million, with prior rounds led by Benchmark and Sequoia, and strategic participation from NVIDIA, AMD, MongoDB, and Databricks.

We raised this capital to meet surging enterprise demand for our production AI infrastructure and to cement our position as the market leader in AI inference.

From PyTorch to the AI Cloud

When we left PyTorch to build Fireworks in 2022, companies were just beginning to experiment with large language models. Enterprise AI initiatives often lingered in the pilot phase for months, if not years, tethered to closed APIs from a few foundation model providers.

We believed a different future was possible — one where open-source models would close the gap with their proprietary counterparts and where companies could own their AI, end-to-end. Drawing from our experience scaling AI infrastructure at Meta, we built Fireworks as an AI Cloud for enterprise developer teams — giving them the speed, cost efficiency, and control they need to build production-grade AI.

Scaling Enterprise AI in Production

That vision is now a reality.

Our customers, including Samsung, Uber, DoorDash, Notion, Shopify, and Upwork, are moving AI from pilot projects into the core of their business operations. They’re scaling to millions of users worldwide without compromising quality or cost.

Fireworks now powers over 10,000 companies (a 10× increase from our Series B) and serves hundreds of thousands of developers building customized AI applications. Our platform processes more than 10 trillion tokens per day, and our annualized revenue has surpassed $280 million.

The Fireworks platform offers access to hundreds of state-of-the-art open-source models across text, image, audio, and multimodal formats. We enable fine-tuning, reinforcement learning, and model evaluation — all built on our ultra-fast inference engine, delivering up to 40× faster performance and an 8× reduction in cost compared to other providers.

One-Size-Fits-One Inference

We attribute this growth to our belief in one-size-fits-one AI, not one-size-fits-all. Generic foundation models solve generic problems, because frontier labs can only train models on publicly available internet data. But the majority of valuable data lives inside enterprises and their applications: user interactions, domain-specific workflows, proprietary knowledge bases, and behavioral patterns.

Our platform helps developers tune their models for specific use cases using this enterprise- and application-specific data, making inference faster, cheaper, and higher quality. As customers interact with these customized applications, new data continuously feeds back into the model: when a user corrects an output, ignores a suggestion, or discovers a new way to solve a problem, that signal improves the model, which in turn improves the application. The product and model evolve together in a perpetual loop.

This is product-model co-design: a tight data feedback loop with continuous evaluation and reinforcement learning that allows enterprises to improve their AI applications over time, optimizing for cost, speed, and quality simultaneously. This is how enterprises build a competitive moat with AI, and it’s the foundation of artificial autonomous intelligence.
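To make the loop concrete, here is a minimal sketch of its data side — not Fireworks code, just an illustration with hypothetical names. It shows how logged user interactions (corrections, accepted outputs, ignored suggestions) could be turned into a tuning dataset, with a simple evaluation gate deciding whether a tuned candidate replaces the baseline model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    """One logged user interaction with the application (illustrative schema)."""
    prompt: str
    model_output: str
    user_correction: Optional[str] = None  # user edited the output
    accepted: bool = True                  # user kept the suggestion

def build_tuning_set(interactions):
    """Turn logged interactions into (prompt, target) training pairs.

    Corrections become the preferred targets; accepted outputs are
    reinforced as-is; ignored suggestions are dropped.
    """
    pairs = []
    for it in interactions:
        if it.user_correction is not None:
            pairs.append((it.prompt, it.user_correction))
        elif it.accepted:
            pairs.append((it.prompt, it.model_output))
    return pairs

def should_deploy(candidate_score: float, baseline_score: float,
                  min_gain: float = 0.01) -> bool:
    """Evaluation gate: ship the tuned model only if it beats the baseline."""
    return candidate_score - baseline_score >= min_gain

logs = [
    Interaction("summarize ticket 1", "draft A", user_correction="better A"),
    Interaction("summarize ticket 2", "draft B", accepted=True),
    Interaction("summarize ticket 3", "draft C", accepted=False),
]
dataset = build_tuning_set(logs)
print(dataset)  # two training pairs; the ignored output is dropped
print(should_deploy(0.84, 0.80))  # True: candidate clears the gate
```

In a real pipeline the tuning step between `build_tuning_set` and `should_deploy` would be fine-tuning or reinforcement learning on the collected pairs; the point here is only the shape of the loop — log, curate, tune, evaluate, deploy, repeat.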

What We’re Building Next

With this new round, we’re accelerating toward this vision. We’ll invest in three key areas:

  • Deepen Research in Tuning and Inference Alignment: Advancing the science of computational efficiency through breakthroughs in systems and algorithmic research.
  • Expand Our Product into a Comprehensive AI Creation Toolchain: Extending from model evaluation and reinforcement learning to end-to-end model lifecycle management — enabling the next generation of user experiences built through product-model co-design.
  • Scale Global Compute Infrastructure: Growing our computation footprint 3–4× over the next year while continuing to minimize cost per token and maximize system utilization.

Building the Infrastructure for the Greatest Business Opportunity in History

AI is the greatest business opportunity in history — but enterprises shouldn’t have to choose between a handful of tech giants to participate in it. They should be able to build, own, and control their AI infrastructure from the ground up.

This funding will help us meet that mission. We’ll expand our global operations, grow our engineering and GTM teams, and deepen our partnerships across the cloud, model, and hardware ecosystems.

At Fireworks, we believe that behind every magical AI experience, there’s Fireworks AI — where AI runs faster, and enterprises stay in control.