Bronto officially exited stealth mode recently when we made our first-ever public appearance at KubeCon Europe. Thankfully, the experience validated much of what we've been working toward. As we set up our deliberately minimalist booth (which, we're told, stood out refreshingly amid the visual noise), we weren't sure what to expect. What we found was a market that's not just ready for change—it's actively searching for it.
If there was one consistent reaction throughout the event, it was pure astonishment at Bronto's search performance. When we demonstrated searching through a week's worth of data (about 5TB from a CDN dataset) and returned results in under a second, attendees literally did double-takes.
"Just because you can't afford to store data doesn't mean it's not business-critical," became a rallying point in our conversations. Engineers nodded in recognition—they've been living with observability blind spots for too long, making impossible trade-offs between cost, coverage, and complexity.
One attendee put it perfectly: "You're doing in less than a second what takes our current solution 30 minutes—when it works at all."
We analyze terabytes of data in less than a second and process petabytes in seconds.
We deliver industry-leading search performance thanks to automatic encoding optimized for the most widely used log formats, scalable indexing built on the fly, and a custom search engine designed for high parallelism. Bronto uses a custom log-native execution engine powered by Bloom filters, function-level query evaluation, and a serverless compute model. By avoiding rigid indexing and heavy precomputation, and by using on-demand parallel execution, Bronto delivers consistent speed even when data volumes spike or formats shift.
Each value on the graph is the average of 100 query executions spread evenly across a 48-hour period; the queries were run against Fastly CDN logs.
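To make the Bloom-filter idea concrete, here is a minimal, hypothetical sketch of how a per-segment Bloom filter lets a search engine skip log segments that cannot contain a queried token, so only a small fraction of the data is ever scanned. This is an illustration of the general technique, not Bronto's implementation; the `LogSegment` class, hash scheme, and parameters are assumptions made for the example.

```python
import hashlib


class BloomFilter:
    """A simple Bloom filter: a bit array plus k hash positions per token.

    Membership tests can return false positives but never false negatives,
    so a segment whose filter rejects a token can be skipped safely.
    """

    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, token):
        # Derive k bit positions from a single SHA-256 digest.
        digest = hashlib.sha256(token.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, token):
        for pos in self._positions(token):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, token):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(token))


class LogSegment:
    """A batch of log lines with a Bloom filter built at ingest time."""

    def __init__(self, lines):
        self.lines = lines
        self.filter = BloomFilter()
        for line in lines:
            for token in line.split():
                self.filter.add(token)


def search(segments, token):
    """Scan only segments whose filters say the token might be present."""
    hits = []
    for segment in segments:
        if not segment.filter.might_contain(token):
            continue  # cheap rejection: skip the whole segment unread
        hits.extend(line for line in segment.lines if token in line)
    return hits


if __name__ == "__main__":
    segments = [
        LogSegment(["GET /index.html 200", "GET /logo.png 200"]),
        LogSegment(["POST /login 401", "POST /login 200"]),
    ]
    print(search(segments, "401"))  # -> ['POST /login 401']
```

Because the filters are tiny relative to the raw logs, they can be consulted for every segment in a time range before any data is read, and the surviving segments can be scanned in parallel on demand, which is one way query latency can stay flat as volumes grow.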
The most energizing aspect of KubeCon wasn't just showcasing our technology—it was confirming that we're solving a universally acknowledged problem. Our "Logging is Broken" banner didn't require explanation; it served as a conversation starter because everyone already agrees.
Engineers, SREs, and security professionals from companies of all sizes stopped by to share their frustrations with egregious logging costs, limited retention windows, and the hodgepodge of supplementary tools they're forced to maintain. Many are managing 5-8 different logging solutions simultaneously, creating unnecessary complexity and administrative overhead.
Our positioning as "the logging layer for the AI era" generated significant interest, with many attendees curious about what this actually entails. While everyone wants to talk about AI, the market is clearly in flux regarding what observability means in an AI/LLM-driven world.
When we explained our focus on scale, acknowledging that log data volume will only grow as AI systems become operationally critical, it resonated deeply. Vendors and analysts typically estimate that log volumes grow fairly predictably, at about 20% year over year, a figure borne out by industry experience, but that number will skyrocket as organizations embrace AI.
There's clearly a vacuum around AI observability that Bronto is well positioned to fill. There's no point in solving for highly trained models and observability agents if your log data sits in cold storage, was never ingested in the first place, or has already been deleted. Adding to the challenge, most organizations battle costs by running, on average, 5-8 other subpar logging tools, creating a disparate collection of "solutions" that's complex and cumbersome to manage and scale.
In an AI-first world, organizations need a single logging layer that provides fast, cost-effective access and analytical capabilities across their log data, over both immediate (real-time) and long-term (e.g., multi-year) time frames.
KubeCon confirmed what we've believed since starting Bronto: logging is fundamentally broken, and fixing it isn't just about incremental improvements—it requires reinventing logging from the ground up for today's needs and tomorrow's challenges.
Our mission remains clear: create a unified logging layer that gives you complete coverage at a fraction of the cost, with performance that transforms how you interact with your data.
If you'd like to see the technology that had KubeCon attendees stopping in their tracks, schedule a demo today.