Let's face it - logging is broken. Not just a little broken, but fundamentally misaligned with the needs of modern engineering teams. At our recent AWS Summit talk in London, Benoit Gaudin (our Head of Infrastructure) and I shared Bronto's vision for fixing this mess once and for all.
If you're running any significant infrastructure today, you're probably stuck in what we call the "3C flywheel of compromises":
Sound familiar? I thought so.
This isn't just inefficient — it's actively harmful. Your engineers are probably building parallel solutions just to get basic visibility because your main tool is too limited, too slow, or too damn expensive.
Here's the thing: logs aren't just some compliance checkbox anymore. They're your operational ground truth in the AI era.
They feed your LLMs. They power your agents. They're your audit trail, your RAG source, your behavioral training set. And one log message from an LLM-based system might contain 50-100 nested events in a single payload.
Try scaling that with a solution built before the separation of compute and storage was even a thing.
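To make that concrete, here is a hedged sketch of what such a message can look like: one log payload from an agent run wrapping many nested events. The field names and event types are illustrative placeholders, not a real schema.

```python
# Illustrative example of a single log message from an LLM-based system:
# one payload carrying many nested events. All names here are hypothetical.
payload = {
    "timestamp": "2024-06-01T12:00:00Z",
    "trace_id": "abc123",
    "events": [
        {"type": "prompt", "tokens": 412},
        {"type": "tool_call", "name": "search", "latency_ms": 84},
        {"type": "tool_result", "bytes": 2048},
        {"type": "completion", "tokens": 96},
    ],
}

def count_events(message: dict) -> int:
    """Count the nested events carried inside one log message."""
    return len(message.get("events", []))

print(count_events(payload))  # one message, several events to index and query
```

A real agent trace would carry dozens of these nested entries, each of which you want to filter and aggregate on individually.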
We built Bronto to tackle this head-on with three non-negotiable capabilities:
Our platform is built natively on AWS (S3, Lambda, DynamoDB), but engineered so you don't have to deal with the complexity of pipelines, pre-processing, or glue code.
Benoit went deep on our architecture during the talk, and I want to share that because it's central to how we deliver on our promises:
Our ingestion layer accepts data from standard sources - the OpenTelemetry Collector, Fluentd, Fluent Bit - through HTTP endpoints, with AWS load balancers and EC2 instances doing the heavy lifting. We buffer through Kafka (AWS MSK) like many systems, but then we diverge from the pack.
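The shape of that ingestion path is simple to sketch: a shipper batches records and POSTs them to an HTTP endpoint. The URL, auth header, and NDJSON framing below are hypothetical placeholders standing in for whatever a real collector would be configured with, not Bronto's actual API.

```python
import json
import urllib.request

# Sketch of shipping a log batch over HTTP, the way an OTel collector or
# Fluent Bit output would. Endpoint, auth scheme, and content type are
# illustrative placeholders, not a real ingestion API.
def ship_batch(endpoint: str, api_key: str, records: list[dict]) -> urllib.request.Request:
    body = "\n".join(json.dumps(r) for r in records).encode()  # NDJSON batch
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/x-ndjson",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
        method="POST",
    )

req = ship_batch(
    "https://ingest.example.com/v1/logs",  # hypothetical endpoint
    "API_KEY",
    [{"level": "info", "msg": "hello"}],
)
# A caller would hand `req` to urllib.request.urlopen() against a live endpoint.
```

The point is that nothing exotic sits in front of the buffer: standard shippers, standard HTTP, then Kafka.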
Instead of traditional approaches, we process data from Kafka and write to S3 in a proprietary format that leverages techniques from data analytics: data partitioning, Bloom filters, predicate pushdown, compression, and columnar formats. Our metadata lives in DynamoDB for speed.
The real magic happens with search. When you query through our UI or API, we're not just scanning indexes - we're launching Lambda functions in parallel that process data directly from S3. This is crucial because it means we don't have to overprovision for big queries. We scale horizontally on demand and only pay while those functions run.
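The query pattern described above is a classic scatter-gather: fan the predicate out to many workers at once, each scanning one partition, then merge the partial results. The sketch below stands in threads and an in-memory dict for the real Lambda functions and S3 partitions; everything named here is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter-gather query sketch: parallel workers (Lambda functions reading
# S3 in the real system) each scan one partition, results are merged.
# The partition data here is an in-memory stand-in.
PARTITIONS = {
    "2024-06-01/hour=00": [{"status": 200}, {"status": 500}],
    "2024-06-01/hour=01": [{"status": 500}, {"status": 404}],
    "2024-06-01/hour=02": [{"status": 200}],
}

def scan_partition(name: str, predicate) -> list[dict]:
    """Stand-in for one worker scanning one partition for matching rows."""
    return [row for row in PARTITIONS[name] if predicate(row)]

def query(predicate) -> list[dict]:
    # Fan out one task per partition, then gather and merge the results.
    with ThreadPoolExecutor(max_workers=len(PARTITIONS)) as pool:
        futures = [pool.submit(scan_partition, name, predicate) for name in PARTITIONS]
        return [row for f in futures for row in f.result()]

errors = query(lambda row: row["status"] >= 500)
print(len(errors))  # matching rows gathered from all partitions
```

Because each worker only lives for the duration of its scan, capacity scales with the query rather than sitting idle between queries - which is the economic point of the architecture.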
This architecture is what enables both our performance (subsecond on terabytes, seconds on petabytes) and our pricing model. We're not running expensive clusters 24/7 - we're using cloud resources exactly when and where they're needed.
Don't just take my word for it. Let me tell you about a couple of teams that have transformed their logging approach with Bronto (I won’t name them, but you’ll get the picture).
This team runs a massive content delivery and experience platform, serving APIs behind a global CDN for websites, mobile apps, and e-commerce systems. Every request hits their API with a unique key, and their team needs to trace errors, group by status codes, and export logs to customers.
Their exact words? "Bronto changed our lives." A logging tool. Actually improving engineers' lives. That's practically unheard of.
This company might look a lot like yours - running a suite of SaaS tools across distributed cloud services and product lines.
They went from managing logs to actually using them.
Here's the truth: your log data is massively undervalued — not because it lacks signal, but because your current tooling hides that signal behind cost barriers, friction, and compromises.
Logs used to be a liability. With the right approach, they can be your secret weapon.
We're building Bronto to be for logging what Dyson was for vacuum cleaners, what the iPhone was for smartphones, and what Tesla was for electric cars — a complete reimagining of what's possible when you refuse to accept the status quo.
After all, when was the last time your logging tool made your life better instead of worse?