We’re Noel and Trevor, co-founders and co-CEOs of Bronto, where we are on a mission to reinvent logging from end to end for the AI era.
We have been building SMB-, mid-market- and enterprise-facing technology companies in both the US and Europe from the ground up for over 20 years now. We’ve spanned a very wide array of categories and technologies, from email to logs to eCommerce analytics to voice AI to customer intelligence to digital healthcare, search and more.
We’ve been very fortunate to work with incredible people and build amazing teams – many of whom we’ve worked with again and again on multiple businesses. We’ve also been very fortunate to originate, develop, scale and sell our companies to the likes of Rapid7, Apple, Publicis and many more of the world’s largest technology companies.
On our journey to date we have been founders, co-founders, investors (both private and institutional VC), dirt-road CEOs and chairmen (both executive and non-executive), and very often we’ve worn multiple hats at the same time. To date, we have always focused on creating enterprise value of up to US$300m, building companies from zero revenue to $30m+, and we’ve always been exit-focused from the start. Until now, that is.
Our team has 150+ years of log-domain expertise: this is our fourth time building a logging platform. And along the way we’ve had lots of industry firsts:
Our logging journey started in the mid-2000s, when we built an on-prem logging platform for IBM as part of our postdoctoral research. We then started Logentries in 2010, which attracted thousands of SMB and mid-market customers before being acquired by NASDAQ-listed cybersecurity leader Rapid7 in 2015. From 2016 to 2022 we built our third logging platform as part of Rapid7’s InsightIDR security platform, scaling to thousands of mid-market and enterprise customers on a petabyte-scale technology that was ultra efficient and ultra reliable. InsightIDR grew from being a brand-new product when we joined to becoming the engine that powered a US$5 billion business at its peak.
In late 2023, when we started to look closely again at the observability space, a few things immediately felt very off.
As we started to dig deeper, we were astonished to see that logging alone was in a much worse state than when we started Logentries a decade earlier, and every tech organization seemed to be afflicted by this in some way. Months and months of diligence, including conversations with mid-market and enterprise users and customers across all industries and geographies, confirmed everything we believed about how companies were logging in 2024: it was fundamentally broken, and nobody was addressing it at a fundamental level.
Bronto was thus born. We are now on a mission to build the logging layer for the AI era, which will involve fundamentally reinventing logging from end to end. We envisage this being a 10-20 year journey, but as a team we are motivated by the challenge and committed to seeing it through. As a founding team we are at a stage in our lives where we are motivated to do it right and take the long view. We are super excited for this journey.
By definition, a log is a record of a specific system event. Everything that happens on a system can be recorded in the system’s log data. Back in the day, we used to talk about logs as akin to “CCTV for your systems”, which we think still holds somewhat true today. Not unlike CCTV, which has become much higher fidelity with the advent of HD cameras, logs have become richer and richer (thanks to OTel and wider log events), making them more useful than ever.
Logs are the essential foundation of observability:
Logging, and observability in general, are fundamentally broken. This is a universal problem that every tech organisation is grappling with today.
Logging is shockingly expensive. By default, most organisations go with one of the well-known logging/observability vendors (think Splunk, New Relic, Datadog etc.).
But the high costs of such platforms mean they have to tightly manage how much data they can ‘fit’ into them. This quickly creates a coverage problem: organizations cut retention down to as little as 3, 7 or 15 days, and high-volume logs are simply too expensive to even consider capturing.
Faced with this lack of coverage and the resulting black spots, organisations turn to sub-par logging solutions that look cheaper on the surface but, once total cost of ownership is baked in, can work out even more expensive than the big vendors whose egregious pricing they are trying to solve for. Typically, companies use some of the cloud platforms’ logging tools with sub-optimal experiences (CloudWatch, Google Cloud Logging, Azure Monitor, etc.), or start to roll their own using open source (ELK, ClickHouse, etc.), as well as sending data to object storage and using tools like Athena when analysis is required. What results is a hodgepodge of logging infrastructure that is not fit for purpose, creates a painful management overhead and introduces needless complexity. In our experience, the internal logging infrastructure is usually hacked together from at least 5, and quite often over 7, different solutions and technologies.
- Cost: pricing of logging solutions today (Datadog, Splunk, New Relic, etc.) was formulated as part of business models created before the separation of compute and storage.
- Coverage: egregious costs mean companies must be highly selective about which logs to ingest, which to retain and for how long, resulting in very scant log coverage.
- Complexity: in an attempt to combat the coverage compromise, companies use on average 5-8 other subpar logging solutions, creating a disparate hodgepodge of ‘solutions’ that is complex and cumbersome to manage.
The immediate problem is a trifecta of high costs, lack of coverage and complexity. We call this the flywheel of compromises that every organization today is faced with when it comes to logging.
The wider problem however is that organizations and end users have come to accept a dysfunctional end to end logging experience - even with 'best of breed' providers.
Configuration is complex across the board:
- deploying vendor-specific agents and setting up grok parsers;
- writing rules to remove PII (rules that commonly have gaps);
- wrestling with regex and vendor-specific query languages for search and analytics;
- managing complex retention policies, hot vs cold data, and the archiving and rehydration of data (and, again, managing archive retention policies with the cloud provider);
- interacting via a 15+ year old user interface;
- opaque billing and the inability to understand what and who are driving costs and data volumes;
- 15-year-old pricing and business models… and so on.
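To make the PII-rule gap concrete, here is a minimal sketch of the kind of hand-written redaction rule described above. The pattern and log lines are hypothetical illustrations, not any vendor’s actual rules: the regex catches the one format its author thought of and silently passes everything else.

```python
import re

# A naive, hand-written PII rule: it redacts plain email
# addresses and nothing else.
EMAIL = re.compile(r"[\w.]+@[\w.]+\.\w+")

def scrub(line: str) -> str:
    """Redact email addresses from a log line."""
    return EMAIL.sub("<redacted>", line)

print(scrub("login ok for alice@example.com"))
# The rule has gaps: a card number sails straight through untouched.
print(scrub("payment card=4111-1111-1111-1111 approved"))
```

Teams end up maintaining dozens of rules like this per log source, and every new log format reopens the gap.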
Right now logging is like the vacuum cleaner industry before Dyson, electric cars before Tesla, “smart” phones before the iPhone.
Centralised logging entered the mainstream with the rise of the cloud and SaaS. It had been there for security/compliance a little earlier driven by the likes of Sarbanes-Oxley legislation and regulation (for more on this here’s a great YouTube clip on the history of logging from Raffael Marty, one of the founders of Loggly).
Vendors like Loggly, Logentries, Papertrail, Sumo Logic etc. popped up to provide the first cloud-based centralized logging services. These services charged on a per-GB ingestion and storage model for a given retention period. For example, at the time Logentries came in around $1.50 per GB for 30 days’ retention. End users could now log into a centralized service to see all their cloud logs in one place… voilà!
For vendors this was a great business. Logging platforms are a lot like bank accounts: as an end user you don’t want to go through all the hassle of switching providers unless you really have to. The business also has built-in growth because log volumes tend to grow significantly year on year. Even if you don’t add new customers, your existing base can grow your revenue nicely; at Logentries we used to grow significantly month over month from our existing base alone.
Not surprisingly, vendors prioritised their “innovation” efforts on “higher value” features like complex query, alerting and dashboard capabilities (joins, lookup tables, machine-learning-based alerting etc.) that could be used to justify customers’ price tags growing year on year. These capabilities were great demo-ware but rarely used. This became the dirty secret of logging vendors: innovation on behalf of the vendor, not innovation on behalf of the customer.
This practice continues today, along with the same 15-year-old business model where vendors charge per GB ingested and stored.
For example, Kevin Lin’s superb analysis of logging vendor costs shows the per-GB price of ‘best of breed’ vendors coming in between $4 and $5 per GB for only 14 days’ retention. Furthermore, with the arrival of containers and the likes of Kubernetes, log volumes have exploded. So not only are volumes increasing significantly year on year, but the price per GB has also continued to increase, even as cloud costs have decreased and innovation (e.g. the separation of storage and compute) has significantly lowered the cost to the vendor.
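To make the compounding concrete, here is a back-of-the-envelope sketch of how a flat per-GB ingest model plays out as volumes grow. The figures are illustrative assumptions, not any vendor’s actual price list:

```python
def annual_ingest_bill(gb_per_day: float, price_per_gb: float,
                       yearly_growth: float, years: int) -> list[float]:
    """Project the yearly bill under flat per-GB pricing while volume compounds."""
    return [gb_per_day * (1 + yearly_growth) ** year * 365 * price_per_gb
            for year in range(years)]

# Assumed: 100 GB/day at $4.50/GB, with log volume growing 35% a year
bills = annual_ingest_bill(100, 4.50, 0.35, years=3)
for year, bill in enumerate(bills, start=1):
    print(f"year {year}: ${bill:,.0f}")
```

Even with zero new customers or services, the bill climbs simply because volume does: exactly the built-in growth that makes this model so attractive to vendors.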
Vendors have tried to normalise this by claiming it is best practice to spend up to 30% of your total infrastructure costs on observability. We think that's like going out and spending $200k on a new Porsche and then being asked to spend another $60k on the dashboard…wtf?!
As a result, vendors have frankly been making out like bandits. Datadog’s logging business, which is just 6 years old, hit $600m in revenue at the end of 2023 and now accounts for 30% of total company revenues. At its current growth rate it will be closer to $1bn in revenue by the end of the year, and it is one of the main pillars of Datadog’s ~$40bn market cap!
Vendors have been innovating to grow revenue not to solve customer problems. The customer problem today is a cost, coverage and complexity problem. This is not solved by adding more 'features'. Spending 30% of your infra costs on observability is out of balance. When things fall out of balance however, they tend to get corrected over time.
With the arrival of Gen AI and the agentic world it is ushering in, this problem is set to be exacerbated, as log volumes will increase yet again (and likely significantly). Yet log data will be even more relevant as ‘service as software’ becomes a reality in fields like DevOps, development and support, where you will be able to spin up 1,000 DevOps agents during an outage to trawl your logs in different ways and identify the issue.
To be fair, new entrants to this space have identified the problem and are coming in with lower pricing models, more closely aligned with a world where one can separate storage and compute and use much more efficient indexing (e.g. Bloom filters) and storage (e.g. columnar stores). However, they tend to be mere iterations of the incumbents (cheaper logs, traces and metrics), and they are falling into many of the same traps of the past: solutions built on open-source databases that were not designed for logging at scale. In the 2010s, many used Elasticsearch and had horror shows when it came to scaling clusters, rebalancing data and so on. Today, many have decided to run on the likes of ClickHouse, which is a super technology and an easy way to get started for logs whose structure you can control (the team at Shopify have used this very effectively to build their own internal logging infrastructure). But again, it was not designed from the ground up for logging vendors managing thousands of clients’ log data, and it in turn runs into issues at scale when you are building a logging platform where you cannot control the shape or volume of logs and are designing for beyond-petabyte scale.
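For readers unfamiliar with the indexing technique mentioned above: a Bloom filter is a tiny probabilistic membership structure that can answer “definitely not here” or “probably here”, never a false negative. A log engine can keep one per chunk of data and skip entire chunks whose filter rules out the search term. A minimal sketch, purely for illustration and not any vendor’s actual implementation:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: answers 'might contain' with no false negatives."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into a single int

    def _positions(self, term: str):
        # Derive num_hashes independent bit positions from the term.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, term: str) -> None:
        for pos in self._positions(term):
            self.bits |= 1 << pos

    def might_contain(self, term: str) -> bool:
        # False => definitely absent, so the whole chunk can be skipped.
        return all(self.bits >> pos & 1 for pos in self._positions(term))

# Index the tokens of one chunk of logs
chunk_index = BloomFilter()
for token in ["error", "timeout", "user=42"]:
    chunk_index.add(token)
```

The pay-off is that the filter is a few kilobytes regardless of chunk size, so a search over petabytes only touches the chunks that might match.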
At Bronto, we think there’s a better way. We imagine, and are building for, a world where…
Imagine a single logging layer within your organisation that “just works”…
In the Fall of 2024, Bronto entered the market with a best-of-breed logging solution. We wanted to provide a Datadog-like experience and Splunk-like performance, but at 10% of the cost AND with 12-month default retention.
We focused on delivering core capabilities and cut out the feature bloat that nobody uses. We talked to dozens of professionals who use logs in their daily work and asked what features they actually needed to do their job, which let us eliminate a lot of the bloatware that incumbents provide. Proving the point, we’ve barely had a feature request for anything beyond these core capabilities.
Bronto today immediately solves the cost, coverage and complexity issues for our customers, as their experience to date has proven out. We’ve had hundreds of conversations with CTOs, VPs/Directors of Engineering, VPs/Directors of Infrastructure, security team leads and managers, as well as engineers, SREs and DevOps folks. The most common phrase we hear after introducing the problem and Bronto’s solution and approach is “Every single word resonates”, or some version of the same. As a result we are onboarding customers at pace and continuing to learn, iterate… and repeat.
Interestingly, we often comment after giving a demo: “Our demo isn’t that impressive, is it? We simply do the basics of providing fast search across petabytes of data, easy-to-configure alerting, and intuitive, super-fast dashboards. Most of the magic is actually under the hood.”
But customers are telling us that is exactly what they need. A single place for all their logs that just works and is not crazy expensive. Where their team can easily search over months of data, set up alerts and dashboards, and integrate with our other tools. They say they don’t need all the complex capabilities that other vendors provide.
While v1.0 of Bronto is resonating with customers, we believe that today’s best of breed is nowhere near good enough. To provide a single logging layer for every organization that is fit for the AI era, logging needs to be reinvented from end to end.
Starting at the beginning, we have already built capabilities like auto-parsing of log data using Gen AI (removing the need for hand-written parsers), as well as log hygiene capabilities and automatic volume control (to prevent uncontrolled overages). We have also “innovated” by building a really intuitive usage explorer that shows exactly which logs are driving volumes, which teams are responsible for them, who in your organization is searching them, and what the search volume and latency are. Ultimately these features show you where cost is coming from and whether, and where, value is being attained.
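The core idea behind such a usage explorer can be sketched in a few lines: attribute ingest volume to an owning team so that cost conversations have a first-class answer. The records, team names and sources below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical ingest records: (log_source, owning_team, bytes_ingested)
records = [
    ("api-gateway", "platform", 120_000_000),
    ("checkout",    "payments",  40_000_000),
    ("api-gateway", "platform",  80_000_000),
    ("auth",        "identity",  15_000_000),
]

volume_by_team: dict[str, int] = defaultdict(int)
for _source, team, size in records:
    volume_by_team[team] += size

# Rank teams by the volume (and therefore the cost) they drive
for team, total in sorted(volume_by_team.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {total / 1e6:.0f} MB")
```

The real product adds search volume and latency per source on top of this, but even the simple roll-up above answers the question incumbents make so hard: who is driving the bill?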
It's hard to believe that understanding log volumes, how they relate to cost and how you are deriving value from the platform could be considered innovative in 2025. But logging platforms today are stuck in the old world of innovating for themselves rather than the customer and they have intentionally made understanding what is driving costs really difficult. It’s actually nearly impossible in some cases.
Understanding your log data is key, in terms of volumes and where they come from, but also in terms of what is valuable and what is not. In fact, most organisations’ logs today are typically a mess. A common refrain from our customers before onboarding is: “Hey, we are excited to try Bronto… but we just need a few weeks to clean up our logs first.” Our view is that logging providers need to handle this on behalf of their customers: provide the ability to clean and structure logs on the way in, and go from a world where logs are seen as high volume but low value to a world where logs are high volume and high value.
“Our mission is simple. Solve the problems others ignore.” (James Dyson)
After years of working on search performance and reducing search latencies, our chief architect and longstanding team member Dr. David Tracey came up with the following “law”: the faster you make your log search, the more data your customers will search. We called it Tracey’s Law. It was an inside joke at first, but not only does it continue to hold to this day, it is becoming more and more relevant and even more pronounced.
We’ve since also noticed that the faster we make our log search and the longer a customer’s retention, the more data our customers search.
And further still: the faster the search, the longer the retention and the cleaner the log data, the more data our customers search.
We envisage a world where companies' log data lives in one place, is super clean and structured, is easy to access, search and understand and whereby organisations can become ‘log-first’ companies again.
Integral to this journey we will have a maniacal focus on:
1 - Driving down costs
2 - Maintaining ease of use and elegance
3 - Driving real business value from logs
We call this our flywheel of customer-first innovation.
Thank you for reading this far. If any of this resonates, please book a demo and come join us on our journey to reinvent logging.
Noel & Trev