In 2025, AI-driven software development is rewriting the rules of enterprise performance. Few leaders stand at the intersection of engineering precision and strategic innovation the way Jeremy Burton, CEO of Observe, does. With a storied career spanning industry giants such as Dell, EMC, Oracle, and Symantec, Jeremy has continually shaped how technology companies think about scale, reliability, and data-driven growth.
Under his leadership, Observe is pioneering the next evolution of Observability powered by AI, where human intuition meets machine intelligence to transform how software is built, deployed, and maintained. From developing AI-SRE agents that accelerate troubleshooting by up to 10x to introducing o11y.ai, a new frontier for AI-native developers, Jeremy’s vision is reshaping how enterprises approach resilience, cost optimization, and engineering speed.
In this exclusive AI Technology Insights Top Voice Interview, Jeremy shares how AI observability is driving a new wave of enterprise reliability, the ethical dimensions of AI adoption, and what the rise of “superhuman” productivity will mean for the workforce of 2028.
The interview coincides with Observe announcing the availability of two new AI-powered agents, following the company’s recent $156M Series C funding round. This milestone offers an early look at what’s next for the fast-growing observability leader.
As infrastructure grows increasingly complex and telemetry data surges, Site Reliability Engineers (SREs) face constant pressure to maintain resilience while keeping costs under control. At the same time, AI-driven code generation accelerates software delivery, shifting the bottleneck toward operations and reliability.
Observe’s new AI agents address these challenges head-on—enhancing engineering productivity through intelligent incident investigation, automated remediation, and faster delivery of production-ready code.
Here’s the full interview.
AI Technology Insights (AIT): Hi Jeremy, welcome to the AI Tech Insights Top Voice Interview Series! To begin, please share a bit about your role at Observe and the career journey that led you to this position.
Jeremy Burton: Thanks for having me. I’ve spent my career building and scaling technology companies across enterprise software. Currently, I’m CEO of Observe and have been here almost from the very beginning; only the founding engineers were around when I joined, so it’s been quite a journey. We’re on a mission to transform software development through Observability, which is really kicking into high gear with the emergence of AI.
Before Observe, I was part of the mega-merger between Dell and EMC. I ran all the engineering, product & marketing teams at EMC and then, post-merger, ran Marketing & Corporate Development for Dell. I’ve spent my career moving back and forth between Engineering / GM roles (Oracle, Veritas, Symantec, EMC) and Marketing roles (Oracle, Veritas, EMC, Dell), which has really helped me develop the broad perspective required to be an effective CEO.
AIT: Jeremy, Observe has redefined observability with AI SRE and o11y.ai. How do you envision these agents reshaping the balance between speed, reliability, and cost for enterprise engineering teams in 2026 and beyond?
Jeremy Burton: When we say we want to transform software development through Observability, what we’re really saying is that we want engineers to ship more reliable code, more often: basically, go faster and break as little as possible. At the same time, we think Observability costs in the industry are outrageous, so we’d like to enable teams to execute on this transformation without breaking the bank. While we’ve always had an amazing cost-value proposition, the introduction of AI – in the form of agents – is going to change the game for troubleshooting speed in 2026.
In early customer deployments of our AI-SRE agent we are seeing 5-10x improvements in speed of troubleshooting. How do we do it? First of all, we eliminate the need for SRE teams to learn anything about Observe by giving them a familiar chat interface, so they can use natural language to ask questions. The magic, though, is something we call the ‘knowledge graph’ – something unique to Observe. The knowledge graph adds structure, semantics, and relationships to the mass of machine-generated telemetry that Observe ingests. This is critical for the AI-SRE agent to actually understand what the user is asking and return accurate results. The AI-SRE analyzes all manner of errors and issues, returns the appropriate data to the SRE, and suggests next steps. It also embeds context-sensitive links, so at any point the SRE can jump into Observe and continue their analysis, making it the ultimate on-ramp for any investigation.
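Observe has not published the agent’s internals, so purely as an illustration of the flow Jeremy describes – question in, graph context retrieved, model reasoning, next steps and deep links out – here is a minimal TypeScript sketch. Every type, function, and URL below is hypothetical, not Observe’s actual API:

```typescript
// Hypothetical sketch of an AI-SRE-style request flow.

interface GraphContext {
  entities: string[];     // e.g. ["service:checkout", "pod:checkout-7f9c"]
  recentErrors: string[]; // error summaries pulled from telemetry
}

interface SreAnswer {
  summary: string;     // what the agent thinks is going on
  nextSteps: string[]; // suggested follow-up actions
  deepLinks: string[]; // context-sensitive links back into the product
}

// Stub: a real system would query the knowledge graph here.
async function lookupGraphContext(question: string): Promise<GraphContext> {
  return { entities: ["service:checkout"], recentErrors: ["HTTP 503 spike"] };
}

// Stub: a real system would call an LLM here.
async function askModel(prompt: string): Promise<string> {
  return `Draft analysis for: ${prompt.slice(0, 60)}...`;
}

async function investigate(question: string): Promise<SreAnswer> {
  // 1. Ground the natural-language question in graph context so the
  //    model reasons over real entities, not just raw text.
  const ctx = await lookupGraphContext(question);

  // 2. Ask the model to explain the errors seen in that context.
  const summary = await askModel(
    `Question: ${question}\n` +
      `Entities: ${ctx.entities.join(", ")}\n` +
      `Recent errors: ${ctx.recentErrors.join("; ")}\n` +
      `Explain the likely cause and suggest next steps.`
  );

  // 3. Return the explanation plus links back into the tool so a
  //    human can pick up the investigation at any point.
  return {
    summary,
    nextSteps: ["Check recent deploys", "Inspect the error spike window"],
    deepLinks: ctx.entities.map(
      (e) => `https://observe.example/explore?entity=${encodeURIComponent(e)}`
    ),
  };
}
```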
The economics of Observe also play a role here. If SREs can ask questions in a frictionless way using natural language, then query volumes will rise and Observability bills are going to get even more expensive. In addition, SREs will find blind spots in their data more quickly, as data may have been filtered out or deleted entirely to reduce costs. Observe’s Data Lake-based architecture, coupled with elastic compute, ensures that more data can be ingested, stored, and queried more economically than ever before.
The o11y.ai agent is similar to the AI-SRE; in fact, they share exactly the same Observe back-end to ingest, store, and query data. That said, o11y.ai has an all-new front-end, enabling developers to go one step further: automatically instrumenting their code … and providing the ability to correlate errors seen in production with the actual code itself. Put another way, o11y.ai can instrument, debug, and fix code, all from a simple chat interface. Initially we’re focusing o11y.ai on the TypeScript developer community – the language of choice for most new AI projects – and it delivers a truly AI-native experience for developers.
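The interview doesn’t show what o11y.ai’s generated instrumentation looks like, but OpenTelemetry is the common standard for instrumenting TypeScript services, so a hand-written equivalent might look roughly like this. The service, function, and attribute names are invented for illustration:

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Tracer name is illustrative; in practice it identifies the service.
const tracer = trace.getTracer("checkout-service");

async function chargeCard(orderId: string, cents: number): Promise<void> {
  // startActiveSpan runs the callback inside a new span and makes it
  // the current context, so nested calls are correlated automatically.
  return tracer.startActiveSpan("chargeCard", async (span) => {
    span.setAttribute("order.id", orderId);
    span.setAttribute("charge.amount_cents", cents);
    try {
      // ... call the payment provider here ...
    } catch (err) {
      // Record the failure so production errors can be correlated
      // back to this exact code path.
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```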
AIT: The Gartner survey underscores operational resilience and cost optimization as top priorities for CIOs. How is Observe positioning itself not just as a technology provider but as a strategic partner in achieving these business-critical outcomes?
Jeremy Burton: Observe’s focus is on enterprises scaling rapidly or running at scale; by that we mean 5-500 TiB of data per day. In these companies we have found observability to be a strategic project with top-level mandates for change. We translate those mandates into strategy, architecture, and implementation to achieve the desired outcome.
As an example, we have a large banking customer for whom operational resilience was non-negotiable. We architected a solution whereby the bank could fail over its observability from one region to another, which we activated during the recent AWS outage. Both the architecture and the economics of Observe made this possible – this kind of system is an insurance policy, so the failover site can’t cost the same as the primary. At the same time, the system had to be running with fresh data less than 15 minutes after the failover.
Another instance where we work closely with customers is the formulation of an open data strategy. Most enterprise customers will very quickly have tens or hundreds of petabytes of data under management. This data is enriched by Observe, and that enriched data is often valuable to more than the SRE team, so sharing and collaborating on it is key. For this reason, Observe was the first Observability vendor to explore Apache Iceberg as the native format for storing telemetry data in its (or the customer’s) data lake.
AIT: Tell us a bit about your AI Ethics and Responsibility framework at Observe. How have these evolved in the past 18 months?
Jeremy Burton: Observe does not create or train its own models, nor is any data we send to OpenAI or Anthropic (for example) persisted and used to train their models. In addition, when SREs use Observe we are troubleshooting infrastructure and applications – largely focused on technical issues.
For these reasons, we have not seen the need for us to develop our own AI Ethics and Responsibility framework.
AIT: Many AI tools today still work in isolation, requiring repeated prompts and manual context. How does Observe address this gap, and what impact do you hope it will have on daily workflows?
Jeremy Burton: Fragmented tools with an AI interface (such as MCP) can be connected easily into agentic workflows, but agentic workflows do not fix the problem of fragmented data.
We believe that having all telemetry data flow into a unified data lake is a foundational step toward good observability. It’s actually a fairly simple step for almost every enterprise to take, because they usually only keep data for 3 or 7 days due to budget constraints. Observe’s magic is transforming that low-level telemetry data in the data lake into a knowledge graph that contains all the entities (pods, containers, customers, shopping carts) and relationships needed to provide context and structure – critical for an agentic AI workflow to retrieve accurate answers to any question it is asked.
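Observe’s actual pipeline for this transformation isn’t public, but the core idea – lifting flat telemetry rows into typed entities and relationships an agent can traverse – can be shown with a toy TypeScript sketch. All field, entity, and relationship names here are hypothetical, not Observe’s schema:

```typescript
// Toy illustration of deriving a graph from flat telemetry rows.

interface LogRow {
  pod: string;
  container: string;
  customerId?: string;
  message: string;
}

interface Entity { id: string; type: "pod" | "container" | "customer" }
interface Edge { from: string; to: string; kind: "runs_in" | "acted_for" }

function buildGraph(rows: LogRow[]): { entities: Entity[]; edges: Edge[] } {
  const entities = new Map<string, Entity>();
  const edges: Edge[] = [];
  for (const row of rows) {
    const podId = `pod:${row.pod}`;
    const ctrId = `container:${row.container}`;
    entities.set(podId, { id: podId, type: "pod" });
    entities.set(ctrId, { id: ctrId, type: "container" });
    // A container runs in a pod: structure an agent can traverse.
    edges.push({ from: ctrId, to: podId, kind: "runs_in" });
    if (row.customerId) {
      const custId = `customer:${row.customerId}`;
      entities.set(custId, { id: custId, type: "customer" });
      // Link business entities (customers) to the infrastructure
      // that served them, so questions can cross both domains.
      edges.push({ from: ctrId, to: custId, kind: "acted_for" });
    }
  }
  return { entities: [...entities.values()], edges };
}
```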
With this technical underpinning, the agentic-AI workflow becomes much more user-friendly. Users can prompt the system with a broad range of sophisticated questions and – because Observe has all the data and context – expect an accurate result.
AIT: How do you balance the richness of AI insights with user trust and data security, especially in sensitive enterprise environments?
Jeremy Burton: There’s great potential in AI-driven observability, and we recognize we have users who operate in sensitive environments, so we’re transparent about how Observe interacts with AI models.
When building AI features, there’s no getting around sending data to model providers like OpenAI and Anthropic so their models can reason over your datasets and answer questions. We only send what’s minimally required: mainly prompts, dataset schemas, stats, summaries, and a handful of sample rows. All requests use Zero Data Retention, a policy under which no user data is stored, logged, retained, or used for training by the AI service after the request is processed.
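As a sketch of what “send only what’s minimally required” can look like in practice, the snippet below builds a compact context (schema, stats, a handful of sample rows) instead of shipping a whole dataset to the model. The `Dataset` shape is invented for illustration, and note that Zero Data Retention itself is an agreement configured with the provider, not something a single request can toggle:

```typescript
import OpenAI from "openai";

// Hypothetical dataset shape; only metadata and a few rows leave it.
interface Dataset {
  name: string;
  columns: { name: string; type: string }[];
  rowCount: number;
  rows: Record<string, unknown>[];
}

function minimalContext(ds: Dataset, sampleSize = 5): string {
  const schema = ds.columns.map((c) => `${c.name}:${c.type}`).join(", ");
  const sample = ds.rows.slice(0, sampleSize); // a handful of rows only
  return [
    `Dataset: ${ds.name}`,
    `Schema: ${schema}`,
    `Row count: ${ds.rowCount}`,
    `Sample rows: ${JSON.stringify(sample)}`,
  ].join("\n");
}

async function ask(question: string, ds: Dataset): Promise<string> {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Answer using only the provided context." },
      { role: "user", content: `${minimalContext(ds)}\n\nQuestion: ${question}` },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```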
AIT: You’ve emphasized that AI code generation has shifted the bottleneck to system reliability. How do you see AI SRE bridging this new gap? What does it mean for the future role of Site Reliability Engineers?
Jeremy Burton: We will likely generate more code using AI in the next 5 years than all code written since the beginning of time. Gone are the days when you could call or meet with the person who wrote the code to understand it. That burden now falls on the individual who is prompting the code generation, or the SRE team.
The good news is that AI assistants can help here as well. SREs have long been overworked (and I’m sure many will argue underpaid!), and there has been a constant drumbeat in the community for engineers to actually engineer observability into their code. Put another way – “figure out how to fix your own sh*t”. This is starting to happen, and we should expect that a chunk of SRE work will “shift left” and be taken on by software engineers, as they are able to access both code and telemetry from that code running in test, or even production, from within their IDE.
Similarly, the AI SRE will also serve as a valuable partner for SRE teams in quickly and accurately analyzing and fixing issues in unfamiliar code. One of our early users even commented that the AI SRE knows more about their products and services than their best engineers. We’re confident that AI-assisted troubleshooting will be a major unlock for SRE teams, and it could even be used by less technical users – in support, for example – to accelerate triage. Worst case, by handling the repetitive pattern recognition and context gathering, the AI SRE will free up SREs to get a better night’s sleep, but also to work a level or two up, thinking more about reliability strategy.
AIT: Which AI startups and projects are you keenly following? What kind of projects deeply influence your vision for the future?
Jeremy Burton: Like everyone else, we are keenly following developments with OpenAI’s Codex and Anthropic’s Claude Code. Those two are the underpinnings of both AI-SRE and o11y.ai, and we typically target use cases on the very edge of what they can do, expecting that improvements will come in 60-90 days. We have also been following the OpenLLMetry project, which provides an open-source way for us to better deliver Observability for LLMs – it is in fact embedded in our LLM Explorer.
AIT: Looking ahead, how do you see AI fully integrated into workplace productivity over the next 3–5 years? What does a ‘superhuman’ worker look like in 2028?
Jeremy Burton: I’m not sure whether we will see ‘superhumans’ or ‘superintelligence’ in 2028, but we are certainly going to see huge workplace productivity gains. With AI tooling, the grunt work is easier – you can get to 70-80% of the problem you are dealing with very quickly. There will still be humans in the loop in most instances, but we will be checking, tweaking, and approving for the most part. And when there’s a really tough assignment or incident to deal with, human effort can be focused where it’s needed most, not on the basics – like collecting the data required to begin troubleshooting.
If ‘superhumans’ are simply AI-assisted humans then their characteristics will include the ability to:
- Cycle faster from hypotheses through decisions, shrinking the time between insight and action with AI help
- Shift to higher-value work, spending more time on design, strategy, and interpersonal relationships, while delegating rote tasks to agents
- Work across domains, such as moving seamlessly between coding, observability, data science, and business analysis
From an engineering productivity perspective, every engineer will have a sidekick agent whose job is collecting and correlating telemetry, pinpointing root causes, and recommending instrumentation. The human’s job shifts to supervising that sidekick and making higher-order decisions.
AIT: For organizations exploring AI to enhance efficiency and collaboration, what would you recommend as best practices for adopting AI without creating workflow friction?
Jeremy Burton: Start with well-defined use cases – have a good idea of what you want to accomplish when bringing in AI, and have outcomes and metrics you are tracking. With o11y.ai, we give our users a head start on this by showing an o11y score signifying observability coverage, but you could also track something as simple as time saved performing a task.
Ensure you have clean, well-formed data, because AI is only as good as the data behind it. One way we’ve approached this at Observe is by applying structure to raw telemetry and using our knowledge graph to provide essential context to the AI. You get better, more accurate responses this way.
Beware of tool sprawl. You don’t want to introduce AI tools that perpetuate silos and become one more tool on top of your existing stack. Seek solutions that unify data and reduce fragmentation.
AIT: Tag a leader in the industry you would like to recommend for the “AI Tech Top Voice Interview Series”.
Jeremy Burton: Matt McClernan at Augment Code.
Thank you, Jeremy, for sharing your valuable insights with AI Tech Insights. We look forward to featuring your perspective and continuing this important conversation on the future of the AI productivity ecosystem.