In this edition of the AITech Top Voice Interview series, we are excited to feature Paul Walker, Senior Vice President at Omada. With a career spanning more than three decades across engineering, product management, and enterprise software, Paul has witnessed firsthand the internet boom of the ’90s, the global shift to cloud, and now the transformative rise of AI.
About Paul Walker: Paul Walker leads strategy and innovation around identity governance in the age of AI. With more than 30 years of experience in enterprise software, Paul has held leadership roles across engineering, technical sales, and product management. His career has taken him from early internet startups in the 1990s to large-scale global software providers, giving him a front-row seat to three major technology shifts: the rise of the internet, the move to cloud, and today’s AI-driven transformation.
About Omada: Omada is a global leader in identity governance and administration (IGA), empowering organizations to manage and secure access for both human and machine identities. Founded in 2000 and headquartered in Denmark, Omada delivers cloud-first solutions that help enterprises stay compliant, reduce risk, and enable business agility. The company’s platform combines advanced automation, AI-driven insights, and seamless integration across hybrid and multicloud environments, making it a trusted partner for enterprises navigating digital transformation.
Here’s the full interview.
AI Technology Insights (AIT): Hi Paul, welcome to the AI Technology Top Voice Interview Series! To begin, please share a bit about your role at Omada and the career journey that led you here.
Paul Walker: Hi, it's a pleasure to be here and to be interviewed! I've been working in the software industry for about 30 years now, covering a range of roles, from engineering to technical sales to product management. My journey has taken me from early internet startups in the '90s all the way to large global enterprise software providers, with plenty of interesting stops in between.
Over that time, I’ve had a front-row seat to the growth of the internet, the shift to cloud, and now the rise of AI. Each of these waves has created incredible business opportunities, but also introduced some really interesting security challenges along the way.
What really attracted me to Omada was the company’s commitment to cloud-first identity and its vision for how AI can play a real role in improving outcomes. I’m excited by the idea of helping different personas who depend on identity governance in their business activities get more value, more efficiency, and more confidence from the solutions we deliver.
AIT: Machine identities now outnumber human ones in most organizations. Why is this such a critical inflection point, and what sparked your focus on this challenge?
Paul Walker: Great question. Today, machine identities, including service accounts, APIs, bots, workloads, and now AI agents, have silently grown to outnumber human identities by a staggering margin in most organizations. In fact, recent research shows there are about 82 machine identities for every single human identity. This is not just a shift in scale, but in structure: machine identities are now the fastest-growing segment of the identity landscape.
What’s accelerating this? The rise of Agentic AI – autonomous or semi-autonomous software agents that act on behalf of a user or organization to make decisions and perform tasks. As these agents proliferate across business functions, they require unique, verifiable identities to authenticate, authorize, and interact with other systems, data, and applications. Each AI agent effectively becomes a new kind of nonhuman actor — one that must be governed with the same rigor as any other digital identity.
Unlike human users, machine identities — especially those linked to AI — are created at high velocity, often span hybrid or multicloud environments, and frequently interact with sensitive systems. They don’t follow predictable joiner-mover-leaver processes. They’re ephemeral, dynamic, and deeply embedded into orchestration platforms, CI/CD pipelines, and decision-making systems.
The problem is that traditional IAM models were never designed to handle this complexity. Many organizations still use static credentials, hardcoded service accounts, or have no visibility into what identities AI agents are using — let alone how those identities are behaving. This creates enormous blind spots and expands the attack surface.
As AI becomes embedded in workflows and business-critical decisions, failing to secure machine identities doesn't just represent a compliance issue; it becomes a systemic business risk. And with regulations here in Europe, such as DORA and NIS2, pushing organizations to demonstrate control over all digital identities, not just human ones, the urgency to integrate machine identity governance into your broader IGA strategy is clearer than ever.
So, what sparked my focus here is watching how the identity landscape is fundamentally evolving. It’s no longer just about managing people. It’s about securing this vast, fast-moving ecosystem of human and nonhuman actors — and Agentic AI is now at the center of that evolution.
AIT: With the rise of DevOps pipelines, AI agents, and custom-built applications, companies face growing challenges around ownership, visibility, and control. From your vantage point, what are the most common blind spots enterprises face in managing non-human identities?
Paul Walker: The most common blind spots I see in managing non-human identities come down to three things: ownership, visibility, and consistency.
First, ownership is often undefined. Machine identities like service accounts or AI agents are created by developers or systems, but no one is clearly accountable for them. Many organizations don't have a clear policy, or even a consistent definition, for machine identities. When an AI agent is spun up by a business unit, or a new microservice is deployed via a CI/CD pipeline, it often bypasses traditional IAM controls. These identities are typically provisioned without defined ownership, and once created, they're rarely reviewed or revoked. The result is orphaned or "zombie" identities that persist long after their original use case has ended, and that's a major attack vector.
Second, visibility is poor across the lifecycle. Unlike human identities, which follow a structured joiner-mover-leaver process, non-human identities are often ephemeral, automated, and decentralized. Machine identities spin up automatically, operate at scale across DevOps pipelines or AI workloads, and often fly under the radar. Organizations can't protect what they can't see!
Third, policy enforcement is inconsistent, especially across hybrid and multicloud environments. AI agents or workloads in one domain might follow best practices, while others rely on hardcoded credentials or lack audit trails altogether.
Agentic AI is intensifying these challenges. Each AI agent is not just a consumer of identity; it's an actor in its own right, capable of initiating actions, accessing sensitive data, and making decisions. As these agents scale across business functions, from finance to cybersecurity, they require unique, auditable, and governed identities. Without a comprehensive machine identity strategy, these AI-driven identities operate in the shadows.
The bottom line is this: most organizations are still managing machine identities as an afterthought, using fragmented tools and manual processes. But in a world of self-provisioning services, autonomous agents, and software-defined infrastructure, that's no longer sustainable. The identity perimeter has shifted, and non-human actors are now at the center of risk and control.
AIT: Shadow IT, unchecked automation, and mismanaged credentials all pose big risks. What practical strategies do you recommend for mitigating these vulnerabilities without hindering innovation?
Paul Walker: That’s a really important question and one we’re hearing more often as AI agents, DevOps pipelines, and self-service tools accelerate. The challenge is real: Shadow IT, unchecked automation, and mismanaged credentials are creating a growing identity risk surface, but locking things down too tightly can stall innovation. So how do we strike the right balance?
First, it starts with automation guardrails, not gates. You want to let teams move fast, but within well-defined identity policies. For example, you can embed guardrails directly into CI/CD pipelines or infrastructure-as-code so that whenever a new machine identity is spun up, it's automatically tagged, expires after a set period, and follows naming conventions. That way, developers don't have to think about governance; it's just built in.
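To make the "guardrails, not gates" idea concrete, here is a minimal Python sketch of the kind of provisioning helper a CI/CD step or infrastructure-as-code hook might call. The naming convention, metadata fields, and 90-day expiry are illustrative assumptions for the example, not Omada's implementation or any specific provider's API.

```python
import re
from datetime import datetime, timedelta, timezone

NAME_PATTERN = re.compile(r"^svc-[a-z0-9-]+$")  # illustrative naming convention
DEFAULT_TTL_DAYS = 90                           # illustrative expiry window

def provision_machine_identity(name: str, owner: str, purpose: str,
                               ttl_days: int = DEFAULT_TTL_DAYS) -> dict:
    """Create a machine-identity record that is compliant by construction,
    so developers never have to remember the governance rules themselves."""
    if not NAME_PATTERN.match(name):
        raise ValueError(f"identity name '{name}' violates the naming convention")
    if not owner:
        raise ValueError("every machine identity needs a named owner")

    created = datetime.now(timezone.utc)
    return {
        "name": name,
        "owner": owner,      # accountable person or team
        "purpose": purpose,  # why the identity exists
        "created_at": created.isoformat(),
        "expires_at": (created + timedelta(days=ttl_days)).isoformat(),
    }

# Example: a pipeline step registering a new service account
identity = provision_machine_identity("svc-invoice-bot", owner="finance-platform",
                                      purpose="reads invoices from the ERP API")
print(identity["expires_at"])
```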
Second, credentials need to be centralized and automated. Hardcoded secrets and unmanaged service accounts are still far too common. Tools like secrets vaults or workload identity providers can automatically rotate credentials and enforce least privilege, and they integrate natively with DevOps workflows. So you reduce risk without slowing anything down.
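As a rough illustration of that rotation control, here is a small Python sketch. In practice a secrets vault or workload identity provider does this automatically; the 30-day policy and record fields below are assumptions made for the example, not a specific vault's API.

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=30)  # illustrative rotation policy

def rotate_stale_credentials(accounts: list[dict]) -> list[dict]:
    """Rotate any service-account credential older than the policy allows."""
    now = datetime.now(timezone.utc)
    rotated = []
    for account in accounts:
        issued = datetime.fromisoformat(account["credential_issued_at"])
        if now - issued > MAX_CREDENTIAL_AGE:
            account["credential"] = secrets.token_urlsafe(32)  # issue a fresh secret
            account["credential_issued_at"] = now.isoformat()
            rotated.append(account)
    return rotated

accounts = [{"name": "svc-invoice-bot",
             "credential": "old-secret",
             "credential_issued_at": "2024-01-01T00:00:00+00:00"}]
for account in rotate_stale_credentials(accounts):
    print(f"rotated credential for {account['name']}")
```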
Another key one is continuous discovery. You can’t govern what you can’t see. That’s why having identity analytics or machine IAM tools in place to continuously discover, classify, and monitor non-human identities is so valuable. It gives you the visibility to spot shadow identities or drift before they become a problem.
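At its simplest, that kind of continuous discovery might look like the sketch below: pull an identity inventory, separate human from non-human identities, and flag the non-human ones that are unowned or dormant. The type labels and 60-day inactivity threshold are illustrative assumptions, not any particular tool's schema.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=60)  # illustrative inactivity threshold

def classify_identity(identity: dict) -> str:
    """Rough heuristic split between human and non-human identities."""
    if identity.get("type") in {"service_account", "api_key", "bot", "ai_agent"}:
        return "non-human"
    return "human"

def find_shadow_identities(inventory: list[dict]) -> list[dict]:
    """Flag non-human identities that are unowned or have gone quiet."""
    now = datetime.now(timezone.utc)
    findings = []
    for identity in inventory:
        if classify_identity(identity) != "non-human":
            continue
        last_seen = datetime.fromisoformat(identity["last_used_at"])
        if not identity.get("owner") or now - last_seen > STALE_AFTER:
            findings.append(identity)
    return findings

inventory = [
    {"name": "svc-legacy-export", "type": "service_account",
     "owner": None, "last_used_at": "2023-06-01T00:00:00+00:00"},
    {"name": "alice", "type": "employee",
     "owner": "hr", "last_used_at": "2025-01-10T00:00:00+00:00"},
]
for finding in find_shadow_identities(inventory):
    print(f"review needed: {finding['name']}")
```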
And finally, assigning ownership is critical. Innovation is great, but every AI agent, script, or bot needs a named owner. Whether it's through tagging, metadata, or policy-as-code, make sure someone's accountable for each machine identity. That gives you decentralized control without losing traceability.
At the end of the day, governance shouldn't be a blocker; it can and should be an enabler. If you bake it into the platforms and processes that teams already use, it fades into the background. You get stronger security and a faster path to innovation.
AIT: How can forward-thinking enterprises extend governance frameworks beyond employees to the systems, APIs, and code that increasingly drive business?
Paul Walker: That's the frontier of digital identity right now, and it's where many organizations are starting to realize their legacy governance models just don't stretch far enough.
Historically, IAM frameworks were built with people in mind: onboarding employees, managing role changes, and terminating access. The key is recognizing that machine identities (APIs, service accounts, AI agents) now outnumber human users and often carry more risk. Forward-thinking enterprises are extending governance by treating these non-human identities as first-class citizens: assigning owners, enforcing lifecycle controls, and applying real-time monitoring.
In short, forward-thinking enterprises aren't just securing systems; they're governing trust at scale, across humans and machines alike.
AIT: AI-driven applications add another layer of complexity. What unique challenges do they introduce when it comes to identity governance, and how can organizations prepare?
Paul Walker: AI-driven applications, especially those using agentic AI, introduce a new class of non-human identities that are autonomous, dynamic, and often short-lived. The challenge is they can initiate actions, access sensitive data, and make decisions — yet many organizations lack visibility or controls over how these identities are created, used, or monitored.
To prepare, enterprises need to treat AI agents like any other identity: assign ownership, enforce policy-as-code, and implement automated lifecycle management. Most importantly, identity governance must shift from static, human-centric models to real-time, risk-aware frameworks that can adapt to the velocity and complexity AI brings.
AIT: Many CISOs and CTOs say visibility is their biggest hurdle. What role do automation and intelligent tooling play in gaining real-time visibility into machine identities?
Paul Walker: Automation and intelligent tooling are absolutely essential. Machine identities, especially those tied to APIs, containers, and AI agents, are created at such speed and scale that manual tracking is impossible. Without automation, you're flying blind.
Intelligent tools can continuously discover, classify, and monitor machine identities across cloud and on-prem environments, giving security teams a real-time view of what exists, what it’s doing, and whether it’s compliant.
In short, automation turns visibility from a point-in-time audit into a living control layer.
AIT: As regulatory requirements evolve, how do you see compliance frameworks adapting to include non-human identities, and what should organizations be doing now to stay ahead?
Paul Walker: We're already seeing regulators expand their focus from human access to all digital identities, including APIs, bots, and AI agents. Frameworks like DORA and NIS2 are raising the bar on accountability, requiring organizations to prove not just who has access, but what has access, including non-human actors.
To stay ahead, organizations should act now to integrate machine identities into their identity governance programs. That means assigning ownership, enforcing lifecycle policies, and ensuring access can be audited and explained. If you can’t answer “who owns this service account and why does it exist?” that’s a compliance gap waiting to happen.
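As a simple illustration, an audit check for exactly that question might look like the sketch below. The owner and purpose fields are assumed metadata for the example rather than a specific product's schema.

```python
REQUIRED_FIELDS = ("owner", "purpose")  # what an auditor will ask for

def compliance_gaps(service_accounts: list[dict]) -> list[str]:
    """Report accounts that cannot answer 'who owns this and why does it exist?'."""
    gaps = []
    for account in service_accounts:
        missing = [field for field in REQUIRED_FIELDS if not account.get(field)]
        if missing:
            gaps.append(f"{account['name']}: missing {', '.join(missing)}")
    return gaps

service_accounts = [
    {"name": "svc-invoice-bot", "owner": "finance-platform",
     "purpose": "reads invoices from the ERP API"},
    {"name": "svc-legacy-export", "owner": None, "purpose": None},
]
for gap in compliance_gaps(service_accounts):
    print(gap)
```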
Proactive governance today is what keeps tomorrow’s audit painless.
AIT: Looking ahead, where do you see the biggest breakthroughs happening in identity governance, particularly in balancing security with business agility?
Paul Walker: The biggest breakthroughs will come from making identity governance both smarter and easier to use. On one side, we’ll see AI-driven automation and context-aware controls replace static reviews. Access decisions will adapt in real time to risk, without endless manual approvals. That’s how you balance strong security with the speed the business demands.
On the other side, the user experience will radically improve. Natural language interfaces will let employees, managers, and even auditors interact with IGA systems conversationally: "Show me who has access to this app" or "Revoke the contractor's privileges today." That eliminates the friction that often slows adoption.
And then there’s the machine identity challenge. With APIs, workloads, and AI agents outnumbering people, the real leap will be in governing non-human identities at scale. Breakthroughs in discovery, ownership mapping, and automated lifecycle management will finally bring machine identities under the same governance umbrella as humans.
In short, the future of IGA will be defined by AI-enabled simplicity, better user experiences, and full-spectrum coverage of both human and machine identities, delivering security and agility at the same time.
AIT: As someone deeply engaged in this emerging challenge, what advice would you offer to technology leaders preparing their organizations for the next wave of identity risk?
Paul Walker: My advice is simple: widen your lens. Identity risk is no longer just about employees and contractors; it's about APIs, workloads, bots, and now AI agents. These non-human identities are multiplying faster than most organizations can track, and they're already being exploited.
First, treat machine identities as first-class citizens. Give them clear ownership, apply lifecycle policies, and bring them into your IGA program.
Second, lean into automation and intelligent tooling. Manual spreadsheets and quarterly reviews won't cut it when identities number in the millions. Use discovery, analytics, and policy-as-code to enforce governance continuously, not just at audit time.
Third, don’t underestimate the user experience angle. If governance feels like a burden, people will bypass it. Conversational AI and natural-language interfaces are making identity governance more intuitive, helping managers and auditors engage without friction.
Finally, remember: identity is now the control plane of zero trust. If you prepare your governance strategy to handle humans and machines with the same rigor, you'll not only reduce risk, you'll also build the agility to safely embrace the next wave of digital transformation.
AIT: Finally, who in the cybersecurity or AI governance space would you love to see featured next in the AI Technology Top Voice interview series?
Paul Walker: Sean Koontz, Austin, Texas.
Thank you, Paul, for sharing your insights! We look forward to continuing the conversation in future editions of our Top Voice Series.
To share your insights, please write to us at sudipto@intentamplify.com