Welcome to the AITECH Top Voice Interview Series, where we bring you conversations with the leaders shaping the future of AI, enterprise technology, and digital innovation.

Today, we’re joined by Kate Shen, Co-founder of Anaxi Labs—a company working at the intersection of AI, security, and the evolving economics of intelligent systems.

In this conversation, we’ll explore how AI is moving beyond models into real-world, agent-driven ecosystems—where trust, governance, and security are becoming the true differentiators.

We’ll also unpack what it takes to build AI systems that are not just powerful, but reliable, auditable, and enterprise-ready.

Let’s dive in.

Here’s the full interview.

AI Technology Insights (AIT): As foundation models commoditize, do you see AI-driven trust, security, and governance layers becoming the primary differentiator? How is Anaxi Labs positioning itself to lead that layer?

Kate Shen : The differentiating layer shifts decisively toward trust, security, governance, and the economics of AI systems. Anaxi Labs is building at exactly that intersection.

Since founding, Anaxi Labs has held a strategic partnership with CMU’s CyLab – the university’s security and privacy institute – to study cryptographic tools and their applications in building secure systems. The collaboration has recently expanded to include research into the emerging economic foundations of AI. That combination of security infrastructure and economics design is the space we’re staking out.

Recommended: AITech Top Voice: Interview with Mike Pritchard, Director of Climate Simulation Research at NVIDIA Research

AIT: That research finds that even in decentralized systems, AI agents consistently concentrate traffic toward a small number of high-performing sources. What does that mean for the economics of the agentic web?

Kate Shen : The assumption behind decentralized agent architectures is that they distribute attention more evenly, but the data suggests agents converge on the same small set of sources regardless of design intent. That has significant economic implications.

If agents rather than users become the primary allocator of web traffic, value accrues not just to whoever owns data, but to whoever gets selected by agents repeatedly. The competitive dynamic shifts from traditional SEO toward something closer to agent optimization, and the long tail of publishers faces real structural pressure. Anaxi Lab’s work on AI economics is directly relevant here, because the pricing and incentive structures of agent-mediated systems will determine whether that concentration is a permanent feature or something that can be designed around.

AIT: The research also shows that while agents often retrieve the right information, that doesn’t reliably translate into the right answer. Where does that gap come from, and how significant is it?

Kate Shen : Retrieval accuracy and answer accuracy are distinct problems, and conflating them has led to overconfidence in current agent benchmarks. The findings from our recent research point to specific gaps in planning and answer synthesis – the agent finds the right data but fails to reason over it correctly, or integrates it poorly across multiple sources.

That gap is significant for commercial deployment. An agent that retrieves correctly but synthesizes badly still produces wrong outputs, and in high-stakes enterprise contexts the consequences are real. Test-time scaling improves performance on both dimensions, and multi-agent coordination – while lagging centralized retrieval today – closes the gap as model scale increases. The implication is that architecture and coordination design matter as much as raw model capability.

AIT: As AI evolves into autonomous, agent-to-agent ecosystems, how will you embed AI-native security (self-verifying agents, adaptive trust scoring) to prevent malicious or compromised agents from participating?

Kate Shen : AI-native security in autonomous ecosystems starts with hardware-rooted cryptographic identities and zero-knowledge verification frameworks that ensure malicious agents cannot go unnoticed. Trusted Execution Environments (TEEs) generate cryptographic attestations proving the exact code and model running within an enclave, effectively making agent identity tamper-evident at the hardware level.

This is supported by the “Agent Passport,” a standardized digital credential secured by ECDSA P-256 key pairs, which establishes self-verifying identities across platforms. That identity layer then feeds into an adaptive trust scoring model that continuously evaluates an agent’s operational reliability, behavioral consistency, and code attestation over time.

AIT: With AI models increasingly dependent on dynamic, external datasets, how will you use AI to detect data poisoning, hallucination amplification, and adversarial inputs in real time?

Kate Shen : A multi-layered defense strategy centered on semantic firewalls and continuous AI observability is the most resilient approach here. Semantic firewalls – such as NVIDIA NeMo Guardrails – use smaller evaluator models to classify the intent of natural language prompts in real time, intercepting and blocking malicious patterns like prompt injections before they reach the core agent.

Continuous observability platforms work in parallel, establishing baselines for normal activity to detect hallucinations and automatically triggering a “circuit breaker” when abnormal spikes in API activity or token consumption occur.

Recommended: AITech Top Voice: Interview with Rachel Laycock, Chief Technology Officer at Thoughtworks

AIT: As monetization shifts toward usage-based and outcome-driven AI systems, how will you ensure AI-driven pricing models remain transparent, auditable, and resistant to manipulation or gaming?

Kate Shen : Anaxi Labs is focused on building pricing models where auditability is cryptographic, not just procedural. A specialized compiler framework automatically generates Zero-Knowledge (ZK) proofs for complex software, allowing an AI system to prove that a specific price was calculated using predefined logic – without exposing proprietary data or competitive intelligence.

That technical layer is paired with strong institutional governance structures to ensure interpretability at every level of the stack. Dynamic pricing systems built this way are structurally resistant to manipulation, because the proof of correctness travels with the output.

AIT: In a marketplace of prompts, agents, and workflows, how will you leverage AI to continuously monitor, score, and quarantine risky behaviors (e.g., prompt injection, model abuse, anomalous agent actions)?

Kate Shen : A sophisticated five-dimension trust scoring model continuously evaluates agents and immediately penalizes anomalous actions. If an agent’s trust score drops below a specific threshold, due to unusual transaction magnitudes or high request velocities, for example, it’s automatically quarantined and its execution privileges suspended. This automated monitoring prevents model abuse from cascading through the ecosystem.

Secure execution of AI-generated code is handled through ephemeral sandboxing via Firecracker microVMs, which instantly destroy any malicious processes upon task completion.

AIT: As enterprises expose proprietary data to AI systems, how will you design AI-aware access control systems (context-aware, behavior-driven, least privilege) that evolve with usage patterns?

Kate Shen : Static permissions are structurally inadequate for AI workflows. The right approach is context-aware access control that enforces least privilege dynamically – evaluating the specific context of every request and granting only temporary access for the duration of a given task.

Behavioral analysis continuously establishes normal operational baselines for each agent. If an agent attempts to access sensitive proprietary data unexpectedly or at an unusual time, the system automatically denies the request or escalates it for human review. Access rights, in other words, evolve as usage patterns do.

AIT: With AI supply chains becoming more complex, how will you apply AI to map, audit, and secure multi-agent dependencies, ensuring resilience against cascading failures or attacks?

Kate Shen : Self-healing subsystems that actively maintain a live model of all current multi-agent dependencies are central to this. Continuous ping-echo mechanisms rapidly detect node failures or compromises within the supply chain, and upon detection, the system takes corrective action automatically, electing new master agents or severing communication with compromised nodes.

Formal verification using timed automata provides the mathematical guarantee that these adaptive, self-healing behaviors always lead to a safe operational state. Resilience here is provable, not assumed.

Recommended: AITech Top Voice: Interview with Jeremy Burton, CEO at Observe

To share your insights, please write to us at info@intentamplify.com