As enterprises move from isolated AI experimentation to fully operational agentic systems, technology leaders are rethinking how intelligence, data, and software delivery converge to reshape business operations. The next phase of AI adoption is no longer about deploying models—it is about rewiring the enterprise so intelligent agents can operate across workflows, decisions, and customer interactions at scale.

In this edition of the AITech Top Voice Interview Series, Sudipto Ghosh, Global Head of Marketing and Sales at Intent Amplify, sits down with Rachel Laycock, Chief Technology Officer at Thoughtworks, to discuss how Agent AI is transforming enterprise technology foundations, customer experience, and software delivery.

Drawing insights from Thoughtworks’ Looking Glass research, Rachel explains why organizations must move beyond AI pilots to agent-driven architectures powered by trustworthy data ecosystems, multimodal experiences, and responsible governance frameworks. The conversation explores how enterprises can rebuild core technology systems, operationalize AI-first software delivery, and design human-AI collaboration models that drive measurable business outcomes while maintaining trust and transparency.

This interview aims to provide enterprise technology leaders, AI strategists, and CX innovators with a practical perspective on how agentic AI will redefine the operating model of modern organizations over the next decade.

Here’s the full interview.

AI Technology Insights (AIT): Hi Rachel, welcome to the AITech Top Voice interview series. To begin, could you briefly share your role at Thoughtworks and how Agent AI fits into your broader vision for the enterprise?

Rachel Laycock: As Chief Technology Officer at Thoughtworks, I lead our technology strategy and help clients navigate the forces reshaping how organizations build, run and evolve their technology estates. Thoughtworks is a global technology consultancy with more than 30 years of experience blending design, engineering and AI to help clients solve their most critical challenges. Agent AI is central to our vision because we’re at a moment of transition. Long-running shifts in platforms, data, security and experience design are converging with rapid advances in AI. The result is a reconfiguration of how technology creates value across the enterprise.

In our Looking Glass report, we explore what this looks like in practice: how enterprises rebuild core foundations, rewire workflows to support greater autonomy and rethink the role technology plays in customer experience, decision-making and operations.

Recommended: AITech Top Voice: Interview with Mike Pritchard, Director of Climate Simulation Research at NVIDIA Research

AIT: How are enterprises rebuilding their core foundations with agent-based workflows?

Rachel Laycock: Most enterprises still treat AI as a set of isolated experiments, and that mindset is already outdated. The question is not how many models you can deploy but how quickly you can rewire your business so agents can operate across workflows. The companies that will win are the ones that rebuild core processes so intelligence can move freely, act autonomously and deliver outcomes without waiting on human bottlenecks. Operationalizing AI means designing architectures where agents can execute work with transparency, guardrails and continuous improvement. If your environment can’t support that, you’re still in experimentation mode.

In our Looking Glass report, we highlight that agentic systems are now orchestrating multi-step business processes from customer support to DevOps, and the overall orchestration market is forecast to nearly triple to over $30 billion by 2030. The key enabler is data infrastructure, with product-centric, federated ecosystems supplying trustworthy, real-time data to both humans and intelligent agents.
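To make the orchestration pattern concrete, here is a minimal sketch of an agent executing a multi-step business process with guardrails, transparency and escalation to a human. Every name in it (Step, Orchestrator, the refund limit) is a hypothetical illustration, not a Thoughtworks implementation or any specific framework:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    action: callable     # performs the work for this step
    guardrail: callable  # returns True if the result is safe to accept

@dataclass
class Orchestrator:
    steps: list
    audit_log: list = field(default_factory=list)  # transparency: every decision is recorded

    def run(self, context: dict) -> dict:
        for step in self.steps:
            result = step.action(context)
            if not step.guardrail(result):
                # Guardrail tripped: stop autonomous execution and
                # hand the case to a human, leaving an audit trail.
                self.audit_log.append((step.name, "escalated"))
                context["status"] = f"escalated_at_{step.name}"
                return context
            self.audit_log.append((step.name, "completed"))
            context.update(result)
        context["status"] = "done"
        return context

# Example: a two-step customer-support flow where large refunds escalate.
flow = Orchestrator(steps=[
    Step("classify", lambda ctx: {"intent": "refund"}, lambda r: True),
    Step("resolve",
         lambda ctx: {"refund_over_limit": ctx.get("amount", 0) > 100},
         lambda r: not r["refund_over_limit"]),
])

print(flow.run({"amount": 50})["status"])   # handled autonomously
print(flow.run({"amount": 500})["status"])  # escalated to a human
```

The point of the sketch is the shape, not the logic: work flows through agents autonomously only while every step clears an explicit, auditable guardrail.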

AIT: What are the core principles that support greater autonomy and digital transformation in customer experience?

Rachel Laycock: Interactions between humans and machines have expanded well beyond text to include voice, images, gestures and emotional cues. In our Looking Glass report, we describe how experiences are being built around agentic interfaces that take initiative, adaptive systems that sense emotion and environment, and embodied modalities fluent in voice, gestures, gaze and haptics. The focus is shifting from designing interfaces to designing relationships between humans, AI agents and the systems around them.

Three principles guide this transformation. First, think beyond channels to create interaction ecosystems, because the standard mix of physical and digital experiences alone won’t meet customer expectations. Second, design for multimodality, incorporating interactions from haptic feedback to voice commands as standard practice. Third, establish governance and ethical boundaries, ensuring human involvement where emotional AI carries risk while maintaining customer trust.

AIT: Tell us more about AI’s role in software delivery. Why did you choose to highlight AI-First Software Delivery (AIFSD) in your report?

Rachel Laycock: AI-First Software Delivery integrates generative and agentic systems across the entire software lifecycle, from requirements and design through development, testing, deployment and maintenance. We highlighted it in our Looking Glass report because the real shift underway is less about autonomy and more about addressing the long-standing structural challenges that hold enterprises back.

AI is being leveraged to rebuild the core of software delivery: modernizing legacy systems, improving architectural integrity, strengthening quality and stabilizing pipelines. AIFSD works best when human engineers and AI collaborate, with AI handling repetitive, scaffolding and optimization tasks while humans ensure accuracy, security and architectural integrity. Without rigorous engineering oversight, AI systems risk introducing technical debt or vulnerabilities. For developers, this change will resemble the shift from assembly languages to high-level languages: transformative in how we approach the work, while continuing to rely on skilled engineers to guide the process.
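The human-AI division of labor Rachel describes can be sketched as a delivery gate: an AI-generated change is accepted only when automated checks pass and a human reviewer signs off. The checks below (a crude TODO scan, a test-coverage flag) are illustrative assumptions, not a real pipeline:

```python
def accept_change(change: dict, checks: list, human_approved: bool) -> bool:
    """Automated checks catch the repetitive problems; the human stays
    accountable for accuracy, security and architectural integrity."""
    automated_ok = all(check(change) for check in checks)
    return automated_ok and human_approved

no_todo = lambda c: "TODO" not in c["diff"]        # crude tech-debt check
has_tests = lambda c: c.get("tests_added", False)  # require accompanying tests

change = {"diff": "def area(r): return 3.14159 * r * r", "tests_added": True}
print(accept_change(change, [no_todo, has_tests], human_approved=True))   # True
print(accept_change(change, [no_todo, has_tests], human_approved=False))  # False
```

Note the asymmetry: automation can veto a change, but it can never approve one on its own, which mirrors the oversight model described above.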

AIT: 2025 is often described as a turning point for AI-led customer service. What specifically changed that year in terms of customer expectations, AI maturity, and enterprise readiness that enabled large-scale adoption of AI-led service solutions?

Rachel Laycock: Several factors came together. On the AI maturity side, frameworks from providers like OpenAI and Anthropic expanded the role of agents from passive assistants with limited capabilities to adaptable collaborators that can learn from past interactions, reason through optimal outcomes and coordinate to deliver them. Advances in contextual intelligence, including emotion and context-sensing capabilities combined with real-time data, enabled systems to respond more empathetically to users. Customer expectations evolved in parallel. People grew more comfortable interacting with AI-powered systems and began demanding context-aware, emotionally intelligent responses. And enterprise readiness caught up as organizations strengthened data infrastructure and governance frameworks, allowing AI to be deployed at scale with confidence.

AIT: Early AI and automation efforts in contact centers didn’t always succeed. What key lessons did Thoughtworks learn from those experiments, and how did they shape the decision to combine AI voice automation with human expertise?

Rachel Laycock: Early AI and automation efforts taught us that technology alone doesn’t solve service problems. Many of those initial deployments focused on deflecting calls rather than improving outcomes, and customers noticed. They felt managed, not helped. The biggest lesson was that automation succeeds when it’s designed around the customer’s intent, not the organization’s cost targets. You need to understand what the customer is actually trying to accomplish and then decide where AI can accelerate that and where a human is essential. That’s why we see the most effective approaches combining AI voice automation with human expertise rather than treating them as competing options. The AI handles repetitive inquiries and context gathering, while human agents focus on complex or emotionally sensitive situations where judgment and empathy matter most.
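The routing principle above, AI for repetitive inquiries and humans for emotionally sensitive ones, can be sketched in a few lines. Keyword matching stands in for a real intent and sentiment model; the word lists are invented for illustration:

```python
# Hypothetical intent router: decide whether an AI agent or a human
# handles a request, designed around the customer's intent rather
# than call deflection.
ROUTINE_INTENTS = {"balance", "hours", "reset_password"}
SENSITIVE_WORDS = {"complaint", "bereavement", "frustrated", "cancel"}

def route(utterance: str) -> str:
    words = set(utterance.lower().split())
    if words & SENSITIVE_WORDS:
        return "human"      # judgment and empathy matter most here
    if words & ROUTINE_INTENTS:
        return "ai_agent"   # repetitive, well-bounded task
    return "human"          # ambiguous requests default to a person

print(route("please reset_password for my account"))  # ai_agent
print(route("I have a complaint about billing"))      # human
```

The design choice worth noting is the default: when intent is unclear, the system routes to a person rather than forcing the customer through automation.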

Recommended: AITech Top Voice: Interview with Jeremy Burton, CEO at Observe

AIT: Many enterprises struggled with chatbots and IVRs in the past. How did those experiences influence the shift toward voice-first, context-aware design?

Rachel Laycock: The early chatbot and IVR experience was a useful reality check. Those systems were rigid and rules-based. They forced customers into predefined flows and broke down the moment a request fell outside the script. That frustration shaped a clear design principle: build for the customer’s intent, not for a decision tree. Voice-first, context-aware design flips the model. Instead of asking the customer to navigate a menu, the system interprets what they need and responds accordingly. What makes this generation different is the underlying technology. Large language models and agentic frameworks can now handle ambiguity, draw on conversation history and adapt in real time. That said, the experience layer still matters enormously. As we discuss in our Looking Glass report, designing interactions now means designing relationships between humans, AI agents and the surrounding systems, not just optimizing a single interface.

AIT: Businesses often worry that automation may reduce the human element. How can AI-led workflows enhance personalization and trust—particularly in high-value B2B relationships?

Rachel Laycock: The fear that automation removes the human element usually stems from poorly designed automation. When AI is implemented well, it actually frees people up to be more human in the moments that count. In high-value B2B relationships, the real risk isn’t too much technology. It’s wasting your best people on tasks that don’t require their expertise. AI-led workflows can handle data gathering, routine follow-ups and pattern recognition so that when a client needs strategic advice or a difficult conversation, the human who shows up is fully informed and fully present. That builds trust rather than eroding it. The key is transparency. Clients should always know when they’re interacting with an AI system, what it can and can’t do, and how to reach a person. Personalization in the AI era isn’t about replacing relationships. It’s about making every human interaction more valuable because the system has done the groundwork.

AIT: With 24/7 AI voice agents operating on enterprise data, governance becomes critical. How should organizations think about transparency, accountability, and responsible AI in service operations?

Rachel Laycock: As AI becomes more widespread, ethical use and strong governance move from aspiration to operational discipline. In our Looking Glass report, we describe responsibility as something that needs to be embedded into technology strategy, architecture and delivery, covering safety, privacy, security, environmental impact, accessibility and social outcomes.

For service operations specifically, three things matter.

First, computational governance: codifying policies as system-enforced controls so automated compliance works alongside human oversight. Second, accountability and transparency: conducting impact assessments, maintaining clear data lineage and establishing incident reporting processes. Third, human oversight needs to go beyond the generic idea of a person in the loop. It should involve role-specific supervision with clear escalation paths for autonomous agents. With Gartner predicting that roughly half of governments will mandate responsible AI practices by 2026, organizations that get governance right early will gain a competitive edge and strengthen customer trust when it matters most.
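The first of those three items, computational governance, means expressing policies as code that every autonomous action must clear before execution, with an incident record when it fails. A minimal sketch, in which the policies, fields and limits are all invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical policy set: each entry is (name, predicate over an action).
POLICIES = [
    ("no_pii_export", lambda a: not (a["type"] == "export" and a.get("contains_pii"))),
    ("spend_limit",   lambda a: a.get("cost_usd", 0) <= 100),
]

incident_log = []  # supports accountability: blocked actions are reportable

def enforce(action: dict) -> bool:
    for name, policy in POLICIES:
        if not policy(action):
            incident_log.append({
                "policy": name,
                "action": action["type"],
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return False  # block and escalate to a human supervisor
    return True           # compliant: the agent may proceed

print(enforce({"type": "export", "contains_pii": True}))  # blocked
print(enforce({"type": "refund", "cost_usd": 20}))        # allowed
```

Because the policy is machine-enforced rather than a document, compliance happens at the moment of action, with the incident log feeding the human oversight and escalation paths described above.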

AIT: Looking ahead to 2026, what will define a best-in-class AI-powered service organization? What is the future of AI-ready data ecosystems?

Rachel Laycock: Leading organizations treat AI as foundational rather than a side project. A data platform alone won’t make an enterprise competitive, especially if it relies on a centralized lake that can’t adapt to AI-era demands. In our Looking Glass report, we describe the future as dynamic, composable ecosystems where modernized data, processes and logic form modular building blocks that teams and agents can reuse, combine and evolve. These ecosystems provide the enabling layer for agentic systems, grounding them in high-quality data, governed access and clear lineage. By 2030, enterprises will operate as interconnected systems with intelligence flowing across every process, platform and product. The challenge will shift from scaling AI to governing it sustainably. We’re already seeing AI agents emerge as autonomous data stewards, monitoring lineage, assessing data quality and suggesting improvements. That convergence of AI operations and data governance points toward self-improving ecosystems that continuously evolve.
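An agent acting as a data steward could, in its simplest form, scan a dataset for quality issues and suggest remediations. The rules below are a deliberately tiny, hypothetical rule set; a real steward would also track lineage and governed access:

```python
def steward_report(rows: list) -> dict:
    """Flag incomplete and duplicate rows and suggest improvements."""
    incomplete = sum(1 for r in rows if any(v is None for v in r.values()))
    keys = [tuple(sorted(r.items())) for r in rows]  # row fingerprint
    duplicates = len(keys) - len(set(keys))
    suggestions = []
    if incomplete:
        suggestions.append(f"backfill {incomplete} row(s) with missing values")
    if duplicates:
        suggestions.append(f"deduplicate {duplicates} row(s)")
    return {"incomplete": incomplete, "duplicates": duplicates,
            "suggestions": suggestions}

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 1, "email": "a@x.com"},
]
report = steward_report(rows)
print(report["incomplete"], report["duplicates"])  # 1 1
```

Closing the loop from such reports back into automated fixes, under the governance controls discussed earlier, is what would make these ecosystems self-improving.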

Recommended: AITech Top Voice: Interview with Dana Simberkoff, Chief Risk, Privacy, and Information Security Officer at AvePoint

To share your insights, please write to us at info@intentamplify.com