From compliance to controllership, AI is rapidly transforming finance and accounting, but one question continues to loom large: Can we trust it? To unpack the evolving trust gap in AI adoption within financial functions, we sat down with Court Watson, Partner at Deloitte, whose deep expertise in AI governance, finance transformation, and risk strategy has shaped how enterprise leaders adopt emerging technologies with confidence.
About Court Watson: Court brings two decades of experience advising on operational excellence and digital modernization. At Deloitte, he helps financial leaders deploy emerging technologies, including AI and machine learning, safely and strategically across controllership, compliance, and enterprise risk functions.
About Deloitte: As one of the world’s largest professional services firms, Deloitte serves clients in more than 150 countries, offering cutting-edge capabilities in audit and assurance, consulting, financial advisory, and risk management. The firm’s Trustworthy AI framework provides a rigorous model for developing and implementing AI systems that are transparent, ethical, and aligned with both business goals and societal expectations.
Here’s the full interview.
AI Technology Insights (AIT): What are the key drivers behind the current trust gap in AI adoption within finance and accounting functions? How much of this stems from technological limitations versus cultural or leadership hesitation?
Court Watson: The trust gap in AI adoption stems from a combination of technological and governance factors. From a technology perspective, concerns about transparency and the robustness of AI models can undermine confidence, which matters enormously for a function that manages financial reporting and analysis. In fact, a new Deloitte Center for Controllership poll of financial professionals found that trust in the underlying data and programming of AI agents is a leading barrier to adoption. Governance of AI, or a lack of it, can also magnify trust issues: often this involves unclear policies, insufficient training, and a general unfamiliarity with the technology's capabilities and limitations. Fortunately, trust is not static, and leaders can increase organizational confidence in AI tools by establishing safeguards and quality controls to govern them.
AIT: Given that nearly 60% of professionals only trust AI within defined frameworks, how should organizations approach governance and risk management when deploying agentic tools?
Court Watson: By proactively managing risk and clarifying roles and responsibilities, organizations can build greater confidence in agentic tools while safeguarding business integrity. Organizations should define clear boundaries for AI decision-making to ensure judgment-based or high-risk decisions are subject to human review. Additionally, establishing controls and oversight throughout the AI lifecycle, from model development to ongoing monitoring, can help organizations limit risk. Structured frameworks such as Deloitte’s Trustworthy AI principles can improve user confidence by ensuring AI solutions are transparent, explainable, fair, and secure. Lastly, it’s important to remember that agentic AI is evolving and implementation journeys are a learning process. It’s imperative that organizations regularly review and update governance policies as both the technology and the regulatory landscape evolve.
AIT: In your view, what role should human oversight play in a future where AI tools are integrated into high-stakes financial decisions?
Court Watson: AI is not perfect and likely never will be, so human oversight is, and will remain, essential throughout the AI lifecycle, particularly for financial decisions, financial reporting, regulatory compliance, scenario analysis, and other tasks that could affect the financial strategy or health of an organization. AI excels at processing large volumes of data and automating routine tasks, which supports strong use cases in a data-heavy function like finance and accounting. However, human expertise remains necessary to interpret context, exercise ethical and other judgment calls, and address ambiguities.
AIT: Is there a model Deloitte recommends for human-AI collaboration in controllership?
Court Watson: Human oversight of AI tools is a critical piece of Deloitte’s Trustworthy AI framework, which emphasizes maintaining a “human in the loop” for tool oversight and responsible intervention in AI systems, especially in critical decision-making processes. Our framework places people at the center of AI oversight to ensure that critical decisions benefit from both computational rigor and human insight.
AIT: What best practices have you seen among organizations that are successfully building trust in AI within their finance teams? Are there change management strategies Deloitte considers critical for adoption?
Court Watson: Organizations that we see successfully building trust in their AI tools typically take a strategic approach in their adoption journeys. Governance is a foundational element that should ideally be in place before implementation, and should include defined roles and responsibilities, transparency and reporting guidelines, education and training protocols, robust data governance, and quality controls geared toward enabling successful AI use down the line. Being measured and purposeful in adopting AI, as with any other transformation effort, is also important to success. In that vein, companies should consider exploring use cases, piloting programs in lower-risk areas, and gradually expanding tools as key measures of success (KPIs, demonstrable value, and organizational confidence) are achieved.
From a change management perspective, there are a few considerations. For one, it’s crucial that finance teams don’t act in a silo when implementing AI. Cross-functional collaboration with other areas of the business, including technology and risk teams, is important for building AI solutions that can withstand legal, regulatory, or other types of scrutiny. Within the finance teams that use these tools, it’s then critical to have the CFO as an AI champion, establish feedback loops, celebrate early wins, and resolve any cultural hesitancies (remember, finance and accounting as a function is still a heavy user of spreadsheets) to ensure broad-based buy-in.
AIT: How can finance leaders measure and communicate the ROI of AI adoption while still managing the underlying trust concerns from their teams and boards?
Court Watson: AI implementations, especially those involving a privately built and privately managed large language model, can constitute significant investments for organizations, making ROI an important measure of success for these projects. Measuring and communicating ROI begins with building trust and adoption among finance teams, which in turn produces demonstrable metrics of success, such as efficiency gains, error reduction, faster board-level reporting, and speedier decision-making, that can be reported to the board or investors. Trust among key stakeholders can be further reinforced through routine AI audits, controls reporting, and ongoing risk assessments.
AIT: Looking ahead, what ethical and regulatory considerations do you believe CFOs and controllers must address as AI tools become more autonomous and influential in business decisions?
Court Watson: As AI continues to evolve, CFOs and controllers should establish and follow a defined framework that continually addresses ethical and regulatory considerations such as transparency, fairness and bias mitigation, data and privacy security, and regulatory compliance. By embedding these considerations into AI strategies from the outset and keeping a continuous pulse on them, finance leaders will be better able to position their organizations for responsible – and possibly faster, safer – innovation.
AIT: Tag a person from the industry who you would like to feature in our Top Voice interview series:
Court Watson: I’d suggest connecting with one of our Finance SGO leaders, like Ed Hardy.
To share your insights, please write to us at sudipto@intentamplify.com