Sage and PwC commit to tackling AI trust gap in finance

Sage today announced a new initiative in partnership with PwC that will redefine how AI is built and adopted in finance, combining transparent, explainable AI with the governance and real-world expertise required to use it with confidence.

The initiative, “Beyond the Black Box”, was announced at Sage Future and is backed by new research from Sage, conducted by IDC, showing that 71% of finance leaders would reject an AI system that cannot explain its outputs, even if those outputs are highly accurate. The finding suggests that trust, not technology, is holding back AI adoption.

Unlike previous AI initiatives that have focused on large enterprises or purely technical audiences, “Beyond the Black Box” was created with SMB realities at its core. It forms part of Sage’s commitment to helping more SMBs benefit from the transformative impact of AI, building upon the company’s Responsible AI framework and AI Trust Label, reinforcing the belief that trust must be built into AI from the outset.

Trust, not technology capability, is the biggest barrier to AI adoption in finance
As AI becomes more capable, the ability to explain and stand behind its outputs is emerging as the defining factor in whether it is trusted and adopted in finance.

The consequences are already measurable. Finance professionals are spending an average of 12.9 hours every week reconstructing, validating and defending AI outputs. Much of this work stems from the need to validate and explain outputs that do not clearly show how they were produced. Rather than removing overhead, opaque AI is creating a new category of it.

Sage describes this as the trust cost of AI – the gap between what AI systems promise in theory and what finance teams can actually rely on in practice. At its core, this is a transparency challenge. Every number, recommendation and AI-supported decision must be explainable to auditors, to boards, and to regulators. When it cannot be, adoption stalls.

From black box AI to glass box
Sage has designed its AI from the ground up for the realities of finance, where every output is transparent, explainable and accountable, so organizations can trust and act on it with confidence.

This represents a deliberate shift away from black box AI, where outputs are generated without visibility into how decisions are made, towards what Sage describes as glass box AI, where customers can meaningfully interact with AI results rather than accepting them on blind faith. Every answer is explainable, every recommendation is verifiable, and every output can be interrogated.

Through the initiative, Sage and PwC will combine their expertise into practical tools and frameworks to help finance teams understand, assess, and adopt AI responsibly. This includes embedding trust into how AI is implemented in finance environments while building on Sage’s existing commitment to SMBs, including the Sage AI Academy, which supports organizations with the knowledge and guidance needed to adopt AI with confidence.
 
From pilot to practice
To help move organizations from AI experimentation to trusted, scalable adoption, Sage selected PwC as its lead partner, drawn by PwC’s proven expertise in deploying AI across its own business. PwC has embedded AI into day-to-day workflows at scale, with 86% of its employees actively using AI tools, more than 240,000 Microsoft Copilot licenses deployed, and over 4,000 custom GPTs developed and reused across the firm.

Businesses are increasingly concerned about the probabilistic nature of AI systems, particularly the lack of transparency, explainability, and clear accountability behind AI-generated outputs. Together, Sage and PwC will build transparent AI that gives finance teams control and full visibility into its outputs, backed by the implementation expertise, governance frameworks, and risk management capabilities required to put that AI to work safely, effectively, and at scale.
