Trust as a differentiator: How to build reliable AI tools for customer success
The chance for AI to empower small and medium businesses, and accountants, is huge. Discover why trust can help to unlock that potential.
Artificial intelligence (AI) is revolutionising the way the accounting industry works.
It’s elevating human performance by taking on the burden of repetitive but required tasks, and by accelerating analysis.
But accounting is based on trust.
So, companies must be confident that AI works accurately before they can use it to its full potential.
Even the most innovative AI tool is only useful if it’s being used in the right way. And people are only comfortable transitioning work into the hands of technology if they trust it will do the job safely and competently.
A recent KPMG study, for instance, found that 61% of people are wary about trusting AI systems.
So, how do you instil trust in the AI designed to support the role of accountants?
It’s about taking a responsible, humble approach to AI development while working in collaboration with customers throughout the process, ensuring they have faith not only in the technology, but also in the company behind the technology.
Here’s what we cover in this article:
- Humility and accountability
- Ethical considerations
- A customer-centric, trusted approach
- Users must feel in control
- Building trust at every stage
Humility and accountability
AI can be an enormous force for good, but its speed and scale of operations mean it equally has the capacity to do harm.
And—irrespective of which process is automated or how accurately—accountability for results, a core element of accounting, will always reside with humans.
It’s about finding the balance.
One in which AI seamlessly blends into workflows, enhancing them, while human guidance and contributions remain essential to the process.
It’s OK to admit nobody knows all the answers.
When I began my journey with AI seven years ago, I didn’t fully grasp the extensive impact that even the most harmless-seeming applications could have.
That’s why developers need to take a step back and explore all potential outcomes of creating a product to solve a business challenge.
For instance, Sage might create an AI tool that can speedily provide credit scores for small businesses.
However, we’re not pursuing this right now because the AI could be biased, unfairly giving lower scores to businesses owned by women or minorities.
At Sage, we’re cautious about using AI in ways that could unintentionally harm certain groups because we want to ensure fairness for all businesses.
Ethical considerations
Getting to a point where AI can provide trusted insights starts with being deliberate about what you allow AI to do, and with articulating clear principles that define what you will and will not do with it.
This builds customer trust, because customers understand which types of problems you’ll use AI to address.
Instilling clear principles from the outset helps to mitigate potential bias in AI. It guides developers towards problems that AI can solve without societal risk, and helps to avoid unwanted consequences creeping into AI insights from training data and other data sources.
Using AI auditors to identify particular areas to address in your AI development process can also combat possible biases, as can recruiting diverse talent into development teams to build in diversity of thought and lived experience.
Diversity remains an issue in the AI industry, with fewer than 25% of AI employees identifying as a racial or ethnic minority in a 2022 McKinsey study.
Meanwhile, women accounted for just 26% of workers in data and AI globally in 2021 and it’s safe to say that hasn’t evened out in 2024.
A customer-centric, trusted approach
It only takes one error to lose an accountant’s trust—especially in small businesses where even tiny mistakes can cause big problems.
That’s why, when accountants start using technology to automate work that people used to do, they need confidence that it will work correctly.
At Sage, we’re creating AI that works as well as a human does.
Involving the people who’ll actually use this AI in the development process helps us to make sure we’re on the right track.
By working closely with our customers, we can really understand what they need and where they’re finding our AI helpful, and use their feedback to make the AI more fit for purpose.
For accountants, this frees up more time for strategic leadership, letting AI handle routine work as well as crunching the numbers to surface useful insights.
Take our general ledger outlier detection as an example. Accountants don’t have time to look through thousands and thousands of transactions, so this product identifies anomalous transactions for review.
At the outset, we thought it would be valuable to flag every single problem immediately. But this was interrupting employee workflow.
So, guided by our customers, we redesigned it to provide grouped outliers presented as potential problems.
Even if there’s only one real problem in a list of 100 the AI provides, that delivers value because it might otherwise have been missed.
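The article doesn’t describe how Sage’s outlier detection works internally, but the grouped-review idea can be illustrated with a minimal sketch. Here, a simple robust statistical rule (a modified z-score based on the median absolute deviation) flags unusual transaction amounts per ledger account, and the flags are returned grouped by account so the reviewer sees one batch of potential problems rather than a stream of individual interruptions. All field names and thresholds are hypothetical:

```python
from collections import defaultdict
from statistics import median

def grouped_outliers(transactions, threshold=3.5):
    """Flag transaction amounts far from their account's median
    (modified z-score via MAD) and group the flags per account
    so they can be reviewed in batches, not one by one."""
    by_account = defaultdict(list)
    for txn in transactions:
        by_account[txn["account"]].append(txn)

    review_queue = {}
    for account, txns in by_account.items():
        amounts = [t["amount"] for t in txns]
        if len(amounts) < 4:
            continue  # too little history to judge what's "normal"
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts)
        if mad == 0:
            continue  # amounts are (almost) all identical
        flagged = [
            t for t in txns
            if 0.6745 * abs(t["amount"] - med) / mad > threshold
        ]
        if flagged:
            review_queue[account] = flagged
    return review_queue
```

Using the median rather than the mean matters here: a single huge transaction would inflate a mean-and-standard-deviation rule enough to mask itself, whereas the median-based score still flags it.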
And, by reacting to actual customer needs, we show them we understand how they work and make decisions aimed at boosting their efficiency, two key elements in building trust.
Users must feel in control
Developing AI is about more than understanding how humans make a decision or perform a task. It’s about considering human emotions too.
For example, AI has the potential to pay vendors with no human review. But that takes a leap of faith most users are not willing to make.
It’s human intuition to question if technology is doing what it claims.
By understanding the emotional element, you can design a solution with the right degree of human oversight and control.
In the case of vendor payments, we’d need a significantly higher degree of confidence in our predictions and would be far more conservative about what decisions are automated.
Simply put, the more anxious an employee feels about getting something wrong, the more control they should have over the process.
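One common way to implement this kind of graduated oversight, not described in the article but widely used in human-in-the-loop systems, is confidence-gated routing: every AI decision carries a confidence score, and each action type has its own threshold below which the decision is escalated to a person. The action names and threshold values below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical thresholds: higher-stakes actions demand higher model
# confidence before the system acts without a human in the loop.
REVIEW_THRESHOLDS = {
    "categorise_expense": 0.80,  # low stakes: auto-apply readily
    "flag_outlier": 0.60,        # suggestions only, cheap to dismiss
    "pay_vendor": 0.995,         # high stakes: almost always escalate
}

@dataclass
class Decision:
    action: str
    confidence: float  # model's estimated probability it is right

def route(decision: Decision) -> str:
    """Return 'auto' if the model may act on its own, otherwise
    'human_review' so the user stays in control."""
    # Unknown action types always escalate: fail safe, not fast.
    threshold = REVIEW_THRESHOLDS.get(decision.action, 1.1)
    return "auto" if decision.confidence >= threshold else "human_review"
```

The design choice is that conservatism lives in the threshold table, not the model: raising the bar for vendor payments requires no retraining, just a config change, which also makes the oversight policy easy to audit.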
Building trust at every stage
AI is already propelling the accounting industry forward, but as adoption increases and AI interactions become more visible, trust in the technology will become even more pivotal.
This trust comes from a few things.
The AI must be accurate so people can rely on it to produce the right results. It’s about being honest about AI’s limitations and managing expectations.
Companies need to feel sure that the developers who make AI have thoroughly tested it and have carefully considered its impact on society.
It means being transparent, self-policing, and complying with regulations as they evolve.
And, importantly, it means understanding how people and AI fit together.
Now more than ever, the people developing AI and the people using it need to work hand in hand to make sure AI acts the right way.
The chance for AI to empower small and medium businesses is huge. Trust is the differentiator that can help unlock that potential.