Data
Top takeaways: Balancing AI and governance in financial services
04 June 2025 • 3 min read

In a recent webinar on the role of AI in financial services, leaders from Schroders and Ecommpay shared their insights on where AI can truly deliver value, and how leaders can balance innovation with governance and customer protection. Here are the key takeaways:
1. Start with the problem, not the technology
Successful AI adoption starts by clearly defining the business problem. It’s important to understand where AI can make a difference versus where it’s not needed. Instead of chasing trends, organisations should ask themselves: what are we trying to solve?
This could mean identifying where GenAI could improve customer engagement and operations, or, particularly for financial institutions, where AI tools can assist with decisions that draw on large volumes of unstructured data.
2. Driving productivity and personalisation
At Ecommpay, AI will support merchant onboarding by extracting insights from lengthy documents, often in different languages. The extracted data will then feed into decision trees, reducing manual tasks and accelerating the approval process.
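The pattern described above, pulling structured fields out of unstructured documents and routing them through deterministic decision logic, can be illustrated with a minimal Python sketch. The field names, thresholds and the stubbed extraction step below are hypothetical; in a real system the extraction would be handled by an LLM or document-processing service rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class MerchantProfile:
    # Hypothetical fields an extraction step might pull from onboarding documents.
    country: str
    years_trading: float
    monthly_volume_eur: float
    industry: str

def extract_profile(document_text: str) -> MerchantProfile:
    """Stub for the extraction step. In practice this would call an LLM or
    document-processing service and validate its output; values here are
    hard-coded purely so the example runs."""
    return MerchantProfile(country="DE", years_trading=3.5,
                           monthly_volume_eur=120_000, industry="retail")

def onboarding_decision(profile: MerchantProfile) -> str:
    """A small, human-readable decision tree over the extracted fields.
    Thresholds are illustrative, not taken from the webinar."""
    if profile.industry in {"gambling", "crypto"}:
        return "manual_review"      # higher-risk sectors always go to a human
    if profile.years_trading < 1:
        return "manual_review"      # limited trading history
    if profile.monthly_volume_eur > 500_000:
        return "manual_review"      # large volumes need extra checks
    return "auto_approve"

if __name__ == "__main__":
    profile = extract_profile("...merchant onboarding documents...")
    print(onboarding_decision(profile))  # prints: auto_approve
```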
Schroders, meanwhile, has deployed large language models (LLMs) to translate materials across multiple languages, cutting translation time and saving approximately £600,000 within just four months. With its net promoter score also rising, the firm has realised significant benefits from just a few tools that require little manpower.
In sales scenarios, Schroders’ AI agents analysed meeting notes to uncover client concerns, enabling salespeople to proactively tailor their proposals with precision. This made them better informed and more strategic in their approach, helping them build trust, deepen engagement, win more business and position themselves as reliable long-term partners to their clients.
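As a rough illustration of this meeting-notes use case, the sketch below passes notes to a language model with a prompt asking for client concerns in structured form. The prompt wording and the stubbed call_llm function are assumptions made for the example; the webinar did not describe Schroders’ implementation at this level of detail.

```python
import json

PROMPT_TEMPLATE = """You are assisting a sales team.
Read the meeting notes below and list the client's main concerns
as a JSON array of short strings.

Notes:
{notes}
"""

def call_llm(prompt: str) -> str:
    """Stub standing in for a call to whichever LLM service is in use.
    Returns a canned response so the example runs end to end."""
    return '["fee transparency", "ESG reporting coverage"]'

def extract_concerns(notes: str) -> list[str]:
    """Ask the model for the client's concerns and parse them defensively."""
    response = call_llm(PROMPT_TEMPLATE.format(notes=notes))
    try:
        concerns = json.loads(response)
    except json.JSONDecodeError:
        concerns = []  # fall back gracefully if the model returns free text
    return [c.strip() for c in concerns if isinstance(c, str)]

if __name__ == "__main__":
    notes = ("Client asked twice about fee breakdowns and whether our ESG data "
             "covers emerging markets.")
    print(extract_concerns(notes))  # prints the extracted concerns
```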
3. Trust, governance and human oversight are non-negotiable
Over the past decade, deep learning and ‘black box’ models (whose internal workings aren’t easily interpretable) have raised concerns among financial regulators.
If you are deploying AI in a way that affects your customers, such as in loan approvals or fraud investigations, you need to ensure the outputs are explainable and consistent. But AI should support, not replace, human expertise.
As AI is still in its infancy, it must be rigorously tested internally before any client exposure. Maintaining trust involves human oversight, bias checks and strong data governance frameworks. Without clear boundaries or feedback loops, models can’t learn from mistakes, making human intervention essential.
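One common way to operationalise this kind of oversight is to put a confidence-and-impact gate in front of any customer-facing decision. The sketch below is a generic illustration of that pattern, not a description of either firm’s controls; the thresholds and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelDecision:
    outcome: str        # e.g. "approve" / "decline"
    confidence: float   # the model's own confidence score, 0..1
    explanation: str    # human-readable reason codes

def route_decision(decision: ModelDecision, customer_impacting: bool) -> str:
    """Send unexplained or low-confidence decisions to a human reviewer."""
    if not decision.explanation:
        return "human_review"   # unexplained outputs never go straight to customers
    if customer_impacting and decision.confidence < 0.9:
        return "human_review"   # high-impact, low-confidence calls need a person
    return "automated"

print(route_decision(ModelDecision("decline", 0.72, "insufficient credit history"), True))
# prints: human_review
```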
4. Empower teams through education and culture
Schroders created department-specific personas to identify how AI could benefit each team, then developed personalised training materials to educate employees and demystify AI. Early adopters became internal champions, sharing success stories that have helped drive company-wide adoption.
Even in companies that don’t formally invest in AI, employees will still use tools to boost productivity, whether summarising meetings or writing code. It’s essential to provide encouragement and guidance on safe usage, prompt engineering and data privacy, balancing empowerment with control, without creating an environment of fear or ambiguity.
Advice for financial services institutions moving from experimentation to scale
The consensus: avoid the hype. Know what you’re solving for and why; otherwise you’ll be spinning up proofs of concept forever. Start with the problem, focus on the solution, and don’t assume AI is always the answer.
You can watch the full recording of the webinar here.
Join us for part 2 in the series: responsible AI in financial services
In our next webinar on 24 June, we'll be exploring responsible AI and considerations around customer data, privacy and ethical AI use for financial services.
Register to hear insights from experts including Sabine Scheepstra, Head of Digital Transformation at Rabobank, and Sidrah Hassan, AI Ethicist at AND Digital.