
FinTech 2025: Head-to-Head




How are AI models reshaping consumer lending decisions today? 

“AI is revolutionising the consumer lending landscape, embedding advanced capabilities throughout the lending lifecycle. From predictive analytics that assess creditworthiness using open-banking data, to real-time fraud detection and process automation, AI models expedite approvals and reduce human error. These models can uncover complex, nonlinear relationships in data, far beyond what traditional statistical models like logistic regression can detect.

“At Slalom, we’re witnessing the transformational impact of Generative AI, which introduces inductive reasoning, enabling lenders to transition from prediction into problem-solving and personalisation. One of the more exciting frontiers is the use of AI to tap into previously ‘invisible’ borrower segments: individuals with limited or no credit history.

“While companies like Petal and Upstart have already begun to analyse cash flow, education history or employment trends to assess creditworthiness, we’re also seeing the emergence of even more nuanced models. For instance, some lenders are now experimenting with smartphone metadata, like app usage or even typing speed, to augment loan risk profiles in unbanked regions.”
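The contrast drawn here between logistic regression and more flexible models can be made concrete. The sketch below is illustrative only: it uses synthetic, hypothetical “alternative data” features and off-the-shelf scikit-learn models rather than any lender’s actual pipeline, and simply shows how a gradient-boosted model can pick up a nonlinear interaction (cash-flow volatility mattering most for thin-file borrowers) that a plain logistic regression misses.

```python
# Illustrative sketch only: synthetic data, hypothetical features, default models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
# Stand-ins for open-banking style signals: cash-flow volatility and length of payment history.
cashflow_volatility = rng.gamma(2.0, 0.2, n)
history_months = rng.integers(0, 60, n)
X = np.column_stack([cashflow_volatility, history_months])

# Default risk depends on a nonlinear interaction: volatility matters far more
# for thin-file borrowers (fewer than 12 months of history).
logit = -2 + 3 * cashflow_volatility * (history_months < 12) - 0.02 * history_months
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
boosted_model = GradientBoostingClassifier().fit(X_tr, y_tr)

print("Logistic regression AUC:", roc_auc_score(y_te, linear_model.predict_proba(X_te)[:, 1]))
print("Gradient boosting AUC:  ", roc_auc_score(y_te, boosted_model.predict_proba(X_te)[:, 1]))
```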

What risks do AI-driven financial services pose to underserved populations?

“AI holds great promise for expanding access to financial services, but for underserved populations, it can just as easily reinforce exclusion if not deployed responsibly. These communities are often underrepresented in traditional data sources, meaning AI systems may lack the visibility or the necessary context to assess them fairly. This raises the risk of algorithmic bias, where AI unintentionally replicates systemic inequalities from past financial decisions.

“Another critical concern is explainability. Many AI lending models operate as ‘black boxes’, making it difficult for individuals to understand or contest certain decisions. This is an issue that disproportionately affects those already facing barriers to financial inclusion. For someone denied a loan, not knowing why or how to improve their eligibility only deepens the divide.

“This is where the ‘human-in-the-loop’ (HITL) approach becomes essential. By embedding human oversight into AI decision-making workflows, financial institutions can detect and correct biases early, provide clearer justifications for outcomes and offer recourse to customers. HITL is particularly important in areas like credit underwriting, where decisions have long-term financial consequences.

“Crucially, HITL is more about equity than about compliance alone. It ensures AI augments rather than replaces empathy, contextual judgement and the cultural awareness that underserved customers often require. The most responsible institutions are pairing HITL with fairness auditing, diverse training datasets and inclusive product design to safeguard customers against harm and promote long-term trust.”
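The ‘human-in-the-loop’ pattern described above can be sketched very simply. The example below is hypothetical: the thresholds, field names and reason strings are illustrative rather than any institution’s policy, but it captures the basic idea of auto-approving only confident low-risk cases and routing borderline or adverse outcomes to a human reviewer together with plain-language reasons the customer can act on.

```python
# Hypothetical HITL routing sketch: thresholds and labels are illustrative only.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # "approve" or a human-review route
    score: float          # model's estimated probability of default
    reasons: list[str]    # plain-language factors to share with the customer

APPROVE_BELOW = 0.05      # auto-approve only very low estimated risk
REVIEW_BELOW = 0.20       # borderline scores go to a human underwriter

def route_application(prob_default: float, top_factors: list[str]) -> Decision:
    """Route a scored application through the human-in-the-loop workflow."""
    if prob_default < APPROVE_BELOW:
        return Decision("approve", prob_default, top_factors)
    if prob_default < REVIEW_BELOW:
        # Borderline: a human reviewer can weigh context the model may lack.
        return Decision("refer_to_human", prob_default, top_factors)
    # Adverse outcomes are always reviewed and explained, never silently declined.
    return Decision("refer_to_human_decline_review", prob_default, top_factors)

print(route_application(0.03, ["stable rental payments", "low cash-flow volatility"]))
print(route_application(0.12, ["short credit history"]))
```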

Is human oversight still necessary in automated financial services?

“Absolutely, human oversight remains indispensable. AI models are only as good as the data they’re trained on; historical bias or underrepresentation can lead to unintended discriminatory outcomes. Without human oversight, these issues can go undetected, which is why many institutions are adopting a ‘human-in-the-loop’ approach to ensure that subject matter experts are involved in the training, testing and interpreting of AI outputs. This oversight helps flag potential bias, validate decision quality, and maintain accountability.

“Beyond compliance, humans bring something that AI itself cannot replicate, which is experience, intuition and context. Decision-making, especially in financial services, involves navigating ambiguity, assessing intent and balancing competing priorities, which are areas where humans must remain firmly in the loop. In high-stakes domains such as lending, fraud detection or investment advice, it is the lived experience of professionals that provides the critical lens for nuance, empathy and most importantly, fairness.

“Furthermore, factors such as flawed inputs, unexpected market shifts or adversarial attacks can lead to AI system failures. Human oversight, in such instances, acts as a safeguard, helping financial institutions respond quickly, recover effectively and, importantly, maintain customer trust. When something goes wrong, customers want to be assured that a qualified person is monitoring outcomes, correcting errors and ensuring decisions are fair and explainable. Such a human presence reinforces transparency and reassures customers that their financial wellbeing is not solely in the hands of an algorithm.”
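A common technical safeguard behind this kind of oversight is distribution monitoring: comparing live model inputs or scores against the population the model was validated on, and escalating to a human team when they diverge. The sketch below uses the population stability index (PSI) on synthetic score samples; the 0.2 alert threshold is a widely used rule of thumb, not a regulatory figure.

```python
# Illustrative drift check on synthetic data using the population stability index (PSI).
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two score samples, using quantile bins taken from the expected sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)   # avoid log of zero
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 8, 10_000)   # score distribution at validation time
live_scores = rng.beta(2, 5, 10_000)       # a shifted live population

psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:
    print("Significant drift: escalate to the model-risk and human oversight teams.")
```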

How are regulators approaching AI in credit scoring and underwriting?

“Regulators worldwide are taking a cautious, but structured approach to how artificial intelligence is used in credit scoring and underwriting, balancing innovation with the need to protect consumers from bias and opacity. At Slalom, we’re seeing a global shift toward embedding fairness, explainability, and accountability into AI-driven credit decisions. The goal is to harness AI’s potential to expand financial inclusion without compromising trust.

“The EU’s AI Act leads the charge with a sweeping framework that classifies credit scoring as a ‘high-risk’ application. Lenders must meet strict requirements around transparency, data quality, human oversight, and system robustness. Even traditional statistical models may fall under the Act’s broad AI definition, regardless of whether they use deep learning or simpler automation tools, prompting widespread adaptation.

“In the US, agencies like the CFPB and Federal Reserve are focused on fair lending, model governance, and ensuring consumers receive clear explanations in human terms, even when decisions stem from complex algorithms. Compliance with existing laws like the Equal Credit Opportunity Act remains non-negotiable.

“The UK is taking a more flexible, principles-based approach. Regulators are working closely with industry to ensure AI models are transparent and fair, while encouraging innovation under existing risk frameworks.

“Elsewhere, the UAE Central Bank has introduced detailed AI governance and credit risk regulations, mandating explainability and ethical use of AI in underwriting, which it terms a high-impact area. And the Monetary Authority of Singapore (MAS) is setting global benchmarks through its FEAT principles and AI Model Risk Management guidelines, requiring rigorous validation and fairness metrics for high-impact models.

“Across jurisdictions, the message is clear: AI in credit must be responsible by design. Financial institutions that embrace this shift will be better positioned to innovate with integrity.”

What innovations are emerging from FinTech startups in AI personalisation?

“AI-driven personalisation has moved from aspiration to expectation. FinTechs have redefined the benchmark, embedding AI into the very fabric of their platforms to deliver hyper-personalised, real-time financial experiences. This shift has forced traditional banks and investment firms to rethink their operating models, accelerate digital transformation, and compete on customer intimacy, not just scale.

“Unlike incumbents, many FinTechs are AI-native by design. Personalisation isn’t a feature bolted on later—it’s a foundational capability. From onboarding to credit underwriting, these firms use AI to dynamically tailor user journeys based on real-time behaviour, financial goals, and contextual signals.

“Apps like Revolut now act as intelligent financial coaches, offering adaptive saving plans and investment nudges that evolve with user activity. In lending, Petal leverages alternative data and machine learning to personalise and accelerate credit decisions. Affirm uses AI to instantly tailor instalment loan offers based on purchasing behaviour and repayment history—expanding access while optimising risk. Meanwhile, Starling Bank stands out for its use of AI to deliver real-time spending insights, automate budgeting, and flag unusual transactions.

“Others are deploying AI-powered contact centres that analyse customer sentiment and intent in real time—enhancing satisfaction and enabling more personalised service at scale.

“In Wealth and Investment management, AI is reshaping how firms deliver value. Leading asset managers are using predictive analytics and behavioural modelling to tailor portfolio strategies, anticipate client needs, and personalise communications.

“Firms like BlackRock and Vanguard are investing in modular AI systems to support everything from portfolio optimisation to multilingual client engagement—blending scale with personalisation. This FinTech-led innovation wave has raised the bar. Financial institutions are now under pressure to match the agility, intelligence, and customer-centricity of their digital-native challengers and neobanks, or risk falling behind in a market where personalisation is no longer optional, but essential.”

Will financial institutions evolve into hybrid tech-finance entities by 2025?

“In many ways, they already are. The convergence of finance and technology is no longer a future trend, but rather a present reality. At Slalom, we see banks and insurers embedding technology into their DNA, not just to improve operations but to transform how they serve customers, launch products and respond to market dynamics. Additionally, banks are increasingly hiring software engineers and data scientists at scale, launching digital-only subsidiaries and adopting agile product development methods long associated with Big Tech.

“This evolution is being accelerated by AI. From AI-powered underwriting and robo-advisory platforms to personalised financial coaching and real-time fraud detection, technology is becoming central to how value is delivered in financial services. Cloud-native infrastructures, open banking APIs and machine learning models are foundational, rather than just being nice-to-haves.

“However, this transformation is more about cultural shifts than tool adoption. It requires rethinking governance, upskilling teams and building internal capabilities that allow financial firms to act like tech companies, with the regulatory discipline of financial institutions. Crucially, it also demands a forward-looking mindset: the ability to anticipate disruption and build adaptability into every layer of the business. In a market defined by constant technological, regulatory and customer-expectation shifts, the most successful institutions will be those that go beyond adopting digital tools and continuously evolve to meet what comes next.”

How can trust be maintained when AI becomes the front-facing “advisor”?

“Robo-advisors and AI-driven apps like Betterment and Cleo have already begun democratising financial advice, using machine learning to provide savings recommendations, suggest investment strategies and monitor spending patterns. Such tools deliver insights at a fraction of the cost of human advisers, with some requiring no minimum balance and charging less than 0.3% in fees. This opens investment planning to millions who were previously priced out.

“Yet cost-efficiency alone is not enough for trust. Trust in AI as a financial advisor hinges on three pillars: transparency, privacy and oversight. Consumers must understand how AI reaches its conclusions, particularly when it comes to lending or portfolio management. Explainable AI, combined with clear data-use policies and privacy-by-design frameworks, will be critical to ensure that consumer trust and satisfaction are not only maintained but elevated. Financial services firms must also ensure that their AI tools operate in a responsible manner, not in a vacuum. Hybrid models that pair AI efficiency with human judgment are gaining ground, offering reassurance in more complex or nuanced financial decisions.

“Ultimately, AI not only offers advice but shapes behaviour, so it is critical that its role is governed by fairness, accountability and clarity. With regulatory support, such as the FCA’s AI innovation initiatives, the sector is definitely moving in the right direction. Done right, AI has the potential to transform financial guidance from a luxury into a baseline service, which builds confidence, improves financial literacy and fosters long-term financial wellbeing.”
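On the explainability point, one simple way a model’s arithmetic can be turned into plain-language reasons is to rank each feature’s contribution to an individual decision. The sketch below is a simplified, hypothetical illustration for a linear credit-risk model trained on synthetic data: a contribution is the coefficient times the applicant’s deviation from the average applicant, and the largest risk-increasing contributions become the stated factors. Real adverse-action reasons are subject to specific regulatory requirements and typically rely on dedicated explainability tooling.

```python
# Hypothetical illustration: per-feature contributions of a logistic regression,
# used to surface the main adverse factors for an individual applicant.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["debt_to_income", "missed_payments_12m", "months_of_history"]

rng = np.random.default_rng(7)
X = np.column_stack([
    rng.uniform(0.0, 0.8, 2000),      # debt-to-income ratio
    rng.poisson(0.5, 2000),           # missed payments in the last 12 months
    rng.integers(0, 120, 2000),       # months of credit history
])
true_logit = -3 + 4 * X[:, 0] + 1.2 * X[:, 1] - 0.01 * X[:, 2]
y = (rng.random(2000) < 1 / (1 + np.exp(-true_logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = X.mean(axis=0)             # the "average applicant"

def adverse_factors(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Return the features pushing this applicant's estimated risk furthest above average."""
    contributions = model.coef_[0] * (applicant - baseline)
    ranked = np.argsort(contributions)[::-1][:top_n]
    return [feature_names[i] for i in ranked if contributions[i] > 0]

applicant = np.array([0.7, 2, 6])     # high DTI, two missed payments, six months of history
print("Estimated probability of default:", round(model.predict_proba([applicant])[0, 1], 2))
print("Main adverse factors:", adverse_factors(applicant))
```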

Which parts of the financial value chain are still resistant to AI disruption?

“Capital markets are among the most AI-literate segments of finance, but adoption is patchy. While firms are experimenting with generative AI, autonomous agents and advanced analytics, the highly regulated nature of these markets and the probabilistic nature of AI introduce caution. Financial services have been built on deterministic systems, so AI’s unpredictability, the risk of hallucinations and barriers to explainability raise justifiable regulatory and compliance questions.

“Fraud detection has seen legacy AI use, but adoption of advanced models is inconsistent. To truly unlock AI’s potential, a broader organisational shift is required. This means equipping teams with the right talent and re-engineering processes rather than just layering AI on top of existing workflows. This also calls for a closer integration between technology and HR functions, ensuring that people are prepared to work alongside AI as co-pilots and not as competitors.

“The financial services industry is on the cusp of moving from siloed pilots to full-scale operational change. However, unlocking value requires overcoming cultural inertia, aligning AI capabilities with business outcomes, and navigating a rapidly evolving regulatory landscape. With the right strategy, resistant segments within financial institutions can become the next frontier for innovation, rather than the last holdouts.

“At Slalom, we recognise that the journey towards AI-driven transformation in financial services is not only about technology but also about redefining business models, processes, and cultures. With thoughtful implementation and strategic alignment, financial institutions can harness AI’s potential to create more equitable, efficient, and personalised experiences for all stakeholders. We’re excited to help guide this journey, ensuring that innovation serves as a force for positive change in the industry.”

Andrew Wright, Senior Director at Slalom.

Andrew leads Slalom’s Banking, Capital Markets and Asset Management portfolio in the UK and is Client Partner and Accountable Executive on key accounts. He has extensive experience in international banking (including roles in front office sales, risk, product development, strategy and marketing) and financial services consulting. He specialises in customer strategy and digital change and has a passion for partnering with clients and their technology providers as they explore and execute on new growth initiatives to drive revenue.


