It’s hard to hear a news report these days without Artificial Intelligence (AI) making an appearance. Thanks to the unprecedented rise of ChatGPT, the use of AI is being debated around every boardroom table. And financial services are no exception.

However, before we get into that, let’s take a couple of reality checks.

Firstly, the current ascendancy of AI is far from unprecedented. The peaks – and subsequent troughs – of AI’s potential form a cycle that has been repeating since the 1960s. So much so that there’s even a term for a trough: an AI winter. More on that later.

Secondly, while AI is likely to have a profound effect on many of the financial sector’s processes, there are still areas where it will struggle to fully replace human expertise.

AI can revolutionise financial services

But hold that thought. Let’s start by looking at how AI can revolutionise the way financial institutions work. In fact, in some cases, it is already doing so.

  • Customer service and personalisation: AI algorithms are already transforming customer service in the financial sector, powering chatbots and virtual assistants that provide round-the-clock support, respond to customer queries, assist with basic transactions and offer personalised recommendations – all without any human intervention.
  • Trading and investment decision-making: Where AI really excels is in number crunching. Artificial intelligence can analyse vast amounts of market data, news feeds and historical trends to identify investment opportunities and make real-time trading decisions.
  • Risk assessment and management: Yes, more number crunching. AI can help in assessing and managing risks by analysing historical data, market trends and external factors. Its algorithms can provide predictive models, helping institutions to make informed decisions about investment strategies, loan approvals and insurance underwriting.
  • Fraud detection and prevention: AI real-time analytics can detect patterns and anomalies which may indicate fraudulent activities.
  • Compliance: AI can automate compliance processes, monitoring transactions for suspicious activity and helping to ensure adherence to regulations.

AI algorithms can also be used to enhance processes like credit scoring, portfolio management, anti-money laundering, cybersecurity and data analysis for business intelligence.
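To make the fraud-detection point concrete, here is a toy sketch of anomaly detection on transaction amounts. It is a hypothetical illustration, not a production technique: it flags amounts whose robust z-score (built from the median and the median absolute deviation, which resist distortion by the outliers themselves) exceeds a threshold. Real fraud systems combine many learned features rather than a single statistic.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts whose robust z-score exceeds the threshold.

    Uses the median and the median absolute deviation (MAD) so that
    a single huge transaction cannot mask itself by inflating the
    mean and standard deviation. Threshold 3.5 is a common default.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    # 0.6745 rescales the MAD so the score is comparable to a
    # standard z-score; if mad is 0 there is no spread to measure.
    return [a for a in amounts
            if mad and 0.6745 * abs(a - med) / mad > threshold]

history = [42.0, 55.0, 48.0, 51.0, 47.0, 53.0, 49.0, 50.0, 46.0, 5000.0]
print(flag_anomalies(history))  # → [5000.0]
```

Notably, a naive mean-and-standard-deviation version would miss the 5000.0 transaction here, because the outlier inflates the standard deviation enough to hide itself – one reason robust statistics are a common starting point for this kind of screening.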

The risks to financial services of adopting AI

However, embracing this technology is not without its risks. There are possible ethical, privacy and systemic issues which come with it. So what are they?

  • One of the key ethical concerns is the possibility of biased decision-making. AI algorithms learn from historical data, which could reflect and perpetuate existing biases in the system, resulting in lending, insurance or recruitment which is discriminatory.
  • Integrating AI technology into more and more financial processes makes financial institutions more vulnerable to cyberattacks by creating new avenues which allow bad actors to exploit vulnerabilities in algorithms, data or infrastructure. This danger has not been lost on Klaus Schwab, the founder of the World Economic Forum, who has warned that: “A lack of cybersecurity has become a clear and immediate danger to our society worldwide.”
  • As AI algorithms become more complex and less transparent, it also becomes more difficult to understand their decision-making processes and hold them to account.
  • The interconnectedness of financial markets makes systemic risk a potential danger, with the widespread use of AI potentially amplifying risk. For example, rapid automated trading algorithms could contribute to market volatility and create flash crash events.

Of course, these risks are theoretical and should not prevent the sector from fully exploiting AI’s power. But the risks need to be addressed with new, dynamic regulatory frameworks which keep pace with technological developments. New ethical guidelines also need to be established to ensure the responsible and fair use of AI, alongside robust data protection and privacy measures to safeguard customer information.

A track record for failing to deliver

AI’s weakest link could be its unenviable track record of failing to deliver on its promises.

All of the current hype around AI may well be the precursor to yet another AI winter – of which there have so far been two, from 1974 to 1980 and from 1987 to 1994.

In both cases, the wind chill that froze investment in AI development was due to its lack of commercial viability. Just like today, the promises generated by neural networks created a large amount of interest and raised expectations. Businesses and organisations invested significant sums based on these expectations. However, gradually, AI-based products failed to deliver a significant return on investment and organisations withdrew funding and stalled further research.

Many might counter that today’s systems are advancing much faster and with more success than those of the past. But how far the technology can go remains an open question. And plenty of experts still doubt AI’s ability to routinely pass Alan Turing’s famous test, in which a machine exhibits intelligent behaviour indistinguishable from that of a human.

We will still need humans

The reality is that AI is still a long way from replacing humans in financial services and there are still crucial tasks where its abilities fall short of human expertise.

But don’t take my word for it. Who better to ask about its limitations than AI itself? I asked ChatGPT to admit its own shortcomings when used in financial services. So here they are, in its own words:

  • Complex judgement calls: “Although I can analyse vast amounts of data, when it comes to making judgement calls which require subjective analysis, contextual understanding, nuanced interpretation or ethical considerations, human judgement is still more valuable.”
  • Relationship building: “I can automate some customer interactions, but I lack the emotional intelligence, interpersonal and trust-building skills that human advisors possess and will struggle to authentically replicate a genuinely personal touch.”
  • Creative problem solving: “Because I primarily learn from existing data and patterns, true creativity, originality and innovative problem-solving are beyond my capabilities – so coming up with novel solutions, identifying new market opportunities or adapting to unforeseen challenges still needs humans. Furthermore, financial markets can be volatile, and unexpected events can disrupt patterns and introduce new risks. Human adaptability, critical thinking and the ability to respond to novel situations will remain crucial in such circumstances.”
  • Regulatory decision-making: “While I can assist in automating compliance checks and flagging potential issues, I cannot fully replace human judgement when it comes to navigating complex regulatory frameworks, interpreting ambiguous regulations or making ethical decisions which require a deep understanding of legal nuances.”

A transformative force

There is no doubt that AI is emerging as a transformative force in the UK finance industry, bringing opportunities, efficiencies and cost savings.

However, despite the hype, it is by no means the finished article, and its adoption by financial services brings risks that demand both careful consideration and the creation of new regulations.

That said, if this wave of AI technologies can realise its potential, then responsible and strategic adoption will unlock significant benefits for financial institutions and their customers.

By Ben Hollom

Read more blogs and news from bClear here: