AI-Powered Financial Firms: Navigating the Future

This November, the UK has been at the forefront of Artificial Intelligence (“AI”) discussions, with Prime Minister Rishi Sunak hosting the world’s first Global AI Safety Summit. For financial firms in the UK, however, it can be difficult to navigate the risks, benefits and potential regulation surrounding AI. With so many embracing AI, including the Financial Conduct Authority (“FCA”), it is vital for firms to remain knowledgeable about AI and how they may safely implement this technology across their business.

The UK financial sector has seen a rapid integration of AI into its operations, extending well beyond chatbots. Many AI companies are designing tools to streamline daily operations within financial services.

Three types of AI technology are transforming the financial industry:
  • Generative AI (GenAI) is a type of AI that generates content in response to prompts. OpenAI’s ChatGPT is an example of this.
  • Large Language Models (LLMs) are the technology behind generative AI. They are trained on large volumes of data, including the prompts they receive, which means they are continually evolving.
  • Frontier AI is intelligent technology that can outperform both humans and other AI models.

These are being used across all areas of business. This year, Citigroup used GenAI to analyse 1089 pages of new US banking regulation, word by word. An estimated 87% of financial firms in the UK are using AI to assist with financial crime. The company Black Forest, for example, uses AI’s most powerful feature, analysing large amounts of data at exceptional speed, to analyse a firm’s transactions and flag potential cases of fraud. For FX traders, Alpaca Forecast can generate market price predictions based on data from market trends, volatility and liquidity. AI models can also help investors integrate Environmental, Social, and Governance (ESG) factors into their portfolios. GreenWatch identifies the authenticity of green claims, enabling investors to make sustainable finance decisions and tackle greenwashing.
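
The core idea behind AI-assisted fraud screening, spotting transactions that deviate sharply from an account’s normal pattern, can be sketched in a few lines. This is a deliberately minimal illustration using a statistical outlier test, not the method of any vendor named above; the transaction amounts and the deviation threshold are invented for the example.

```python
# Minimal sketch of transaction screening: flag transactions whose amount
# deviates sharply from the historical pattern of the series.
# The threshold and all amounts below are illustrative assumptions.
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` standard
    deviations away from the mean of the series."""
    if len(amounts) < 2:
        return []  # not enough history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

history = [120.0, 95.0, 110.0, 105.0, 98.0, 102.0, 9500.0]
print(flag_suspicious(history))  # → [6]
```

Production systems use far richer features (merchant, geography, timing) and learned models rather than a single z-score, but the principle, scoring each transaction against an expected pattern at machine speed, is the same.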

How the FCA is responding to the rise of AI in the financial sector

The FCA is all too aware of the AI phenomenon within the financial services industry. In October, the FCA released the FS2/23 feedback statement on Artificial Intelligence and Machine Learning, accompanied by a succession of speeches outlining its perspective. Its message was that regulation is necessary, but should not impede innovation: a position endorsed by the UK government. In March of this year, the government published a white paper, “A pro-innovation approach to AI regulation”. The paper was clear that financial firms should expect sector-specific guidance from the FCA on the use of AI, instead of facing broad restrictions. FCA guidance will focus on the safe implementation of AI and consumer protection, some of which may soon become law. An AI bill, which focuses on regulating the use of AI in the workplace, is currently in the early stages of development in Parliament.

Risks facing AI-powered firms

AI tools can bring many benefits to financial firms, such as improving efficiency, providing personalised advice and supporting sustainable finance. However, these tools also need to be integrated with diligence. Firms that implement AI within their activities must carefully assess and manage the risks associated with these new technologies, just as they would for a new product or service. These include cybersecurity, operational, and reputational risks, which have already led to AI failures across different countries and sectors. Earlier this year, Italy temporarily banned ChatGPT as a result of data breaches. In Australia, academics called for an investigation into KPMG after an AI tool falsely accused the firm of misconduct. Third-party AI risks are also on the rise. An estimated 55% of all AI failures originate with third-party providers, leaving firms open to unexpected risks. These AI failures illustrate the need for proper oversight and governance of AI tools, particularly in the financial sector, where the stakes are high.

The FCA has identified four critical areas of focus

Security – firms using data-dependent AI models are vulnerable to data breaches, cyberattacks and unauthorised access.

Control – currently, neither the companies designing these AI tools nor the firms using them are subject to strict regulation. This increases the risk of inadequate risk assessments and governance over these AI systems.

Consumer Protection – the way in which firms use AI may not be compatible with Consumer Duty. This ranges from undisclosed use of AI with clients, to compromising confidentiality.

Outsourcing AI – the FCA classifies AI providers as Critical Third Parties to financial firms. It recognises the risk of firms becoming overdependent on, and overconfident in, their outsourced AI tools.

How firms can mitigate AI-driven risks

There are two areas for firms to watch: the FCA’s own use of AI, and the EU AI Act, the world’s first AI-dedicated legislation, expected to come into force in early 2024.

  1. The FCA has addressed security risks by using synthetic data in AI models. It takes real fraud and money laundering cases, creates synthetic data from them, and uses the new data set in fraud-detection AI models. These models can analyse synthetic data and learn to identify real-life patterns. The FCA’s risk-based method for adopting AI provides insight into how firms can safeguard against AI-driven data threats.
  2. The EU AI Act aims to create a common regulatory framework for the development and use of AI. There will be different rules for different levels of risk, to ensure that AI systems used in the EU are safe and transparent. Financial firms in the UK may have escaped scrutiny under the Act, but it is likely that future UK legislation will incorporate similar initiatives. For example, once the Act comes into force, any organisation operating within the EU will have to declare when content is produced by generative AI models such as ChatGPT. In light of the FCA’s Consumer Duty and increasing risks to market integrity, UK firms should prepare for similar regulatory expectations by implementing AI disclosures. A proactive approach will prepare firms for the arrival of UK regulation and mitigate the diverse risks associated with AI.
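
The synthetic-data workflow described in point 1 can be illustrated in miniature: perturb a small set of seed cases to build a larger training set that preserves the pattern without exposing real records, then fit a model on the synthetic set. Everything below is an invented toy example, a single-feature classifier with made-up amounts, not the FCA’s actual pipeline.

```python
# Illustrative sketch of the synthetic-data workflow: jitter a handful of
# seed cases into a larger synthetic training set, then "train" a trivial
# one-feature classifier on it. All numbers are invented for the example.
import random

random.seed(42)  # deterministic output for the sketch

def synthesise(seed_cases, n_copies=50, noise=0.05):
    """Create synthetic records by jittering each seed amount by up to ±noise."""
    synthetic = []
    for amount, is_fraud in seed_cases:
        for _ in range(n_copies):
            jitter = 1 + random.uniform(-noise, noise)
            synthetic.append((amount * jitter, is_fraud))
    return synthetic

# Seed cases loosely modelled on confirmed outcomes (1 = fraud, 0 = legitimate).
seeds = [(9800.0, 1), (12500.0, 1), (120.0, 0), (340.0, 0)]
data = synthesise(seeds)

# "Train": place a decision threshold halfway between the two class means.
fraud_mean = sum(a for a, y in data if y) / sum(1 for _, y in data if y)
legit_mean = sum(a for a, y in data if not y) / sum(1 for _, y in data if not y)
threshold = (fraud_mean + legit_mean) / 2

def predict(amount):
    return 1 if amount > threshold else 0

print(predict(11000.0), predict(200.0))  # → 1 0
```

The point of the technique is that the model never sees the original fraud records, only statistically similar stand-ins, which is why it mitigates the data-breach exposure flagged under “Security” above.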

Artificial Intelligence has the potential to transform the way retail and institutional investors manage their finances, and is quickly becoming pivotal in shaping the global economy. Used correctly, it can improve efficiency, revenue, and customer satisfaction, all at relatively low cost. However, AI also poses many challenges and risks. From cybersecurity to consumer protection, firms that overlook AI-driven failures may face serious losses. The key is to build a secure digital infrastructure. This is a pivotal point in time, where a proactive, risk-based approach to AI-powered initiatives can be the difference between scandal and innovation, between failure and success.

How Complyport Can Help

At Complyport, our team can assist you by reviewing how you use AI tools, ensuring they meet the Consumer Duty and SM&CR standards. We can also conduct risk assessments to help you prepare for upcoming guidance and regulation. We will provide tailored support to help you build a strong and secure digital foundation for your AI-powered operations.

Get In Touch Now

Contact us today at jan.hagen@complyport.co.uk to learn how we can help you navigate AI compliance and stay ahead of the curve.

About Complyport

Complyport is a market-leading consulting firm that has supported the UK financial services industry for over 22 years. We specialise in providing Governance, Risk and Compliance services that help the regulated financial services industry raise standards and thrive.

Complyport can assist with the preparation of a gap analysis and an impact assessment of a firm’s capital adequacy and risk management framework under the applicable regulatory framework.

We specialise in supporting the UK financial services industry with compliance guidance, advice and best practice.

  • Financial Crime support and Forensics
  • Compliance managed services and resourcing compliance personnel
  • Skilled Person Reviews and Regulatory Investigation
  • Prudential support, IFPR, ICARA and financial resilience advice
  • Consumer Duty implementation advice
  • Operational resilience & Cybersecurity advice
  • Financial Promotions guidance, support, and management software solutions
  • CASS advice and protection of client assets
  • Comprehensive compliance workflow management software

Contact Thomas Salmon in our Regulatory Solutions team via email at: jan.hagen@complyport.co.uk to book a free consultation.

Contact Us for Assistance

Please fill in our free consultation form and a member of our team will get in contact with you.