A regular briefing for the alternative asset management industry.
Self-evidently, artificial intelligence (AI) and machine learning (ML) offer new opportunities for alternative asset managers – many firms already use them to help with a range of key tasks, including deal analytics, due diligence and portfolio company monitoring. Proprietary technologies sit alongside ready-made tools such as Microsoft Copilot to increase efficiency and effectiveness. But deployment of AI also brings challenges – and, with them, the attention of regulators. In the UK, the financial regulators have fired the starting gun, setting out a strategic approach to managing the integration of AI into financial services.
The UK's Financial Conduct Authority (FCA) issued an AI update in April which explores the evolution of AI in financial markets and sets out the FCA's approach. (Our detailed note on the FCA paper is available here.)
In essence, the FCA is not proposing AI-specific rules (at least, not for now), but emphasises that a range of existing FCA rules is potentially engaged whenever a firm uses and/or procures AI. There may not be AI-specific FCA regulation, but there is now FCA regulation of AI.
The FCA's report notes the rise of AI technologies, as increasingly sophisticated algorithmic models permeate all sectors of the economy (including law). It welcomes AI's ability to enhance decision-making processes and generate significant cost savings. The FCA suggests AI and ML could fundamentally reshape finance, turning data into a significant asset and driving a shift towards more personalised services.
However, the FCA recognises AI as a powerful double-edged sword. Risks, primarily relating to opaque decision-making processes and data bias, could undermine trust in AI systems and raise ethical questions. AI's heavy reliance on data also heightens security risks, with potential breaches carrying heavy consequences in financial markets. To counter these risks, the FCA is explicit that firms need to maintain appropriate governance frameworks and controls.
Moreover, the FCA notes the need for human understanding and oversight of AI models, and for appropriate transparency. While such models can outperform more traditional ones, the opacity of their decision-making creates a trade-off between performance and explainability.
Many firms use third-party vendors for AI and machine learning applications. Although this may aid innovation, the FCA emphasises that firms must retain sufficient expertise to understand and manage the associated risks, irrespective of whether the technology is developed in-house or procured externally. As AI becomes ever more sophisticated, an important question is whether some third-party supply arrangements could become critical or important outsourcings, triggering specific FCA obligations.
So, while AI and ML promise to revolutionise financial services, their full-scale, unchecked deployment could also have unforeseen, and possibly damaging, consequences. The FCA's report is therefore a call to arms for the industry and regulators to grapple with AI's complexities and challenges, to ensure that AI benefits everyone in the financial ecosystem.
"The FCA clearly expects all regulated firms to ensure that its use of AI does not compromise a firm's ability to demonstrate compliance with its rules, including in relation to any services that are procured externally."