On 22 April 2024, the Financial Conduct Authority (FCA) and the Bank of England (including the Prudential Regulation Authority (PRA), together the Bank) published updates on their approach to artificial intelligence (AI).
Firms using AI should now expect to have to explain that use to their regulators. This means being able to show how the risks associated with deploying AI have been identified, assessed and managed. The acid test is: if the regulator asks, do you have a convincing narrative about your approach to managing those risks?
What do you need to do now?
- Understand, and be prepared to explain, how AI is being used at all levels of your business. This extends to suppliers and outsourced service providers – are they using AI, and do you know about it?
- Understand how your legal and regulatory obligations interact with any existing or proposed use of AI in your business.
- Ensure that client and commercial data is protected – you will need to make sure you understand how your staff are using AI, and what systems and data their AI tools can access.
- Put in place and maintain robust governance arrangements, systems and controls to discharge your legal and regulatory obligations in connection with AI – this might involve, for example, implementing an "AI policy".