The recent news that the FCA is partnering with the Alan Turing Institute to explore the explainability of AI in financial services is a welcome development. While financial institutions are increasingly using AI to improve efficiency and productivity, there is often little transparency in how the neural networks underpinning these systems reach their decisions, exposing organisations to the risk of inaccurate or even fraudulent outcomes. Worse still, bad decisions made by AI are far harder for financial institutions to audit than bad decisions made by people.
In exploring how to make AI more transparent and explainable, the FCA will need to address a number of issues if it is to reduce the threat of unaccountable AI decision-making across the financial sector.