The Association for Financial Markets in Europe (AFME) has published a white paper examining how best to ensure transparency in the use of AI in capital markets without restricting the technology as it develops. The paper outlines considerations for approaching transparency in Artificial Intelligence (AI) and Machine Learning (ML), which AFME views as key to the safe and effective deployment of trustworthy AI/ML in capital markets.
Current debate focuses heavily on explainability, often with the suggestion that AI/ML models are either explainable or not explainable at all. While AFME acknowledges the limitations of some current explainability techniques, it believes that such a binary approach is not appropriate for categorising AI/ML, as it does not allow for ongoing developments in either models or explainability techniques.
AFME’s paper also suggests that mandating a certain level of accuracy and validity for explainability is likely to unnecessarily limit the use of the technology by restricting the breadth and complexity of the AI/ML models that can be used, and could also lead to ‘explanations’ that are misleading and therefore counterproductive.
AFME has therefore proposed considerations for a framework built around the broader concept of transparency. This involves identifying the various stakeholders in an AI/ML project and their needs, which should then be met through a structure of (i) qualitative and quantitative assumptions and (ii) testing. Both should be articulated at the start of any AI/ML project, then monitored and adjusted as necessary throughout its lifecycle. This approach can be tailored to the risk profile of each individual AI/ML application, rather than applying ‘one size fits all’ standards.
Fiona Willis from AFME’s Technology and Operations Division said in a statement: “As AI/ML deployment continues apace within capital markets, it is natural that there should be increasing attention on how firms are ensuring that an appropriate level of governance and oversight is in place. However, it is important that we do not apply a ‘one size fits all’ approach which could limit the use and continued development of the technology.”
AFME supports a technology-neutral and principles-based approach to regulation. Given the highly regulated nature of capital markets, AFME and its members do not believe it is necessary for regulators to design a new regulatory framework for the use of AI/ML. Instead, they suggest that a gap analysis of existing regulations should be performed, to ensure that those regulations are focused on the appropriate outcomes and do not unintentionally constrain firms’ use and upscaling of AI/ML applications.
AFME believes that an AI/ML transparency framework is achievable within the existing rules, laws and regulations applicable to the capital markets industry, and that such a framework will support firms in meeting their regulatory and ethical obligations and in deploying AI/ML to the maximum benefit of themselves and their clients.