IOSCO publishes AI/ML guidance for intermediaries and asset managers

The Board of the International Organization of Securities Commissions (IOSCO) published guidance to help its members regulate and supervise the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers.

The use of AI and ML may benefit market intermediaries, asset managers and investors by increasing the efficiency of existing processes, reducing the cost of investment services and freeing up resources for other activities. However, it may also create or amplify risks, potentially undermining financial market efficiency and harming consumers and other market participants.

Moreover, market intermediaries' and asset managers' use of AI and ML is growing as their understanding of the technology evolves. The IOSCO report describes how market intermediaries and asset managers currently use AI and ML to reduce costs and increase efficiency. It notes that the rise of electronic trading platforms and the increasing availability of data have led firms to progressively use AI and ML in their trading and advisory activities, as well as in their risk management and compliance functions.

Consequently, regulators are focusing on the use and control of AI and ML in financial markets to mitigate the potential risks and prevent consumer harm. In 2019, the IOSCO Board identified AI and ML as a priority.

The IOSCO guidance consists of six measures that seek to ensure that market intermediaries and asset managers have:

  • appropriate governance, controls and oversight frameworks over the development, testing, use and performance monitoring of AI and ML;
  • staff with adequate knowledge, skills and experience to implement, oversee, and challenge the outcomes of the AI and ML;
  • robust, consistent and clearly defined development and testing processes to enable firms to identify potential issues prior to full deployment of AI and ML; and
  • appropriate transparency and disclosures to their investors, regulators and other relevant stakeholders.

Under the six measures, regulators should:

  • consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. This includes a documented internal governance framework, with clear lines of accountability. Senior management should designate an appropriately senior individual (or group of individuals) with the relevant skill set and knowledge to sign off on initial deployment and substantial updates of the technology.
  • require firms to adequately test and monitor the algorithms to validate the results of an AI and ML technique on a continuous basis. The testing should be conducted in an environment that is segregated from the live environment prior to deployment to ensure that AI and ML: (a) behave as expected in stressed and unstressed market conditions; and (b) operate in a way that complies with regulatory obligations.
  • require firms to have adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm uses. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced, and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.
  • require firms to understand their reliance on, and manage their relationships with, third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should clearly set out rights and remedies for poor performance.
  • consider what level of disclosure of the use of AI and ML should be required of firms, including: (a) requiring firms to disclose meaningful information to customers and clients about their use of AI and ML that impacts client outcomes; and (b) determining what type of information regulators may require from firms using AI and ML to ensure they can have appropriate oversight of those firms.
  • consider requiring firms to have appropriate controls in place to ensure that the data on which the performance of the AI and ML depends is of sufficient quality to prevent biases and is sufficiently broad for a well-founded application of AI and ML.

In addition to the guidance, the report includes two annexes that describe how regulators are addressing the challenges created by AI and ML and the guidance issued by supranational bodies in this area.

IOSCO members are encouraged to consider these measures carefully in the context of their legal and regulatory frameworks.
The use of AI and ML will likely increase as the technology advances, with the regulatory framework evolving in tandem to address the associated emerging risks. Going forward, IOSCO may review the report, including its definitions and guidance, to ensure it remains up to date.
