Pinsent Masons: What the EU AI Act means for financial services

Financial services firms providing or using artificial intelligence (AI) tools face stricter regulation of their activities under the European Commission's recent proposal for a new regulation. The focus of the draft AI Act is the creation of harmonized rules for a proportionate, risk-based approach to AI in Europe, but it will affect the use and development of AI systems globally, including within the financial services sector.

Application to financial services

Providers

A financial institution procuring the development of an AI system or tool with a view to placing it on the market or putting it into service under its own name or trade mark will be considered a ‘provider’ of AI and would be required to comply with the applicable obligations. The same would apply where a financial institution develops its own AI system.

Users

As well as complying with requirements as a provider of AI, financial institutions using, rather than developing, high risk AI systems would also be required to adhere to the obligations placed on users of AI. These include ensuring systems are used in accordance with the accompanying instructions of use, implementing the human oversight measures indicated by the provider, and ensuring that input data, where the user exercises control over it, is relevant to the intended purpose of the high risk AI system.

For credit institutions, obligations in respect of monitoring the operation of high risk AI and keeping the logs automatically generated by a high risk AI system would be deemed satisfied by complying with the rules on internal governance arrangements, processes and mechanisms under the EU’s 2013 directive on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms (Directive 2013/36/EU). However, that directive sets out only very general requirements in relation to governance arrangements that may fall within the scope of the draft AI Act.

Financial institutions therefore need to consider the extent to which their existing processes would need to be adapted if more specific governance requirements are set out in the final AI Act.

Requirements for financial services

The draft AI Act sets out requirements on various uses of AI, some of which are specific to certain types of financial services. AI systems used to evaluate creditworthiness or establish credit scores, for example, would be subject to the mandatory requirements for high risk AI. Transparency requirements, including informing individuals that they are interacting with an AI system, would also apply to the use of chatbots.

A financial institution would also be required to comply with the ‘high risk’ requirements, including the need for human oversight, where it uses AI systems for:

  • recruitment purposes, including advertising vacancies and screening applications; and
  • making decisions on promotion and termination of work-related contractual relationships, task allocation, and monitoring and evaluating performance and behavior.

While the requirements in the proposed new regulation are limited to certain uses of AI in financial services and would not cover all AI use in the sector, the proposals encourage organizations to voluntarily develop and adopt codes of conduct which incorporate the mandatory requirements for ‘high risk’ AI and apply such requirements to all uses of AI.
