Lux regulator warns that AI should be explainable by design

Institutions should implement measures to ensure the explainability of their artificial intelligence and machine learning systems from the design phase, according to a recent publication from the Commission de Surveillance du Secteur Financier (CSSF), Luxembourg's financial watchdog. Even when full transparency cannot be achieved because of the intrinsic nature of the algorithm employed (a known weakness of deep learning), steps can be taken to identify and isolate, in a human-understandable format, the main factors contributing to the final decision, the regulator added.
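The CSSF does not prescribe a particular technique, but model-agnostic feature attribution is one common way to surface the main factors behind a model's output. Below is a minimal sketch using permutation importance, assuming a scikit-learn-style classifier on tabular data; the dataset and model are illustrative assumptions, not taken from the CSSF paper.

# A minimal sketch of feature attribution via permutation importance.
# Dataset and model are illustrative only, not from the CSSF white paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the held-out score
# degrades; large drops flag the inputs the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the five most influential features in human-readable form.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

Permutation importance is only one option; SHAP- or LIME-style local explanations serve the same goal of isolating, per decision, the factors that drove the outcome.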

Other recommendations in the CSSF's white paper on artificial intelligence cover data quality, governance and external sources; privacy; skills; cultural change; bias and discrimination; accountability; auditability; safety; change management; model updating; IT operations; robustness and security; systemic risks; and external AI providers and outsourcing.

On the latter, the CSSF said institutions should evaluate the risks related to maintaining off-the-shelf packages and to outsourcing AI/ML development and maintenance activities. Adequate controls should be implemented in line with best practices and the other regulatory requirements applicable to outsourcing.

Read the full white paper
