What German regulator BaFin thinks about regulating big data and AI

Germany’s financial watchdog BaFin received numerous responses regarding its report “Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services”. Industry associations, individual institutions, national and international authorities and representatives from academia contributed to the consultation. This article provides an overview of the responses and includes an interview with BaFin President Felix Hufeld, offering an initial analysis of the results.

New providers are emerging in the financial sector as a result of innovations driven by big data and artificial intelligence (BDAI). This could intensify the disaggregation of the value chain, particularly if existing businesses cooperate with new specialized providers. BDAI could also give rise to new types of business models and market participants that are not yet adequately covered by the current regulatory framework. It is vital that such cases are identified and that the range of providers and companies to be supervised is expanded accordingly.

The systemic importance of providers with data-driven business models could rapidly grow due to their scalability and reach. However, systemic importance may also arise if central data or platform providers make identical or very similar structures for processes or algorithms available to a wide range of market participants. Systemic importance could also emerge from the interaction between various market players. This raises the question of whether and how the banking- and insurance-based concept of systemic importance needs to be redefined in order to keep pace with new business models and market structures.

Closely interconnected systems are susceptible to the rapid and uncontrolled spread of disruptions – not only on trading venues but also elsewhere. This raises the question of whether the technological safeguards that are already widespread on trading venues would also be necessary, and could be usefully applied, outside of trading venues in the age of BDAI. For example, decoupling mechanisms for data streams could be considered, as the significance of data supplies is increasing considerably as a result of BDAI. Such technological safeguards may be necessary, but only where there is a risk of significant losses.
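
As an illustration of such a decoupling mechanism, the sketch below shows a simple circuit breaker for an incoming data feed: if the feed stalls or produces an implausible jump, the consumer keeps serving the last validated value instead of propagating the disruption. The class, thresholds and feed semantics are hypothetical and not taken from the BaFin report.

```python
# Minimal sketch: a decoupling mechanism ("circuit breaker") for a data stream.
# Thresholds and names are hypothetical and purely illustrative.
import time

class FeedCircuitBreaker:
    def __init__(self, max_jump: float, max_staleness_s: float):
        self.max_jump = max_jump
        self.max_staleness_s = max_staleness_s
        self.last_value = None
        self.last_update = None

    def update(self, value: float) -> float:
        now = time.monotonic()
        stale = self.last_update is not None and now - self.last_update > self.max_staleness_s
        jump = self.last_value is not None and abs(value - self.last_value) > self.max_jump
        if stale or jump:
            # Decouple: keep serving the last validated value and raise an alert.
            print(f"circuit breaker tripped (stale={stale}, jump={jump})")
            return self.last_value
        self.last_value, self.last_update = value, now
        return value

breaker = FeedCircuitBreaker(max_jump=10.0, max_staleness_s=5.0)
print(breaker.update(100.0))   # accepted
print(breaker.update(101.5))   # accepted
print(breaker.update(500.0))   # implausible jump -> last validated value is served
```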

BDAI will create additional opportunities for automating standard market processes. When designing (partially) automated processes, it is important to ensure that they are embedded in an effective, appropriate and proper business organization. Responsibility remains with the senior management of the supervised institution, even in the case of automated processes. Appropriate documentation is required to ensure this. It may also be necessary to extend established governance concepts, such as the “four eyes” principle, and to apply these to automated processes.
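
A minimal sketch of how a "four eyes"-style control could be carried over to an automated process is shown below: a decision above a threshold is only executed after a second, independent check. All names, thresholds and the escalation rule are hypothetical.

```python
# Minimal sketch: a "four eyes" style control for an automated decision process.
# An automated approval above a threshold is only executed after a second,
# independent check (a second model, a rule set or a human reviewer).
from dataclasses import dataclass

@dataclass
class Decision:
    customer_id: str
    amount: float
    primary_approval: bool

def second_check(decision: Decision) -> bool:
    # Placeholder for an independent control; here a simple plausibility rule.
    return decision.amount <= 100_000

def execute(decision: Decision) -> str:
    if not decision.primary_approval:
        return "rejected by primary system"
    if decision.amount > 10_000 and not second_check(decision):
        return "escalated for human review"   # documented, auditable trail
    return "executed automatically"

print(execute(Decision("C-001", amount=5_000, primary_approval=True)))
print(execute(Decision("C-002", amount=250_000, primary_approval=True)))
```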

There are two levels of explainability: the model and the individual decision. It is the responsibility of supervised institutions to ensure the explainability and traceability of BDAI-based decisions. Even in the case of highly complex models, at least some insight can be gained into how they work and into the reasons behind their decisions, so there is no need to categorize models as black boxes. For this reason, supervisory authorities will not accept models presented as unexplainable black boxes. Given the complexity of the applications, it should be considered whether, in addition to documentation requirements, process results should also be examined in the future.
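
The following sketch illustrates the two levels using a deliberately simple, interpretable scoring model; the feature names, data and model choice are hypothetical and serve only to distinguish a global, model-level explanation from the explanation of an individual decision.

```python
# Minimal sketch: two levels of explainability for a credit-scoring model.
# Feature names, data and model are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "payment_delays"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Level 1 -- model explainability: global weights show how each feature
# influences decisions across the whole portfolio.
for name, coef in zip(features, model.coef_[0]):
    print(f"global weight {name}: {coef:+.2f}")

# Level 2 -- individual decision: per-feature contributions to one
# applicant's score (log-odds decomposition of a linear model).
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in zip(features, contributions):
    print(f"contribution of {name} for this applicant: {c:+.2f}")
print("approval probability:", model.predict_proba(applicant.reshape(1, -1))[0, 1])
```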

Any use of BDAI in models that are subject to supervisory approval would also have to be approved by the supervisory authorities on a case-by-case basis. Beyond the individual case, it needs to be examined whether the existing legal (minimum) requirements for the data used and for model transparency are sufficient in relation to BDAI or whether additional requirements would be necessary. In the case of dynamic BDAI models, it is also necessary to examine which modifications constitute a model change in the supervisory sense that banks or insurers would have to report to supervisors and possibly have approved, e.g. in line with the model change guidelines for insurance companies.

BDAI can improve the detection rate of anomalies and patterns, and thus increase the efficiency and effectiveness of compliance processes, such as money laundering detection or fraud prevention. If BDAI technology were to make the detection of money laundering far more effective, criminals could potentially turn to companies that are less advanced in this area. It is therefore necessary to monitor whether such a shift materializes. The results of algorithms must be sufficiently clear to ensure that they can be checked by supervisory authorities and used by the competent authorities (e.g. law enforcement agencies). Minimum requirements may need to be developed for this purpose from a regulatory and supervisory point of view.
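
The sketch below is a hypothetical illustration of such anomaly detection: an isolation forest flags unusual transactions, and each alert carries its score and raw features so that a human investigator – and, if need be, a supervisor – can retrace it. The transaction features and contamination rate are invented for the example.

```python
# Minimal sketch: flagging anomalous transactions for human review.
# Transaction features and thresholds are hypothetical; real AML systems
# combine such scores with rules, investigator feedback and documentation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: amount, transactions per day, share sent abroad
normal = rng.normal(loc=[100, 3, 0.1], scale=[30, 1, 0.05], size=(1000, 3))
suspicious = rng.normal(loc=[9000, 40, 0.9], scale=[500, 5, 0.05], size=(5, 3))
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)            # -1 = anomaly, 1 = normal
scores = detector.score_samples(X)     # lower = more anomalous

for idx in np.where(flags == -1)[0]:
    # Each alert carries its score and raw features so that a human
    # investigator can retrace why it was raised.
    print(f"alert: transaction {idx}, score {scores[idx]:.3f}, features {X[idx].round(2)}")
```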

Using BDAI can increase the risk of discrimination: algorithms could be based on features for which differentiation is prohibited by law. Even if such prohibited features are not used directly, approximations are still possible, as plenty of other data is available from which they can be inferred. There is also the risk that differentiations are made on the basis of false assumptions or false conclusions drawn by algorithms, and that consumers are in fact discriminated against, even if this is unintentional. When programming algorithms and evaluating their results, providers must take special care to ensure that individual consumers are not discriminated against. This raises the question as to what monitoring and transparency mechanisms could be useful in this context.
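
The following sketch illustrates how such indirect discrimination can arise and how an outcome audit can detect it, even though the protected attribute is never a model input; all variables and data are synthetic and purely illustrative.

```python
# Minimal sketch: checking a model's outcomes for indirect discrimination.
# The protected attribute is never used as a model input, but an outcome
# audit compares approval rates across groups; names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
protected = rng.integers(0, 2, size=n)             # e.g. a legally protected group
proxy = protected + rng.normal(scale=0.5, size=n)  # correlated "neutral" feature
other = rng.normal(size=n)

# The model only sees the proxy and the other feature ...
score = 0.8 * proxy + other
approved = score > np.median(score)

# ... yet an outcome audit reveals a gap in approval rates between groups.
for g in (0, 1):
    rate = approved[protected == g].mean()
    print(f"approval rate, group {g}: {rate:.2%}")
print("correlation proxy vs protected attribute:",
      round(float(np.corrcoef(proxy, protected)[0, 1]), 2))
```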

Linking different types of data (sources) could be a particularly promising way to improve risk assessments in the financial sector. In future, customers could therefore be confronted with situations where they have to give access to more (new) data (sources) – such as social media accounts. It is therefore possible that future data requirements will go far beyond current requirements and that the price of a financial service will depend on whether this data is made available. In addition, BDAI selection mechanisms could inordinately hamper access for individual consumers to certain financial services. The situation can be particularly precarious if consumers are disadvantaged by having access to a narrower range of products but are unaware that this is due to their personal data. This raises the question of how access to (affordable) financial services can be maintained if customers cannot or do not want to grant access to (new) sources of data to a significant extent.

The potential of BDAI can only be exploited for financial services if it is possible to gain and maintain the trust of consumers by ensuring that their data is used as desired and in accordance with the law. Providers should in particular enable consumers to make sovereign decisions by informing them adequately about the potential reach and consequences of the use of their data, by giving them reliable options to control how their data is used, and by granting them genuine freedom of choice. It is not enough to provide consumers with highly complicated terms and conditions, which are usually accepted without being read. Technical (data protection) measures (e.g. privacy-preserving data mining) or a “privacy by design” concept in particular could also bolster consumer trust in BDAI innovations. Data sovereignty is regarded as a key issue.
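
As one hypothetical example of such a technical measure, the sketch below releases an aggregate statistic only after adding calibrated noise – the basic idea behind differential privacy. The data, clipping bound and privacy parameter are illustrative only.

```python
# Minimal sketch: a "privacy by design" flavoured aggregate query that adds
# calibrated Laplace noise before a statistic leaves the customer database
# (the basic idea behind differential privacy). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
monthly_spend = rng.gamma(shape=2.0, scale=500.0, size=10_000)  # hypothetical customer data

def private_mean(values: np.ndarray, epsilon: float, upper_bound: float) -> float:
    """Release the mean with Laplace noise scaled to the query's sensitivity."""
    clipped = np.clip(values, 0.0, upper_bound)
    sensitivity = upper_bound / len(clipped)      # max influence of one customer
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

print("exact mean:   ", round(float(monthly_spend.mean()), 2))
print("private mean: ", round(private_mean(monthly_spend, epsilon=1.0, upper_bound=5000.0), 2))
```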

Interview with Felix Hufeld, President of BaFin

Q: The respondents to BaFin’s consultation point out that, by making intelligent use of data, providers of search engines, social networks and online (comparison) platforms are advancing into areas that used to be the sole preserve of specialised and often regulated providers. What is your opinion on this?

Felix Hufeld: If these and other tech or platform-based companies were to offer regulated financial services, they would, of course, have to meet the same supervisory and regulatory requirements as all the other institutions. But even if they do not provide any regulated financial services themselves, the respondents rightly pointed out that these companies could become essential for the functioning of the entire industry – e.g. as providers of cloud services, algorithms, data, and evaluations such as scores and ratings. These have been around for a while, but once BDAI and automated interfaces come into play, the impact of these services on the financial market could be even more immediate.

The respondents put forward a number of interesting ideas on how to address the growing importance of these providers for the financial market from a supervisory and regulatory point of view. One suggestion was that outsourcing companies should be subject to minimum technical standards similar to those for regulated banks. Another idea was a digital signature that lists all the companies involved in the development or provision of a product. This, it is argued, would help customers to understand more clearly who is behind a product or service. Above all, accountability would not lie solely with the financial services provider involved but would be extended to other companies along the entire value chain. In addition, a back-up party could also be agreed upon for every element within a product’s value chain, which would be obliged to step in if one of the companies involved cannot provide the expected service. Tech solutions, such as blockchain-based smart contracts, could play a part here.

All these considerations confirm the proposition we put forward in our BDAI report, which is that, as regulators and supervisors, we will no longer only look at individual companies but will increasingly consider value chains that are spread across multiple companies. Supervisors would then also focus on the activities of companies that are not part of the regulated financial sector but can still have an impact on customer trust and the integrity of the financial market as such. I am not saying that BaFin should supervise bigtech companies that do not provide financial services in their entirety. What is important to me are certain activities and forms of conduct of such companies, for which a direct supervisory mandate could be established.

Q: Let’s continue with value chains. Value can be created by linking data from various sources – for instance at key customer interfaces on platforms. How could the growing importance of (financial) data be taken into account?

FH: We agree with most of the respondents on this matter. The growing importance of data in the age of digitalization is also based on the fact that data from different sources is combined and compared, allowing new information to be obtained. By connecting data on financial transactions and data on the behavior of consumers, it is possible to have a fairly clear idea of the amount of money that customers are willing and able to pay for products and services. In addition, the emergence of platform-based business models is breaking down information silos, and information from one area can have an impact on other areas. It is only logical that the authorities supervising different areas of economic life collaborate more closely and share information with each other – provided that this is permitted by law, of course. As the use of BDAI is increasing, data protection authorities and competition watchdogs are particularly important for us as financial supervisors. Our supervisory counterparts abroad are not to be forgotten either.

Of course, market participants see great economic potential in digging up treasure troves of data. But with data mining – as with any conventional process of prospecting, mining and utilizing resources – we must keep a watchful eye on the associated risks. For us supervisors, it is crucial that consumers and providers are confident that the financial market is stable and that things are being done as they should be. We also need to consider what negative spillover effects there can be when financial data is used in value creation processes outside the financial market – even if formal consent has been given in accordance with the law. Social achievements, such as the protection of privacy and informational self-determination, should not be undermined under the guise of innovation – e.g. by obtaining people’s consent to share their data by giving them the impression that there is no alternative. Not everything that is technically possible, innovative and economically sensible in the short term remains so when looked at from a holistic and long-term perspective.

Q: Let’s take another look at the financial market. Do you consider that the use of data is a key issue that could become more relevant for the financial market as a result of BDAI?

FH: The argument that data is necessary for assessing risk could be used to justify the need to gather virtually all data in the context of providing financial services – although such practices have not been observed on the German financial market to date. The responses to our consultation have made one thing clear to us: we have to increasingly ask ourselves which data is really needed for an appropriate assessment of risks – in other words, for a suitable differentiation as required by supervisors. Insurers that took part in our consultation have, in cooperation with data protection authorities, already pledged to minimize the use of data in a code of conduct that has been published. But what I find interesting in this context is the fundamental question of where the limits of data collection and analysis should be in the case of BDAI. At what point does a marginal improvement in risk assessment justify the collection of more data? Which data can we categorize as offering real long-term and material advantages while ensuring a balance between the information that needs to be obtained and other objectives such as data minimization (Datensparsamkeit)? I think we need to have a broad dialogue with all those concerned – but we also need to ask ourselves, as a society, where we want red lines to be drawn in the brave new world of data.

Q: Let’s now turn to responsibility in the context of self-learning decision support systems. In the BDAI report, BaFin pointed out that humans must always bear ultimate responsibility and that this responsibility cannot be passed on to computers. This also applies to financial supervision, according to the respondents. What is your view?

FH: The Fraunhofer Institute for Intelligent Analysis and Information Systems was one of the institutions that assisted us with our BDAI report. Fraunhofer stressed that the successes of machine learning have so far only been observed in highly specific applications and that approaches for the general simulation of human intelligence are still not foreseeable. We can therefore expect to rely on the interplay between artificial and human intelligence in the foreseeable future. Responsibility will and must therefore continue to rest with humans in the area of financial supervision, too. Financial supervision is and will remain a flexible process that focuses on the assessment of complex issues. But artificial intelligence can support us as supervisors and help us prepare decisions and establish better and quicker processes. In highly data-driven areas – such as market abuse analyses or, perhaps in the future, money laundering prevention – supervisors will not be able to do without BDAI.

Q: In the responses to the consultation, there were calls for manual or human intervention in decision support systems based on artificial intelligence. However, imposing algorithm explainability as a requirement is seen by some as an unreasonable restriction. What is your opinion?

FH: In my opinion, blind trust in technology is dangerous. Humans must be able to intervene and it must be possible to switch off automated processes. As mentioned earlier, humans, not machines, bear ultimate responsibility. We need to bear this in mind when evaluating new processes.

As far as the explainability of AI systems is concerned, we stressed in our report that a distinction should be made between explainability and transparency. Transparency means that the behavior of the system as a whole can be understood in its entirety. Fraunhofer pointed out that this is often impossible to achieve as many models are inevitably highly complex. On the other hand, explainability is a criterion that is far easier to fulfill from a technical point of view, according to Fraunhofer, as it focuses on identifying key influencing factors behind a specific decision reached by a system.

Respondents to our consultation also hold the view that we as supervisors are confronted with the question of whether and how BDAI models can be examined. Extended requirements for business-critical process areas were suggested, including the use of code review processes, simulation and penetration tests and the assessment of sample profiles. Respondents also called on BaFin to lay down specific requirements for documentation and the explainability of BDAI applications. But do not expect us supervisors to shoot from the hip. We should first deepen the dialogue with academia and industry and make sure that industry best practices are developed. Once we know if and how they work, we can, as a next step, consider to what extent we will derive standards from them.

Q: What are the next steps for BaFin now that the consultation has closed?

FH: We have started evaluating the responses, which we have summarized in this article. A number of subject areas are becoming apparent and we intend to prioritize and deal with these based on how urgent and significant they are. To address all the aspects of this complex topic, we need to work even more closely with industry, academia and other authorities in some areas. This is something we intend to do in the near future. But we have already achieved something with our BDAI report and the consultation: we have looked into the burning questions surrounding this topic – and placed them in the public eye.
