A recent paper from the Bank for International Settlements provides an overview of the use of big data and machine learning in the central bank community. It draws on a survey conducted in 2020 among the members of the Irving Fisher Committee on Central Bank Statistics. The survey contains responses from 52 central banks from all regions of the world and examines how they define and use big data, as well as what opportunities and challenges they see.
The analysis highlights four main insights:
- central banks define big data in an encompassing way that covers unstructured, non-traditional data as well as structured data sets;
- central banks’ interest in big data and machine learning has increased markedly in recent years: around 80% of central banks discuss the topic of big data formally within their institution, up from 30% in 2015;
- the vast majority of central banks are now conducting projects that involve big data. Institutions use big data and machine learning for economic research, in the areas of financial stability and monetary policy, as well as for suptech and regtech applications; and
- the advent of big data poses new challenges, among them data quality, legal questions around privacy and confidentiality, algorithmic fairness, and budget constraints. Cooperation among public authorities could ease the constraints on collecting, storing and analyzing big data.