IFC: use of big data sources and applications at central banks

Central bank big data-related work covers a variety of areas, including monetary policy and financial stability as well as research and the production of official statistics. However, in contrast to the rapid pace of innovation seen in the private sector, big data applications supporting central banks’ operational work were initially limited. This reflects a number of constraints, such as a lack of adequate resources and the intrinsic challenges associated with using big data sources to support public policy.

Looking ahead, will central banks catch up and radically transform the way they operate in order to fully reap the benefits of the information revolution? Or will their use of big data sources and applications progress only gradually due to the inherent specificities of their mandates and processes? To shed light on these issues, in 2020 the IFC organized a dedicated survey on central banks’ use of and interest in big data, updating a previous one conducted five years earlier.

The survey focused on the following key questions: What constitutes big data for central banks, and how strong is central banks’ interest in it? Have central banks been increasing their use of big data and, if so, what were the main applications developed? And finally, which constraints are central banks facing today and how can they be overcome? Almost two-thirds of the 92 IFC institutional members answered the survey.

The survey’s main conclusions are the following:

  • Central banks have a comprehensive view of big data, which can comprise very different types of data sets. First and foremost, it includes the large “non-traditional” (or unstructured) data often characterized by high volume, velocity and variety and that must be processed using innovative technologies. But for two-thirds of respondents, big data also includes large “traditional” (ie well structured) data sets that are often “organic”, in the sense that they are collected as a by-product of commercial (eg payment transactions), financial (eg tick-by-tick price quotes observed in financial markets) and administrative (eg files collected by public institutions) activities – these data are often referred to as “financial big data”.
  • Central banks are increasingly using big data. Around 80% of the responding central banks now use big data regularly; in contrast, only one-third of 2015 respondents had indicated they were using any big data sources. Moreover, interest in the topic of big data at the senior policy level is currently rated “very important” in more than 60% of cases, compared with less than 10% in 2015. Interest in big data is especially strong among advanced economies (AEs) and is catching up in a significant number of emerging market economies (EMEs).
  • The range of big data sources exploited by central banks is diverse. A key source, as in the private sector, is the “internet of things”: many central banks have, for instance, developed applications to scrape online portals for information in numerical (eg prices of goods sold on the web) or textual format (eg messages posted on social media); a minimal scraping sketch follows this list. Another important source is text from printed materials processed using digital techniques. Last but not least, central banks are increasingly using financial big data sets collected in a more “traditional” way, such as balance sheet information available in credit registries, loan-by-loan and security-by-security databases, derivatives trades reported to trade repositories (TRs), and payment transactions.
  • Big data is effectively used to support central bank policies. As regards central banks’ monetary policy and financial stability mandates, newly available databases and techniques are increasingly mobilized to support economic analyses and nowcasting/forecasting exercises, construct real-time market signals and develop sentiment indicators derived from semi-structured data (a toy sentiment-scoring sketch follows this list). This has proved particularly useful in times of heightened uncertainty or economic upheaval, as observed during the Covid-19 pandemic. A majority of central banks also report using big data for micro-level supervision and regulation (suptech and regtech), with an increasing focus on consumer protection; for instance, to assess misconduct, detect fraudulent transactions or combat money laundering.
  • The survey also underscored the need for adequate IT infrastructure and human capital. Many central banks have undertaken important initiatives to develop big data platforms so as to facilitate the storage and processing of very large and complex data sets. But progress has varied, reflecting the high cost of such investments and the need to trade off various factors when pursuing these initiatives. Additionally, central banks need to hire and train staff, which is difficult due to the limited supply of adequately skilled candidates (eg data scientists).
  • Apart from IT aspects, there are many other challenges that central banks face. These include the legal basis for using private information and the protection, ethics and privacy concerns this entails, and the “fairness” and accuracy of algorithms trained on preclassified and/or unrepresentative data sets. Data quality issues are also significant, since much of the new big data collected as a by-product of economic or social activities needs to be curated before proper statistical analysis can be conducted (a minimal curation sketch follows this list). This stands in contrast to traditional sources of official statistics that are designed for a specific purpose, eg surveys and censuses.
  • Moreover, a key issue is to ensure that predictions based on big data are not only accurate but also “interpretable” and representative, since, to carry out evidence-based policy, central banks need to identify specific explanatory causes or factors (the last sketch after this list contrasts an interpretable model with a black-box one). Furthermore, transparency regarding the information produced by big data providers is essential to ensuring that its quality can be checked and that public decisions can be made on a sound, clearly communicated basis. Lastly, there are important legal constraints that reduce central banks’ leeway when using private and confidential data.
  • Cooperation could facilitate central banks’ use of big data, in particular through collecting and showcasing successful projects and facilitating the sharing of experience, for example to avoid repeating others’ mistakes when setting up an IT infrastructure, or by pooling resources. Notably, fostering technical discussions between institutions is seen as a powerful way to build the necessary skillset among staff and develop relevant IT tools and algorithms that are best suited to central banks’ (idiosyncratic) needs.
  • International financial institutions can help foster such cooperation. For instance, they can help develop in-house big data knowledge, reducing central banks’ reliance on big data service providers, which can be expensive and entail significant legal and operational risks. They can also facilitate innovation by promoting technological solutions and initiatives to enhance the global statistical infrastructure. In addition, they can make their resources available internationally or develop joint cloud computing capabilities to reduce operational risk arising from dependence on specific providers in a highly concentrated market.
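
To make the web-scraping example above concrete, here is a minimal sketch of how a price-collection scrape might look in Python. The URL, CSS selector and price format are hypothetical placeholders rather than any central bank’s actual pipeline, and a real collection would respect the portal’s terms of use and robots.txt.

```python
# Minimal sketch: scraping posted prices from an online retail portal.
# The URL and the ".product .price" selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup
from statistics import median


def fetch_prices(url: str) -> list[float]:
    """Download a catalogue page and extract the listed prices."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    prices = []
    for tag in soup.select(".product .price"):          # assumed page structure
        text = tag.get_text(strip=True).replace("$", "").replace(",", "")
        try:
            prices.append(float(text))
        except ValueError:
            continue                                     # skip malformed entries
    return prices


if __name__ == "__main__":
    quotes = fetch_prices("https://example.com/groceries")   # placeholder URL
    if quotes:
        print(f"{len(quotes)} quotes scraped, median price: {median(quotes):.2f}")
```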
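
The sentiment indicators mentioned above can be illustrated with a deliberately simple dictionary-based score. The word lists and sample headlines are toy assumptions; in practice central banks rely on curated lexicons or trained language models applied to much larger text corpora.

```python
# Toy sketch: a dictionary-based sentiment indicator built from text snippets
# (eg scraped headlines or social media posts). Word lists are illustrative only.
POSITIVE = {"growth", "recovery", "improve", "strong", "stable", "confidence"}
NEGATIVE = {"recession", "crisis", "decline", "weak", "uncertainty", "default"}


def sentiment_score(text: str) -> float:
    """Positive minus negative word counts, scaled by the number of tokens."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)


if __name__ == "__main__":
    headlines = [
        "Strong recovery boosts confidence in the banking sector",
        "Uncertainty and weak demand weigh on the outlook",
    ]
    daily_index = sum(sentiment_score(h) for h in headlines) / len(headlines)
    print(f"Daily sentiment index: {daily_index:+.3f}")
```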
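
The data-quality point about curating “organic” by-product data can be sketched in the same spirit. The column names, thresholds and toy records below are assumptions, but the steps (deduplication, filtering implausible values, aggregation to a daily series) are typical of this kind of cleaning.

```python
# Minimal sketch: curating an "organic" transaction data set (eg card payments)
# before statistical use. Column names and thresholds are assumptions.
import pandas as pd


def curate(transactions: pd.DataFrame) -> pd.Series:
    """Deduplicate, filter implausible amounts and aggregate to a daily series."""
    cleaned = (
        transactions
        .drop_duplicates(subset=["transaction_id"])                 # remove double reporting
        .loc[lambda d: (d["amount"] > 0) & (d["amount"] < 1e6)]     # drop refunds/implausible values
        .assign(date=lambda d: pd.to_datetime(d["timestamp"]).dt.normalize())
    )
    return cleaned.groupby("date")["amount"].sum()


if __name__ == "__main__":
    raw = pd.DataFrame({
        "transaction_id": [1, 1, 2, 3],
        "timestamp": ["2021-03-01 10:00", "2021-03-01 10:00",
                      "2021-03-01 12:30", "2021-03-02 09:15"],
        "amount": [120.0, 120.0, -15.0, 80.5],
    })
    print(curate(raw))
```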
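
Finally, the interpretability concern can be illustrated by contrasting a linear model, whose coefficients can be read as explanatory factors, with a black-box learner fitted to the same synthetic data. The data and the choice of models are purely illustrative assumptions.

```python
# Illustrative sketch: interpretable vs black-box predictions on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                  # three synthetic indicators
y = (0.8 * X[:, 0] - 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)        # outcome driven by two of them

linear = LogisticRegression().fit(X, y)
boosted = GradientBoostingClassifier().fit(X, y)

print("In-sample accuracy (linear / boosted):",
      round(linear.score(X, y), 3), round(boosted.score(X, y), 3))
print("Interpretable coefficients:", linear.coef_.round(2))    # signs map to explanatory factors
```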

Read the full survey
