Regulatory reporting generates huge data sets, but what happens next?

The lure of big data to support regulatory oversight of risk, collateral chains and market behavior is strong, but regulators must be ready for big data analytics. This creates a challenge for regulators and an opportunity for market participants and trade repositories. A guest post from UnaVista, part of the London Stock Exchange Group.

Regulators have three reasons for gathering vast amounts of market data: identifying the buildup of systemic risk, tracking collateral chains and identifying illegal activity. Each of these goals is admirable, yet experience to date shows that creating a unified, reliable data set can be problematic. Market participants and Trade Repositories (TRs) are working to meet their data reporting requirements, but they can also help regulators shape the debate by building a robust infrastructure that supports smart data consumption.

When regulators look at systemic risk, they are working to build a consolidated data source that reveals the directionality of trading positions. The herd nature of many leveraged market players can give rise to same-way positions, which can create cascading risks if markets move in the wrong direction. Using European Market Infrastructure Regulation (EMIR) data to track short volatility positions, for example, could give regulators days or weeks of advance notice of market stress if positions are concentrated and volatility spikes, as recently occurred.
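To make the concentration idea concrete, here is a minimal sketch, assuming simplified EMIR-style position reports with invented field names and an arbitrary alert threshold, of how a supervisor might flag same-way positioning in a single product:

```python
# Illustrative sketch only: flag concentrated same-way positioning in one product.
# Field names, sample figures and the alert threshold are assumptions for the example.
positions = [
    {"counterparty": "FUND_A",   "product": "SHORT_VOL_SWAP", "notional": -120_000_000},
    {"counterparty": "FUND_B",   "product": "SHORT_VOL_SWAP", "notional": -80_000_000},
    {"counterparty": "DEALER_C", "product": "SHORT_VOL_SWAP", "notional": 30_000_000},
]

def directional_concentration(reports, product):
    """Share of gross notional sitting on the dominant side of the market."""
    longs = sum(r["notional"] for r in reports if r["product"] == product and r["notional"] > 0)
    shorts = -sum(r["notional"] for r in reports if r["product"] == product and r["notional"] < 0)
    gross = longs + shorts
    return max(longs, shorts) / gross if gross else 0.0

ALERT_THRESHOLD = 0.75  # assumed supervisory threshold, for illustration only
share = directional_concentration(positions, "SHORT_VOL_SWAP")
if share > ALERT_THRESHOLD:
    print(f"Same-way positioning alert: {share:.0%} of gross notional on one side")
```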

Collateral chains as a source of risk gained their greatest prominence during the Global Financial Crisis. At the time, there was no way to know how institutions were connected via exposure on financing and collateral transactions. The web of trades became a Gordian knot, eventually freezing the markets. Knowing collateral velocity – the re-use of securities in the market from one participant to another – reveals much about the creation of credit in the banking system, which in turn increases understanding of potential systemic risk flashpoints. A general sense that collateral velocity has fallen from 4X in 2010 to between 1.5X and 2X today suggests that balance sheet constraints are working, but the trend could reverse as regulators create more flexibility for bank capital utilisation. The Securities Financing Transactions Regulation (SFTR) includes provisions for identifying collateral posted and collateral rehypothecation, which will help regulators monitor the situation.
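As a back-of-the-envelope illustration of the velocity figure quoted above, the ratio is simply the collateral outstanding in the market divided by the primary source collateral that entered the chain; the amounts below are invented for the example:

```python
# Minimal sketch of the collateral velocity ratio described above.
# The inputs are invented figures, not market data.
def collateral_velocity(collateral_outstanding: float, primary_source_collateral: float) -> float:
    """Re-use multiplier: how many times source collateral circulates through the market."""
    return collateral_outstanding / primary_source_collateral

# e.g. 3.0tn of collateral outstanding against 1.5tn of primary source securities
print(collateral_velocity(3.0e12, 1.5e12))  # -> 2.0, in line with the 1.5X-2X estimate today
```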

A large source of market data can provide regulators with an opportunity to spot illegal activity, such as LIBOR or FX collusion. Regulators can use large data sets to track a bank's swap positions, where that institution posts LIBOR or trades repo, and what the P&L impact may be from a move up or down. The data could provide early detection of illegal activity. In asset management, this is one objective of the Markets in Financial Instruments Regulation (MiFIR) and, given recent scandals, identifying illegal behavior may have become the top priority of regulatory reporting.

Challenges in creating a reliable data set

The end goal of large-scale data collection for analysing systemic risk, collateral chains and market behavior requires the ability to crunch the numbers. Obtaining a clean data set is no easy task, as trades with similar economics and risks may appear as different structures. For example, a securities financing transaction may start as a securities lending trade and be offset as a repo or a Total Return Swap (TRS). Regulators must be able to understand and digest the data, tracking risk in a holistic, cross-product way.
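The sketch below, which uses an assumed schema and field names rather than any regulator's actual data model, shows how economically similar financing trades might be collapsed into one cross-product exposure record:

```python
# Illustrative sketch: normalising securities lending, repo and total return swap
# bookings into one exposure record so risk can be tracked cross-product.
# The schema and the raw field names are assumptions for the example.
from dataclasses import dataclass

@dataclass
class FinancingExposure:
    counterparty: str
    security: str            # ISIN of the security lent, repo'd out or referenced
    financing_amount: float  # cash raised against the security
    structure: str           # "SEC_LENDING", "REPO" or "TRS"

def normalise(raw: dict) -> FinancingExposure:
    """Collapse product-specific bookings into a single cross-product view."""
    if raw["type"] == "repo":
        return FinancingExposure(raw["cpty"], raw["collateral_isin"], raw["cash"], "REPO")
    if raw["type"] == "sec_lending":
        return FinancingExposure(raw["cpty"], raw["loaned_isin"], raw["cash_collateral"], "SEC_LENDING")
    if raw["type"] == "trs":
        return FinancingExposure(raw["cpty"], raw["reference_isin"], raw["notional"], "TRS")
    raise ValueError(f"unknown structure: {raw['type']}")

trades = [
    {"type": "sec_lending", "cpty": "BANK_A", "loaned_isin": "DE0001102580", "cash_collateral": 10e6},
    {"type": "repo",        "cpty": "BANK_A", "collateral_isin": "DE0001102580", "cash": 10e6},
]
for t in trades:
    print(normalise(t))  # two different bookings, one economic picture
```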

Where a trade is booked also makes a difference. International financial institutions can choose the entity or branch through which they execute a trade. If one jurisdiction is included in a reporting scheme but another is not, the picture is incomplete and conclusions potentially flawed. Back-to-back trades made by a regional affiliate and then transferred to a main office for risk management can also create misdirection. The full breadth of transactions with similar economics should be captured across jurisdictions to avoid gaps in the data.

Further complicating matters is the fact that any regional data set is only a sample of actual market activity. That fact is often forgotten and regulatory reporting is assumed to be a complete data set, a fallacy known as “N equals all”. Because the data are a sample, they must be fitted to a model, which by definition views the market in terms of probability. Treating correlations observed in the sample as if they described the entire market – a step that would only be valid with a complete data set – will lead to mistakes.

Matching rates with LEIs and UTIs

Regulators will look to the data that TRs have collected to feed their analysis engines. Since there can be multiple TRs per market, and even more globally for certain sets of similar risks, there must be coordination to create a “golden data source” for regulators. To that end, the TRs must produce output that conforms to the data structure needs of the regulators, including industry-accepted Legal Entity Identifiers (LEIs) and Unique Transaction Identifiers (UTIs). This alphabet soup of reporting requirements must also include a reconciliation interface to the regulator's analytics to facilitate trade matching and exception processing.
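A minimal sketch of what UTI-based pairing and exception detection could look like, with assumed field names and sample values, follows:

```python
# Illustrative sketch: pair both counterparties' reports by UTI and flag breaks on
# key economic fields. Field names, matching fields and values are assumptions.
def match_reports(side_a: list[dict], side_b: list[dict], fields=("isin", "notional")):
    by_uti = {r["uti"]: r for r in side_b}
    matched, breaks, unpaired = [], [], []
    for rep in side_a:
        other = by_uti.get(rep["uti"])
        if other is None:
            unpaired.append(rep["uti"])            # no opposite-side report found
        elif all(rep[f] == other[f] for f in fields):
            matched.append(rep["uti"])             # clean match
        else:
            breaks.append(rep["uti"])              # exception to investigate
    return matched, breaks, unpaired

a = [{"uti": "UTI001", "isin": "GB00B16GWD56", "notional": 5_000_000}]
b = [{"uti": "UTI001", "isin": "GB00B16GWD56", "notional": 4_900_000}]
print(match_reports(a, b))  # -> ([], ['UTI001'], []): a break on notional
```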

There is no guarantee that trades will consistently match without a valid and reconciled global LEI, along with UTI databases that are kept updated and reconciled. Determining where to place responsibility for maintaining and reconciling LEIs and UTIs may be a thorny issue. TRs are leading the charge, in coordination with regulators, technology vendors and market participants, because without good identifiers TRs cannot report robust data. Industry associations and regulatory think tanks have seen the derivatives trade and counterparty identifier space as ripe for development: ISDA developed UTIPrefix.org, a free, non-proprietary tool to generate UTI prefixes, and the Financial Stability Board's “Governance arrangements for the unique transaction identifier (UTI), Conclusions and implementation plan” created a framework for understanding a complex situation. Regulators themselves are working early to encourage high matching rates. Learning from the lessons of EMIR, especially the difficulty in launching LEIs and UTIs, will be crucial if low matching rates are to be avoided for SFTR and other regulatory initiatives.
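On the LEI side, a basic local sanity check can at least catch malformed identifiers before they enter a golden source. The sketch below applies the 20-character format and the ISO/IEC 7064 MOD 97-10 check digits defined by ISO 17442; a real reconciliation would also confirm the record is current in the GLEIF database, which this sketch does not do:

```python
# Minimal sketch of a local LEI sanity check (format plus check digits).
# It does not verify registration status with GLEIF.
import re

def lei_is_valid(lei: str) -> bool:
    """Check the ISO 17442 format and the MOD 97-10 check digits of an LEI."""
    if not re.fullmatch(r"[A-Z0-9]{18}[0-9]{2}", lei):
        return False
    # Map letters to two-digit numbers (A=10 ... Z=35); the whole number mod 97 must equal 1.
    numeric = "".join(str(int(ch, 36)) for ch in lei)
    return int(numeric) % 97 == 1

# usage: run before accepting a reported counterparty identifier
# lei_is_valid(reported_lei)
```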

Planning for the best outcome

While matching rates under EMIR have been low, SFTR offers another opportunity to create a better reporting framework. Regulators are currently standing by their request for over 150 different data fields, including 99 fields for loan and collateral data, 16 fields covering re-use, 20 fields on margin data, and 18 fields allocated to counterparty data. The data cannot be sourced from a single system, since the requirements go well beyond what any one institution captures; vendors and utilities will need to supplement data fields for reporting. Reconciliation of the data may create additional stresses on internal systems. Regulators will want to know how all the information is related; this is a complex task when no single person or group holds cross-reporting responsibilities that extend to multiple areas. Public success of SFTR will be measured by the accuracy and timeliness of the data provided, but the real test is whether regulators can process the data and learn what it is telling them.

Delivering on regulatory requirements starts at the firm level but is rarely achieved without the help of multiple partners. We are witnessing the growth of a new ecosystem of solution providers supporting regulatory reporting across a range of financial products. At UnaVista, we have partnered with IHS Markit and Pirum, expect other firms to join that grouping, and may partner with others as well.

What Trade Repositories can do to help

Trade Repositories will support regulatory directives – that is their central market requirement, and they must be authorised by regulatory authorities to deliver it. While managing data is the foremost objective, Trade Repositories can provide market participants with additional services that support both their data reporting and their broader data management objectives. These services are part of a new wave of big data analytics that serves not just regulators but market participants themselves, and part of a larger trend of deriving value from the vast amounts of data that firms are collecting and submitting. As part of this effort, UnaVista has created the SFTR Accelerator programme, which includes a gap analysis tool that allows firms to get their data content correct in time for reporting. Clients can use the Accelerator as a project tool to assess their data readiness, validate their submissions against the current regulation's fields and data validations, and track progress to completion.
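As an illustration only – this is not the Accelerator itself – a field-level gap analysis can be thought of as a set comparison between the fields a firm can populate today and the fields each reporting table requires; the table and field names below are placeholders:

```python
# Illustrative sketch of a field-level gap analysis. Table names and field lists
# are placeholders, not the actual SFTR field catalogue.
REQUIRED_FIELDS = {
    "counterparty_data":   {"reporting_counterparty_lei", "other_counterparty_lei", "branch"},
    "loan_and_collateral": {"uti", "isin", "loan_value", "collateral_market_value"},
    "margin_data":         {"initial_margin_posted", "variation_margin_posted"},
    "reuse_data":          {"value_of_reused_collateral"},
}

def gap_analysis(available_fields: set[str]) -> dict[str, set[str]]:
    """Return the fields still to be sourced, per reporting table."""
    return {table: needed - available_fields for table, needed in REQUIRED_FIELDS.items()}

firm_can_populate = {"reporting_counterparty_lei", "other_counterparty_lei", "uti", "isin", "loan_value"}
for table, missing in gap_analysis(firm_can_populate).items():
    print(f"{table}: {len(missing)} field(s) still to source -> {sorted(missing)}")
```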

By capturing large data sets and understanding what regulators need, UnaVista Trade Repository has built in the validation rules required by regulation, plus additional layers of verification that market participants can use to assess their data quality. We also deliver peer-to-peer feedback, which helps firms understand how their data quality lines up against competitors'; this can be very valuable when speaking with regulators and internal compliance teams. These tools are direct drivers of reduced operational and regulatory risk across all asset classes.

Market participants can get ahead of the regulatory reporting process by planning ahead. EMIR taught an important lesson: waiting is not a good implementation strategy. Preparing internal systems to use LEIs and UTIs and to interface with trade repositories cannot be left to the last moment. Participating in industry associations to learn from the experiences of others, both good and bad, will ease the path toward compliance. Firms will rely on a mix of internal and external vendors, along with industry utilities and trading platforms, to find their preferred solutions. Regulators are relying on market participants to deliver robust data and on Trade Repositories to organise the data into a usable form. This can only be accomplished with broad market participation.

Andrea Ferrise is the Compliance Officer for UnaVista at the London Stock Exchange. He is a subject matter expert responsible for both assisting clients with the reporting of OTC and ETD derivatives and ensuring UnaVista Trade Repository complies with EMIR regulation. In addition, Andrea holds the primary responsibility of helping UnaVista become an approved SFTR Trade Repository and supporting clients to cope with a continuously changing regulatory landscape.

Prior to joining the LSEG, Andrea was an ESMA officer on the TR policy and Data reporting team where he was engaged in the drafting of the Technical Standards and Regulatory fees under SFTR. Andrea holds a Master’s Degree in economics from Solvay Business School in Brussels.
