Applying deep neural networks to derivative pricing problems

In an academic paper, Blanka Horvath from the Department of Mathematics at ETH Zurich, Aitor Muguruza from Imperial College London and Mehdi Tomas from École Polytechnique present a consistent neural-network-based calibration method for a number of volatility models — including the rough volatility family — that performs the calibration task within a few milliseconds for the full implied volatility surface.

The aim of neural networks in this work is an offline approximation of complex pricing functions, which are difficult to represent or time-consuming to evaluate by other means. The research team highlights how this perspective opens new horizons for quantitative modelling: the calibration bottleneck posed by slow pricing of derivative contracts is lifted.
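The idea can be illustrated with a minimal sketch. All specifics here are assumptions for illustration, not the paper's setup: a closed-form Black-Scholes call plays the role of the "slow" pricer, and a tiny one-hidden-layer numpy network plays the role of the approximating architecture. The expensive prices are computed once, offline; afterwards, evaluating the trained network is cheap.

```python
import math
import numpy as np

def bs_call(sigma, S=1.0, K=1.0, T=1.0, r=0.0):
    """Closed-form Black-Scholes call; stands in for a slow model pricer."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

rng = np.random.default_rng(0)
sigmas = rng.uniform(0.1, 0.5, size=(256, 1))            # parameters sampled offline
prices = np.array([[bs_call(s)] for s in sigmas[:, 0]])  # expensive labels, computed once

# A one-hidden-layer network trained by plain gradient descent on (parameter, price) pairs.
W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, out = forward(sigmas)
mse_before = float(np.mean((out - prices) ** 2))
lr = 0.1
for _ in range(2000):
    h, out = forward(sigmas)
    g_out = 2.0 * (out - prices) / len(sigmas)   # gradient of MSE w.r.t. output
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    W2 -= lr * h.T @ g_out;     b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * sigmas.T @ g_h;  b1 -= lr * g_h.sum(axis=0)
_, out = forward(sigmas)
mse_after = float(np.mean((out - prices) ** 2))  # evaluating the net is now cheap
```

Once trained, the network replaces the pricer inside the calibration loop, which is where the speed-up reported in the paper comes from.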

This brings several model families (such as rough volatility models) within the scope of applicability in industry practice. As is customary for machine learning, the form in which information from the available data is extracted and stored is crucial for network performance.

With this in mind, the researchers discuss how the approach addresses the usual challenges of machine learning solutions in a financial context: availability of training data, interpretability of results for regulators, and control over generalization errors. They present specific architectures for price approximation and calibration and optimize these with respect to different objectives regarding accuracy, speed and robustness.

They also find that including the intermediate step of learning pricing functions of (classical or rough) models before calibration significantly improves network performance compared to direct calibration to data.
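The two-step logic — first learn a pricing map, then calibrate against market quotes through that map — can be sketched in a toy form. Here the closed-form Black-Scholes price stands in for the learned pricing network, and a simple bisection on the (monotone-in-volatility) map recovers the parameter behind a synthetic quote; this is an illustration under those assumptions, not the paper's method.

```python
import math

def bs_call(sigma, S=1.0, K=1.0, T=1.0, r=0.0):
    """Closed-form Black-Scholes call; stands in for the learned pricing map."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def calibrate(market_price, lo=0.01, hi=1.0, tol=1e-8):
    """Bisection on the pricing map: find the volatility matching the quote.

    Works because the call price is monotone increasing in sigma.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(mid) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

quote = bs_call(0.25)          # synthetic market quote from a known parameter
sigma_hat = calibrate(quote)   # calibration recovers that parameter
```

In the paper's setting the pricing map is the trained network rather than a closed form, and the calibration is a multi-dimensional fit to the whole surface rather than a one-parameter bisection, but the division of labour is the same.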

Referencing a talk about the research, the think tank The Thalesians wrote: “Some of these insights are highly nontrivial, including the effect of precision on the convergence of the neural network optimizer. While it is customary in quantitative finance to use double precision, many GPU applications (including those used for the calibration of neural networks) by default use floats.”
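The precision point can be seen directly: a gradient-sized increment that double precision retains is swallowed entirely by single precision, because it falls below the spacing between adjacent float32 values near 1.0. This numpy illustration is our own, not taken from the paper or the talk.

```python
import numpy as np

step = np.float32(1e-8)                    # a gradient-sized update
x32 = np.float32(1.0) + step               # float32: the update rounds away entirely
x64 = np.float64(1.0) + np.float64(1e-8)   # float64: the update is retained
```

The float32 spacing at 1.0 is about 1.2e-7 (its machine epsilon), so any update smaller than roughly half of that is lost — one reason an optimizer that converges in double precision can stall in single precision.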

Access the academic paper
