JFDS: deep reinforcement learning for option replication and hedging

In a recent paper published in the Journal of Financial Data Science, researchers propose models that address the fundamental problem of option replication subject to discrete trading, round lotting, and nonlinear transaction costs, using state-of-the-art deep reinforcement learning (DRL) methods: deep Q-learning, deep Q-learning with Pop-Art, and proximal policy optimization (PPO).
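
The paper's own environments, pricing library, and hyperparameters are not reproduced here. As a rough illustration of how such a hedging problem can be posed as an episodic RL environment, the sketch below models a short call hedged in whole round lots with proportional transaction costs. All names (`HedgingEnv`, `lot`, `tcost`) and modeling choices (geometric Brownian motion dynamics, Black-Scholes marking) are our own assumptions, not the paper's setup.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call; intrinsic value at expiry."""
    if T <= 0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

class HedgingEnv:
    """Illustrative environment: hedge a short call on `contract` shares.
    State: (spot, time to expiry, current holding).
    Action: target stock holding, expressed in whole round lots.
    Reward: change in hedged book value minus proportional trading cost."""

    def __init__(self, S0=100.0, K=100.0, T=0.25, r=0.0, sigma=0.2,
                 n_steps=63, lot=10, contract=100, tcost=0.001, seed=0):
        self.S0, self.K, self.T, self.r, self.sigma = S0, K, T, r, sigma
        self.n_steps, self.lot, self.contract, self.tcost = n_steps, lot, contract, tcost
        self.dt = T / n_steps
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.S, self.t, self.shares = self.S0, 0, 0
        return np.array([self.S, self.T, 0.0])

    def step(self, target_lots):
        tau = self.T - self.t * self.dt
        old_book = self.shares * self.S - self.contract * bs_call_price(
            self.S, self.K, tau, self.r, self.sigma)
        # Round lotting: the agent can only hold whole multiples of `lot`.
        target = int(target_lots) * self.lot
        cost = self.tcost * abs(target - self.shares) * self.S
        self.shares = target
        # Evolve the spot one step under geometric Brownian motion.
        z = self.rng.standard_normal()
        self.S *= math.exp((self.r - 0.5 * self.sigma ** 2) * self.dt
                           + self.sigma * math.sqrt(self.dt) * z)
        self.t += 1
        tau_new = self.T - self.t * self.dt
        new_book = self.shares * self.S - self.contract * bs_call_price(
            self.S, self.K, tau_new, self.r, self.sigma)
        reward = new_book - old_book - cost
        done = self.t >= self.n_steps
        return np.array([self.S, tau_new, float(self.shares)]), reward, done
```

An agent trained to maximize cumulative reward in an environment of this shape is implicitly trading off replication error against transaction costs, which is the trade-off the DRL models in the paper learn.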

Each DRL model is trained to hedge a whole range of strikes, and no retraining is needed when the user switches to another strike within that range. The models are general: the user can plug in any option pricing and simulation library and then train them, with no further modification, to hedge arbitrary option portfolios. Through a series of simulations, the researchers show that the DRL models learn strategies similar to or better than delta hedging. Of the three models, PPO performs best in terms of profit and loss, training time, and the amount of data needed for training.
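
The delta-hedging baseline the DRL models are compared against can be sketched in a few lines: rebalance at each step to the Black-Scholes delta, rounded to whole lots, and pay proportional costs. The simulation below is a minimal, self-contained illustration of that baseline; all parameters and function names are illustrative and not taken from the paper, and the premium is omitted so only the dispersion of terminal P&L is meaningful.

```python
import math
import numpy as np

def bs_call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call."""
    if T <= 0:
        return 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

def simulate_pnl(n_paths=2000, n_steps=63, S0=100.0, K=100.0, T=0.25,
                 r=0.0, sigma=0.2, lot=10, contract=100, tcost=0.001,
                 hedge=True, seed=0):
    """Terminal P&L of a short call, delta-hedged in round lots (or unhedged)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    pnl = np.zeros(n_paths)
    for i in range(n_paths):
        S, shares, cash = S0, 0, 0.0
        for t in range(n_steps):
            tau = T - t * dt
            if hedge:
                # Target the BS delta, rounded to whole round lots.
                target = round(bs_call_delta(S, K, tau, r, sigma)
                               * contract / lot) * lot
            else:
                target = 0
            trade = target - shares
            cash -= trade * S + tcost * abs(trade) * S
            shares = target
            # One GBM step for the spot.
            S *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.standard_normal())
        payoff = max(S - K, 0.0) * contract
        pnl[i] = cash + shares * S - payoff
    return pnl
```

Comparing `simulate_pnl(hedge=True)` with `simulate_pnl(hedge=False)` shows the large variance reduction delta hedging delivers even with lotting and costs; the paper's contribution is that DRL policies match or beat this baseline once nonlinear costs make the Black-Scholes delta suboptimal.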

Access the full paper

