In the last 15 years, perhaps the most important development in financial markets has been algorithmic execution. Together with the fragmentation of liquidity across a multitude of new lit and dark trading venues, it’s led to an explosion in the number of execution options available to the buy-side. But the question remains: has all this led to better performance in trade execution?
Algo selection remains a thorny issue and has proved surprisingly hard to grapple with, even with technological or statistical solutions. Measuring and reporting best execution continues to be loosely defined by regulators and is open to interpretation. This, coupled with a bewilderingly wide range of alternative algos to choose from and the fact that the majority of transaction cost analysis (TCA) tools are provided by either brokers or trading platform providers, has made algo selection challenging for the buy-side. It has also left sell-side firms struggling to differentiate themselves from their peers.
New algorithms are launched into the market with great fanfare, but data supporting their performance claims has sometimes lagged behind. Third-party vendors have tried to provide technology-based solutions to fill this gap. Independent best execution benchmarks have been created, ranging from TCA tools to “broker wheels” that rank the execution quality of brokerage firms. Many Order Execution Management System (OEMS) vendors now provide these analyses directly to the buy-side trading desk through their execution platforms.
However, like all algorithmic approaches, these have several inherent flaws, and while buy-side institutions are obliged to repeat the disclaimer that “past performance is no guarantee of future success”, the caveat is just as apt when benchmarking algos as it is when back-testing portfolios.
Ultimately, traditional tools and methodologies fail to address two fundamental questions: how can we determine what future performance will look like using only historical data, and how can we measure the impact that any individual trade has had on the market?
The first question stems from the fact that algos are optimized using historical time-series data. This is a significant business, with the industry spending hundreds of millions of dollars on historical tick data and on data capture and replay technology. And while historical data is an important tool in verifying execution strategies, it can be highly misleading when it is weak or of poor quality, or when it is unrepresentative of future market dynamics. Where either is true, algos will not perform to their advertised benchmarks.
The second question is an existential one: historical data only shows us the impact a trade has had, not what would have happened had the trade not occurred. To achieve optimal execution, what is really needed is data that represents market dynamics as they exist today or, better still, as they may look in the future.
Agent-based simulations
Agent-based simulations provide a framework for replicating complex adaptive systems. This alternative approach is gaining traction as agent-based methodologies are progressively adopted by algo execution teams, allowing them to train their algos in a wider range of environments than is possible with purely historical data.
Like an “algo-gym”, agent-based simulations can help ensure execution algorithms are ready to be deployed in as wide a range of potential futures as possible. Rather than training algos purely on the “surface” data produced by exchanges, simulations can re-create the data-generating process itself, producing synthetic market data that is indistinguishable from a given real-world time series.
Crucially, having recreated the underlying system, such tools offer the significant advantage of parameter adjustment, which enables users to construct any potential scenario they choose. By introducing multiple potential market dynamics, sell-side brokers can ensure their algos function under many types of stressed scenario and perform well in the broadest range of eventualities, as the sketch below illustrates.
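To make this concrete, here is a minimal sketch of how a simulated market might be parameterised so that a single calibrated baseline can be perturbed into a grid of stressed scenarios. It is purely illustrative: the `ScenarioConfig` class and its parameter names are assumptions for the example, not any particular vendor’s API.

```python
from dataclasses import dataclass, replace
from itertools import product


@dataclass(frozen=True)
class ScenarioConfig:
    """Hypothetical knobs controlling a simulated market's dynamics."""
    volatility_scale: float = 1.0   # multiplier on baseline volatility
    liquidity_scale: float = 1.0    # multiplier on resting depth in the book
    momentum_agents: int = 20       # number of trend-following agents
    noise_agents: int = 100         # number of random (noise) traders


# Start from a baseline calibrated to recent history, then perturb it
# along several dimensions to build a grid of stressed scenarios.
baseline = ScenarioConfig()

stress_grid = [
    replace(baseline,
            volatility_scale=vol,
            liquidity_scale=liq,
            momentum_agents=mom)
    for vol, liq, mom in product([1.0, 2.0, 4.0],    # calm to turbulent
                                 [1.0, 0.5, 0.25],   # deep to thin books
                                 [20, 60])           # weak to strong trends
]

print(f"{len(stress_grid)} scenarios to test the execution algo against")
```

Each configuration in the grid would then drive a separate simulation run, so the execution algo is exercised against thin books, turbulent regimes and crowded momentum flows rather than only against the conditions present in the historical record.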
This approach has other important advantages, but some understanding of the mechanics of these simulations is needed for them to become apparent. The individual “agents” inside the simulation are themselves autonomous trading algorithms, and it is the micro-level interactions of these entities, as they take trading decisions and submit orders to a venue, that produce the synthetic market data, as the toy example below shows.
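The following sketch illustrates that basic mechanic with deliberately toy rules: a population of autonomous agents each decides a signed order each step, and the aggregated order flow is what moves the synthetic price. It is not a production market simulator, and every name in it (`Agent`, `run_simulation`, the impact constant) is an assumption made for illustration.

```python
import random


class Agent:
    """A toy autonomous trading agent: decides a signed order size each step."""
    def __init__(self, style: str, rng: random.Random):
        self.style = style
        self.rng = rng

    def decide(self, price_history: list[float]) -> float:
        if self.style == "noise":
            # Noise traders buy or sell at random.
            return self.rng.choice([-1.0, 1.0])
        # Momentum traders lean with the most recent price move.
        if len(price_history) < 2:
            return 0.0
        return 1.0 if price_history[-1] > price_history[-2] else -1.0


def run_simulation(n_steps: int = 500, seed: int = 0,
                   n_noise: int = 100, n_momentum: int = 20,
                   impact_per_unit: float = 0.01) -> list[float]:
    """Generate a synthetic price path from the agents' aggregated order flow."""
    rng = random.Random(seed)
    agents = ([Agent("noise", rng) for _ in range(n_noise)] +
              [Agent("momentum", rng) for _ in range(n_momentum)])
    prices = [100.0]
    for _ in range(n_steps):
        net_flow = sum(a.decide(prices) for a in agents)
        # The price moves with the net order-flow imbalance (a crude impact model).
        prices.append(prices[-1] + impact_per_unit * net_flow)
    return prices


synthetic_path = run_simulation()
print(f"final synthetic price: {synthetic_path[-1]:.2f}")
```

A real simulator would replace the crude price-update rule with a full limit order book and far richer agent behaviours, but the emergent character of the data is the same: it comes from the interaction of the agents, not from replaying a historical tape.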
An execution algo can be inserted into this agent-based framework while the simulation is running, showing not only how it reacts to other algorithms but also how the emergent behaviour of the market responds to its trades. For the first time, algos can be tested against adaptive strategies that may be taking advantage of a sell-side firm’s current execution strategy, causing market slippage and price degradation. By running large numbers of identical simulations, either with or without the order, a quantifiable estimate of likely market impact can be calculated, ultimately helping firms optimise every single trade they execute.
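That counterfactual measurement can be illustrated with a paired-run sketch: each simulation is run twice from the same random seed, once with the order being worked and once without, and the difference between the resulting price paths, averaged over many seeds, gives an estimate of the order’s impact. The model below is deliberately crude, using a seeded random walk with a linear impact term so that the example is self-contained and runnable; the function names and parameters are hypothetical, and an agent-based simulator would replace the price process with the emergent behaviour of its agents.

```python
import random
from statistics import mean


def simulate_path(seed: int, child_order_qty: float = 0.0,
                  n_steps: int = 200, impact_per_unit: float = 0.002) -> list[float]:
    """Seeded toy price path; the same seed reproduces the same background market."""
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(n_steps):
        background_move = rng.gauss(0.0, 0.05)
        # Each step the execution algo trades child_order_qty, nudging the price.
        prices.append(prices[-1] + background_move + impact_per_unit * child_order_qty)
    return prices


def estimated_impact(n_runs: int = 1000, child_order_qty: float = 5.0) -> float:
    """Average terminal-price difference between with-order and without-order runs."""
    diffs = []
    for seed in range(n_runs):
        with_order = simulate_path(seed, child_order_qty=child_order_qty)
        without_order = simulate_path(seed, child_order_qty=0.0)
        diffs.append(with_order[-1] - without_order[-1])
    return mean(diffs)


print(f"estimated market impact: {estimated_impact():.3f} price units")
```

Because both runs in each pair share a seed, the background market cancels out and only the footprint of the order remains, which is exactly the with-versus-without comparison that historical data alone can never provide.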