📄 New paper “Deep Reinforcement Learning for Active High Frequency Trading”


Antonio Briola, Jeremy Turiel, Riccardo Marcaccioli, Tomaso Aste

We introduce the first end-to-end Deep Reinforcement Learning based framework for active high frequency trading. We train DRL agents to trade one unit of Intel Corporation stock by employing the Proximal Policy Optimization algorithm. The training is performed on three contiguous months of high frequency Limit Order Book data. In order to maximise the signal-to-noise ratio in the training data, we compose the latter by selecting only the training samples with the largest price changes. The test is then carried out on the following month of data. Hyperparameters are tuned using the Sequential Model-Based Optimization technique. We consider three different state characterizations, which differ in the LOB-based meta-features they include. The agents learn trading strategies that produce stable positive returns in spite of the highly stochastic and non-stationary environment, which is remarkable in itself. Analysing the agents' performance on the test data, we argue that the agents are able to create a dynamic representation of the underlying environment, highlighting the occasional regularities present in the data and exploiting them to build long-term profitable trading strategies.
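
To make the setup concrete, here is a minimal, hypothetical sketch of how such an agent could be trained with PPO. It is not the authors' code: it assumes the `stable-baselines3` and `gymnasium` libraries, and it replaces the real Intel (INTC) Limit Order Book data with a toy `ToyLOBEnv` environment that emits random meta-features, keeping only the discrete sell / stay / buy action structure described in the abstract.

```python
# Hedged sketch, NOT the paper's implementation: a toy stand-in for an
# LOB-based trading environment, trained with PPO from stable-baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyLOBEnv(gym.Env):
    """Hypothetical LOB-style environment.

    Observation: a vector of LOB-derived meta-features (here: random noise).
    Actions: 0 = sell, 1 = stay, 2 = buy, mirroring the single-unit setup
    described in the abstract.
    """

    def __init__(self, n_features: int = 40, episode_len: int = 500):
        super().__init__()
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(n_features,), dtype=np.float32
        )
        self.action_space = spaces.Discrete(3)
        self.episode_len = episode_len

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.position = 0  # -1 short, 0 flat, +1 long (at most one unit)
        return self._obs(), {}

    def step(self, action):
        # Toy price move; reward is the P&L of the held position.
        price_change = float(self.np_random.normal(0.0, 1.0))
        self.position = {0: -1, 1: self.position, 2: 1}[int(action)]
        reward = self.position * price_change
        self.t += 1
        terminated = self.t >= self.episode_len
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return self.np_random.normal(
            size=self.observation_space.shape
        ).astype(np.float32)


env = ToyLOBEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)  # the paper trains on months of real LOB data
```

In the paper the observations are built from LOB snapshots (three different meta-feature sets are compared), whereas this sketch only reproduces the shape of the problem: a feature vector in, one of three discrete actions out.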

https://arxiv.org/abs/2101.07107
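
For the Sequential Model-Based Optimization step mentioned in the abstract, one illustrative option (the paper does not say which library was used) is Optuna's TPE sampler, a common SMBO implementation. The snippet below is a sketch under that assumption and reuses the `ToyLOBEnv` class from the example above.

```python
# Hedged sketch of SMBO-style hyperparameter tuning for PPO via Optuna's TPE
# sampler; assumes ToyLOBEnv from the previous sketch is defined in this module.
import optuna
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy


def objective(trial: optuna.Trial) -> float:
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "gamma": trial.suggest_float("gamma", 0.9, 0.9999),
        "clip_range": trial.suggest_float("clip_range", 0.1, 0.3),
        "n_steps": trial.suggest_categorical("n_steps", [128, 256, 512]),
    }
    env = ToyLOBEnv()
    model = PPO("MlpPolicy", env, verbose=0, **params)
    model.learn(total_timesteps=5_000)
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=5)
    return mean_reward


study = optuna.create_study(
    direction="maximize", sampler=optuna.samplers.TPESampler()
)
study.optimize(objective, n_trials=20)
print(study.best_params)
```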
