
Reinforcement Learning in Trading – Part VI

Posted January 27, 2021
Ishan Shah
QuantInsti

See Part I, Part II, Part III, Part IV and Part V to get familiar with important concepts such as the Bellman equation, deep learning, mean-reverting strategies, Q-learning and the Q-table.

How do we train the artificial neural network?

We will use the concept of experience replay. The agent's past experiences are stored in a replay buffer, also called replay memory. In layman's terms, each entry records the state, the action taken and the reward received. Random samples of these transitions are then used to train the neural network, as sketched below.
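Here is a minimal sketch of a replay buffer in Python. The class name, the transition fields and the capacity are our own illustrative choices, not part of the original series; a full implementation would typically also store a "done" flag and feed sampled batches into the network's training step.

import random
from collections import deque

class ReplayBuffer:
    """Stores (state, action, reward, next_state) transitions for training."""

    def __init__(self, capacity=10000):
        # Once capacity is reached, the oldest experiences are discarded.
        self.memory = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        # Record one step of the agent's experience.
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # A random mini-batch breaks the correlation between consecutive
        # market observations, which stabilises training.
        return random.sample(self.memory, batch_size)

In a typical training loop, the agent would call buffer.add(...) after every step in the environment and, once enough transitions have accumulated, periodically train the network on a mini-batch drawn with buffer.sample(64).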


Key Challenges

There are two main issues to consider while building an RL model:

Type 2 Chaos

This might sound like a science fiction concept, but it is very real. While we are training the RL model, we are working in isolation: the model is not interacting with the market. But once it is deployed, we don't know how its own trades will affect the market.

Type 2 chaos arises when the observer of a situation can influence the situation itself. This effect is difficult to quantify during training. However, it is reasonable to assume that the RL model keeps learning after deployment and can therefore correct itself over time.

Noise in Financial Data

The RL model can pick up the random noise that is usually present in financial data and treat it as a signal to act upon. This can lead to inaccurate trading signals.

While there are ways to remove noise, we have to be careful of the tradeoff between removing noise and losing important information, as illustrated below.
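As a simple illustration of that tradeoff, the snippet below smooths a price series with moving averages of two window lengths. The price values and window sizes are made up for demonstration: a short window retains detail (and noise), while a long window suppresses noise but lags and can erase genuine moves.

import pandas as pd

# Hypothetical price series, for illustration only.
prices = pd.Series([100.0, 101.5, 99.8, 102.2, 101.0, 103.4, 102.8, 104.1])

# A short window keeps more detail but also more noise.
smooth_short = prices.rolling(window=2).mean()

# A long window removes more noise but lags behind real price moves.
smooth_long = prices.rolling(window=5).mean()

print(pd.DataFrame({"price": prices, "ma2": smooth_short, "ma5": smooth_long}))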

While these issues should not be ignored, there are various techniques available to mitigate them and build a better RL model for trading.

Conclusion

We have only scratched the surface of reinforcement learning with this introduction to the components that make up a reinforcement learning system. The next step is to take this learning forward by implementing your own RL system to backtest and paper trade on real-world market data.

Visit QuantInsti to download practical code and to learn more about their deep reinforcement learning educational materials: https://blog.quantinsti.com/reinforcement-learning-trading/.

Disclosure: Interactive Brokers

Information posted on IBKR Campus that is provided by third-parties does NOT constitute a recommendation that you should contract for the services of that third party. Third-party participants who contribute to IBKR Campus are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.

This material is from QuantInsti and is being posted with its permission. The views expressed in this material are solely those of the author and/or QuantInsti and Interactive Brokers is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to buy or sell any security. It should not be construed as research or investment advice or a recommendation to buy, sell or hold any security or commodity. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.
