
# Reinforcement Learning in Trading – Part IV

###### Posted December 10, 2020 at 10:30 am
Ishan Shah
QuantInsti

See Part I, Part II and Part III to get started.

## Q Table and Q Learning

The Q-table and Q-learning might sound fancy, but they are simple concepts.

At each time step, the RL agent needs to decide which action to take. What if the agent had a table that told her which action will give the maximum reward? She could then simply select that action. This table is the Q-table.

In the Q-table, the rows are the states (in this case, the days) and the columns are the actions (in this case, hold and sell). The values in this table are called the Q-values.

From the above Q-table, which action would the RL agent take on 23 July? Yes, that’s right: a “hold” action, since its Q-value of 0.966 is greater than the Q-value of 0.954 for the sell action.
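This lookup can be sketched in a few lines. The 23 July Q-values (0.966 for hold, 0.954 for sell) come from the article; the other rows are hypothetical placeholders, since the full table is not reproduced here.

```python
import pandas as pd

# Sketch of a Q-table: rows are states (days), columns are actions.
# Only the 23 July row uses values from the article; the rest are
# hypothetical placeholders.
q_table = pd.DataFrame(
    {"Hold": [0.970, 0.966, 0.960], "Sell": [0.940, 0.954, 0.958]},
    index=["2020-07-22", "2020-07-23", "2020-07-24"],
)

# The agent simply picks the action with the highest Q-value
# for its current state.
best_action = q_table.loc["2020-07-23"].idxmax()
print(best_action)  # -> Hold
```

Given the table, action selection is just a row-wise argmax; all the hard work lies in filling in the Q-values, which is what the rest of the article builds towards.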

But how do we create the Q-table?

Let’s create a Q-table with the help of an example. For simplicity’s sake, let us use the same example of price data from July 22 to July 31, 2020. We have added the percentage returns and cumulative returns as shown below.

You bought one share of Apple a few days back and have no capital left. Your only two choices are “hold” or “sell”. As a first step, you need to create a simple reward table.

If we decide to hold, we get no reward until 31 July, when we receive a reward of 1.09. If we decide to sell on any given day, the reward is the cumulative return up to that day. The reward table (R-table) looks like the one below. If we let the RL model choose from the reward table alone, it will sell the stock immediately and get a reward of 0.95.
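The reward table can be built directly from the price series. A minimal sketch, assuming hypothetical daily prices and a hypothetical purchase price (the article's actual AAPL prices are not reproduced here; only the immediate sell reward of roughly 0.95 and the final reward of roughly 1.09 are taken from the text):

```python
import pandas as pd

buy_price = 97.25  # hypothetical purchase price
# Hypothetical daily close prices for the holding period
prices = pd.Series([92.4, 94.0, 98.0, 101.0, 104.0, 106.0])

# Cumulative (gross) return relative to the purchase price
cum_returns = prices / buy_price

# Reward table: selling on a day pays the cumulative return to date;
# holding pays nothing until the final day, where it pays the final
# cumulative return.
r_table = pd.DataFrame({"Hold": 0.0, "Sell": cum_returns})
r_table.iloc[-1, r_table.columns.get_loc("Hold")] = cum_returns.iloc[-1]

print(r_table.round(2))
```

A greedy choice over this table sells on day one, because the hold column is zero everywhere except the last row, which is exactly the problem the Q-table is meant to fix.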

But the price is expected to increase to \$106 on July 31, a gain of 9%, so you should hold on to the stock until then. We need to represent this information so that the RL agent learns to hold rather than sell.

How do we go about it? This is where the Q-table comes in. You can start by copying the reward table into the Q-table, then calculate the implied reward for the hold action on each day using the Bellman equation.
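The Bellman equation itself is covered in the next installment, but the backward-fill idea can be sketched now. This is a hedged sketch, not the article's exact calculation: the discount factor `gamma` and the intermediate sell rewards are hypothetical choices; only the 0.95 and 1.09 endpoints come from the text.

```python
import numpy as np

gamma = 0.99  # discount factor (hypothetical choice)

# Start from the reward table: hold pays nothing until the last day;
# sell pays the cumulative return to date (intermediate values are
# hypothetical, the endpoints follow the article).
hold_r = np.array([0.0, 0.0, 0.0, 0.0, 1.09])
sell_r = np.array([0.95, 0.97, 1.00, 1.04, 1.09])
q_hold, q_sell = hold_r.copy(), sell_r.copy()

# Bellman-style backup: working backwards, holding today is worth the
# immediate reward plus the discounted value of the best action tomorrow.
for t in range(len(q_hold) - 2, -1, -1):
    q_hold[t] = hold_r[t] + gamma * max(q_hold[t + 1], q_sell[t + 1])

print(q_hold.round(3))
```

After this backward pass, the hold Q-values on the early days exceed the sell rewards, so the agent now prefers to hold until the final day instead of selling immediately.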

Stay tuned for the next installment in which Ishan will demonstrate the Bellman equation.