
# How to Create Kalman Filter in Python – Part IV

###### Posted February 18, 2021 at 3:21 am
Rekhit Pachanekar
QuantInsti

See Part I, Part II and Part III of this series to get started with the statistical terms and concepts used in the Kalman filter.

### Kalman Gain equation

Recall that we talked about the normal distribution in the initial part of this blog. We can now say that the errors, whether in the measurement or the process, are random and normally distributed. Taking it further, the estimated values are most likely to lie within one standard deviation of the actual value.

Now, the Kalman gain is a term which quantifies the uncertainty of the error in the estimate. Put simply, we denote the estimate uncertainty as ρ.

Since we use σ as the standard deviation, we denote the variance of the measurement, σ2, which captures the measurement uncertainty, as r. Thus, we can write the Kalman gain as:

K = ρ / (ρ + r)

In the Kalman filter, the Kalman gain determines how much weight the new measurement receives when updating the estimate: a high gain means the measurement is trusted more than the current estimate, and a low gain means the opposite.
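To make this concrete, here is a minimal one-dimensional sketch of the gain and the resulting estimate update. The function names and example values are illustrative assumptions, not code from the original series; ρ is written as `rho` and the measurement variance as `r`, following the notation above.

```python
# Minimal one-dimensional sketch of the Kalman gain and estimate update.
# rho is the estimate uncertainty (variance) and r is the measurement
# uncertainty (variance), following the notation in the text.

def kalman_gain(rho, r):
    """Kalman gain: K = rho / (rho + r)."""
    return rho / (rho + r)

def update_estimate(estimate, measurement, gain):
    """Move the estimate toward the measurement in proportion to the gain."""
    return estimate + gain * (measurement - estimate)

# When the estimate uncertainty is small relative to the measurement
# uncertainty, the gain is small and the estimate barely moves.
K = kalman_gain(rho=0.25, r=0.75)                      # K = 0.25
new_estimate = update_estimate(estimate=10.0, measurement=14.0, gain=K)
print(K, new_estimate)                                 # 0.25 11.0
```

Note how a gain of 1 would replace the estimate with the measurement outright, while a gain of 0 would ignore the measurement entirely.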

Having seen how the Kalman gain is computed, in the next equation we will understand how to update the estimate uncertainty.

Before we move to the next equation in the Kalman filter tutorial, let us recap the concepts we have covered so far. We first looked at the state update equation, which is the main equation of the Kalman filter. We then understood how we extrapolate the current estimate to the anticipated value, which becomes the current estimate in the next step. The third equation is the Kalman gain equation, which tells us how the uncertainty in the error plays a role in calculating the Kalman gain. Next, we will see how the Kalman gain is used to update the estimate uncertainty. Let's move on to the fourth equation in the Kalman filter tutorial.
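The equations recapped above can be strung together as a tiny filtering loop. This is a hedged sketch under simplifying assumptions: the hidden state is a constant, and since the estimate-uncertainty update is only covered in the next part of the series, ρ is held fixed here; all variable names and numbers are illustrative, not taken from the article.

```python
# A tiny loop combining the equations covered so far, for a constant
# hidden state. The estimate uncertainty rho is held fixed because its
# update equation is the subject of the next part of the series.

def kalman_gain(rho, r):
    """Kalman gain: K = rho / (rho + r)."""
    return rho / (rho + r)

x = 60.0      # initial (rough) estimate; the assumed true value is 50
rho = 100.0   # large estimate uncertainty: we do not trust the guess
r = 4.0       # measurement uncertainty (variance)

for z in [49.0, 51.5, 50.2, 49.8]:   # noisy measurements around 50
    K = kalman_gain(rho, r)          # Kalman gain equation
    x = x + K * (z - x)              # state update equation
    # For a static state, the extrapolated estimate equals the updated one.

print(round(x, 2))                   # prints 49.82
```

Because ρ is much larger than r, the gain stays close to 1 and the filter tracks the measurements closely; once the uncertainty update from the next installment is added, the gain shrinks as confidence in the estimate grows.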

Stay tuned for the next installment, in which Rekhit will cover the estimate uncertainty update and extrapolation equations.