The actual power of the Kalman filter lies in its ability to estimate states/variables that can't be measured directly, not just in filtering time-series data.
With a Kalman filter you not only get a filtered output, but also an estimate of the state of the system at each step. No amount of low-pass filtering can achieve that.
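A minimal sketch of what that means, assuming a constant-velocity model and made-up noise levels: only position is ever measured, yet the filter recovers the velocity state.

```python
# Minimal sketch: constant-velocity model, position-only measurements (assumed values).
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])             # we measure position only
Q = 1e-4 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[0.25]])                 # measurement noise covariance (assumed)

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial error covariance (diagonal)

rng = np.random.default_rng(0)
true_velocity = 2.0
for k in range(200):
    z = true_velocity * k * dt + rng.normal(scale=0.5)  # noisy position reading
    x = F @ x                                           # predict state
    P = F @ P @ F.T + Q                                 # predict covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)               # update with measurement
    P = (np.eye(2) - K @ H) @ P                         # update covariance

print(x[1, 0])  # velocity estimate, near 2.0, though velocity was never measured
```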
Going down the rabbit hole indeed... Kalman filters are Linear Quadratic Estimators (LQE) in control theory. Very rich field with very powerful techniques.
> No amount of low-pass filtering can achieve that.
Can't a simple IIR "low pass" do that for certain systems once they reach a steady state? I don't remember exactly when the KF becomes an IIR filter, but intuitively, for certain systems the gain constants should stabilize after a while, and if you then just use those fixed constants you still get optimality, provided the system stabilizes.
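To illustrate my intuition with a toy 1-D sketch (hypothetical Q/R values, scalar random-walk model with F = H = 1): the Kalman gain settles to a constant, at which point the update x ← (1 − k)·x + k·z is exactly a first-order IIR low-pass.

```python
# Toy 1-D sketch, assumed Q/R values: F = 1, H = 1 (scalar random walk).
Q, R = 0.01, 1.0
P, gains = 1.0, []
for _ in range(50):
    P = P + Q              # predict
    k = P / (P + R)        # Kalman gain
    P = (1 - k) * P        # update
    gains.append(k)

print(gains[0], gains[-1])  # the gain settles (~0.095 here); from then on the
                            # KF update x += k*(z - x) is a fixed-alpha IIR/EMA
```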
But you cannot estimate the values of unmeasured states with just a state-space representation, even under conditions of observability. You need a state estimator for that, and the Kalman filter is such an estimator.
Also, while it’s true the covariance matrix is initialized with diagonals (we often don’t have good priors), the values are being dynamically re-estimated from measurements at every iteration. The initial parameters are just that — initial.
If certain assumptions hold, i.e., Gaussian noise and an approximately correct LTI model, the covariance matrix converges to a stable covariance matrix that reflects the current reality. Some of those assumptions are relaxed with EKFs and UKFs, but they're essentially built on the Kalman framework (the competing one being MHE).
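A quick sketch of that convergence claim (hypothetical tracking matrices; requires NumPy and SciPy): iterating the covariance recursion from a deliberately bad diagonal prior lands on the fixed point of the discrete algebraic Riccati equation.

```python
# Sketch with hypothetical tracking matrices.
import numpy as np
from scipy.linalg import solve_discrete_are

F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[1.0]])

P = 100.0 * np.eye(2)                        # deliberately bad diagonal prior
for _ in range(1000):
    P = F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(2) - K @ H) @ P              # update

P_inf = solve_discrete_are(F.T, H.T, Q, R)   # steady-state *predicted* covariance
print(np.allclose(F @ P @ F.T + Q, P_inf))   # True: same fixed point
```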
The Kalman filter is a state estimator, not just a filter, and it is used in industrial applications with advanced control systems such as model predictive control (MPC).
> Also, while it’s true the covariance matrix is initialized with diagonals (we often don’t have good priors), the values are being dynamically re-estimated from measurements at every iteration. The initial parameters are just that — initial.
The P covariance matrix does not depend on online input though, right? So the initial value of P, together with the covariance matrices Q and R and the model matrices H and F, determines where P will end up. I mean, there is no online parameter estimation (dependent on the input). Or am I getting it wrong?
I certainly agree that the Kalman filter can be useful. But I don't like how it is presented.
As the author of the blog writes:
"At times its ability to extract accurate information seems almost magical"
You're correct. The Kalman gain in the linear case can be computed offline, since it's only a function of the model parameters. The P matrix is updated at every iteration, but it too is only a function of Q and R (which are determined offline) and the previous P, which at time 0 is initialized with diagonals. P does represent the covariance of (x_pred − x_act), but it doesn't take in measurements at every iteration. I appreciate the callout.
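A toy 1-D check of that (assumed Q/R values, F = H = 1): run the same linear KF on two completely different measurement streams and the covariance trajectories come out identical, because the P update never reads z.

```python
# Toy 1-D model (F = H = 1), assumed Q/R values.
import numpy as np

def run_filter(zs, Q=0.01, R=1.0):
    x, P, Ps = 0.0, 1.0, []
    for z in zs:
        P = P + Q                # predict
        K = P / (P + R)          # gain
        x = x + K * (z - x)      # the state update uses z...
        P = (1 - K) * P          # ...but the covariance update does not
        Ps.append(P)
    return Ps

rng = np.random.default_rng(0)
print(run_filter(rng.normal(size=50)) == run_filter(100 + rng.normal(size=50)))
# True: completely different measurements, identical covariance trajectory
```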
Like most things, practical implementations involve extra steps beyond the basic theory, like re-estimating the initial P every few iterations, as well as re-estimating Q and R from online data.
It's not magic, and yes, it is a form of recursive linear regression, but it does work in real life.
Yes, the P covariance matrix does depend on online input; anyone who has ever used them would know that, so perhaps you have no idea what you are talking about.
If instead of making such posts you would like to learn more:
The P matrix is changed both through the model and through the Kalman gain, and in both of these steps it can depend on online input (the model can depend on the state, and the Kalman gain depends on the sensor measurements).
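To be precise about where that happens, here's a hypothetical scalar EKF step with a made-up process model f(x) = sin(x) and H = 1: the Jacobian is evaluated at the current state estimate, which depends on past measurements, so in the nonlinear case P does inherit a measurement dependence.

```python
# Hypothetical scalar EKF step; f(x) = sin(x) is a made-up process model, H = 1.
import numpy as np

def ekf_step(x, P, z, Q=0.01, R=0.1):
    F = np.cos(x)                    # Jacobian of f, evaluated at the estimate
    x_pred = np.sin(x)               # nonlinear predict
    P_pred = F * P * F + Q           # P now depends on x, hence on past z's
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred
```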
No, this is a common misunderstanding. If you have two noisy measurements of the same quantity and you combine them, you take a weighted average of the two measurements to get a mean, but the variance of your estimate depends only on the variances of the measurements.
You would intuitively expect that if your two measurements are very far apart, you should be less confident in the estimate. A Kalman filter does not do that.
What you can calculate is the likelihood of your data given the estimate, and that is going to be small if your two measurements are far apart.
The way you make it sound, a KF would trust a measurement more or less depending on its value. That is simply not true. It does not, e.g., reject outliers.
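Concretely, here is the standard two-measurement fusion (a minimal sketch): the fused mean weights the values, but the fused variance is computed from the measurement variances alone.

```python
# Sketch: fusing two independent measurements z1, z2 with variances var1, var2.
def fuse(z1, var1, z2, var2):
    w = var2 / (var1 + var2)               # weight on z1
    mean = w * z1 + (1 - w) * z2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)  # independent of z1 and z2
    return mean, var

# Agreeing and wildly disagreeing measurements yield the same variance:
print(fuse(0.0, 1.0, 0.1, 1.0)[1] == fuse(0.0, 1.0, 100.0, 1.0)[1])  # True
```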
I didn't mention anything about your "common misunderstanding", which I agree with. If you wanted to do outlier detection or anything like that, you would need to add it separately to your codebase (which is not that difficult to do).
My point was, and still is, that thinking about Kalman filters as just a measurement-filtering system misses the point. An important value of measurements (especially in the more interesting formulations of the KF, such as the UKF) is updating the covariance matrices over time, which affects the prior distributions of the state over time and thus affects your state estimation.
But it still sounds like you're saying the Kalman filter updates the state covariance matrix based on the measurement, which I'm taking to mean the nominal value of a measurement.
Okay, but it's not very useful, is it? Sure, you can propagate the state vectors directly, but even a tiny amount of noise may be amplified and can make the results useless. Also, read up on why the Euler method is bad for solving ODEs numerically.
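A rough illustration of the amplification point (toy numbers): integrating a noisy rate open-loop is a random walk, so the spread of the result grows with the number of steps instead of settling.

```python
# Toy numbers: 1000 trials of open-loop integration of a noisy rate.
import numpy as np

rng = np.random.default_rng(0)
paths = np.cumsum(rng.normal(scale=0.1, size=(1000, 500)), axis=1)
print(paths[:, 10].std(), paths[:, -1].std())  # spread keeps growing with step count
```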