Hacker News

> Also, while it’s true the covariance matrix is initialized with diagonals (we often don’t have good priors), the values are being dynamically re-estimated from measurements at every iteration. The initial parameters are just that — initial.

The P covariance matrix does not depend on online input though, right? So the initial values of the covariance matrices Q and R, together with the model matrices H and F, determine what P will converge to. I mean, there is no online parameter estimation (dependent on input). Or am I getting it wrong?

I do agree that Kalman filters can be useful. But I don't like how they are presented.

As the author of the blog writes: "At times its ability to extract accurate information seems almost magical"




You’re correct. The Kalman gain in the linear case can be computed offline, since it’s only a function of model parameters. The P matrix is updated at every iteration, but it is also only a function of Q and R (which are determined offline) and the previous P, which at time 0 is initialized with diagonals. P does represent the covariance of (x_pred - x_act), but it doesn’t take in measurements at any iteration. I appreciate the call-out.
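The offline property above is easy to demonstrate. A minimal sketch with an assumed scalar model (all numbers are illustrative, not from the thread): the P/K recursion uses only F, H, Q, and R; no measurement ever appears in it.

```python
# Scalar linear KF covariance recursion: x_k = F x_{k-1}, z_k = H x_k.
# Note that no measurement z appears anywhere below -- P and K can be
# precomputed offline. All parameter values here are assumptions.
F, H = 1.0, 1.0   # state transition and measurement models
Q, R = 0.01, 1.0  # assumed process / measurement noise variances
P = 10.0          # initial (diagonal) covariance guess

for _ in range(50):
    P_pred = F * P * F + Q                  # predict: no z involved
    K = P_pred * H / (H * P_pred * H + R)   # gain: no z involved
    P = (1.0 - K * H) * P_pred              # update: still no z involved

print(P, K)  # both converge to fixed values regardless of any data
```

Whatever the initial P, the recursion converges to the same steady-state value (the fixed point of the discrete Riccati equation), which is why the initial guess matters less than it might seem.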

As with most things, practical implementations involve a bunch of extra steps beyond the basic theory, like re-estimating the initial P every few iterations, as well as re-estimating Q and R from online data.
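One common way to re-estimate R online is innovation-based adaptive filtering: since the innovation variance is H P_pred Hᵀ + R, a sample estimate of it yields an estimate of R. A hedged sketch, with simulated innovations and assumed parameter values:

```python
# Sketch of innovation-based R re-estimation. The model values, window
# size, and the simulated innovations are all assumptions for illustration.
import random

random.seed(0)
H, P_pred = 1.0, 0.2   # assumed measurement model and predicted covariance
true_R = 2.0           # the "unknown" measurement noise we want to recover

# Innovations nu_k = z_k - H x_pred are zero-mean with theoretical
# variance H * P_pred * H + R; simulate a window of them.
innovations = [random.gauss(0.0, (H * P_pred * H + true_R) ** 0.5)
               for _ in range(5000)]

# E[nu^2] = H P_pred H^T + R  =>  R_hat = mean(nu^2) - H P_pred H^T
mean_sq = sum(v * v for v in innovations) / len(innovations)
R_hat = mean_sq - H * P_pred * H
print(R_hat)  # should land near true_R
```

In a real filter the window would slide over recent innovations, and the same idea extends (with more care) to Q.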

It’s not magic, and yes, it is a form of recursive linear regression, but it does work in real life.


Yes, the P covariance matrix does depend on online input; anyone who has ever used them would know this, so perhaps you have no idea what you are talking about.

If, instead of making such posts, you would like to learn more: the P matrix is changed both through the model and through the Kalman gain, and in both of these steps it can depend on online input (the model can depend on the state, and the Kalman gain depends on sensor measurements).


No, this is a common misunderstanding. If you have two noisy measurements of the same quantity, and you combine them, you take a weighted average of the two measurements to get a mean, but the variance of your estimate depends only on the variance of the measurements.

You would intuitively expect that if your two measurements are very far apart, you should be less confident in the estimate. A Kalman filter does not do that.

What you can calculate is the likelihood of your data given the estimate, and that likelihood will be small if your two measurements are far apart.

You make it sound as though a KF would trust a measurement more or less depending on its value. That is simply not true. It does not, e.g., reject outliers.
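The fused-variance claim above can be checked directly. A minimal sketch (assumed variances, inverse-variance weighting): the fused variance is identical whether the two measurements agree or wildly disagree.

```python
# Fuse two noisy measurements of the same scalar quantity.
# The fused variance depends only on the measurement variances,
# never on the measurement values. Numbers are illustrative.
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted average and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# Close agreement vs. wild disagreement: same fused variance either way.
_, v_close = fuse(10.0, 1.0, 10.1, 1.0)
_, v_far   = fuse(10.0, 1.0, 99.0, 1.0)
print(v_close, v_far)  # both 0.5
```

This is exactly the update a KF performs when combining a prediction with a measurement, which is why the posterior covariance carries no information about how surprising the measurement was.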


I didn't mention anything about your "common misunderstanding", which I agree with. If you wanted to do outlier detection or anything like that, you would need to add it separately to your codebase (which is not that difficult to do).

My point was, and still is, that thinking about Kalman filters just as a measurement-filtering system misses the point. An important value of measurements (especially in the more interesting formulations of the KF, such as the UKF) is updating the covariance matrices over time, which affects the prior distributions of the state over time and thus affects your state estimation.
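In the nonlinear case the two positions above are reconcilable. A hedged sketch with an assumed toy measurement model h(x) = x²: in an EKF the Jacobian is evaluated at the current state estimate, so the covariance update is coupled to the data through the state, unlike the linear case.

```python
# EKF-style covariance update for an assumed toy model h(x) = x**2.
# The Jacobian H is evaluated at the current state estimate, so the
# updated P differs depending on where the state trajectory has gone.
def ekf_P_update(x_est, P_pred, R):
    H = 2.0 * x_est                        # Jacobian of h(x) = x**2 at x_est
    K = P_pred * H / (H * P_pred * H + R)  # gain now depends on the state
    return (1.0 - K * H) * P_pred

P_pred, R = 1.0, 1.0
print(ekf_P_update(1.0, P_pred, R))  # Jacobian differs with the state...
print(ekf_P_update(5.0, P_pred, R))  # ...so the updated covariance differs too
```

Since the state estimate is itself driven by measurements, P in an EKF (and, via sigma points, in a UKF) does depend on online input, even though in the purely linear case it does not.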


But it still sounds like you're saying the Kalman filter updates the state covariance matrix based on the measurement, which I'm taking to mean the nominal value of a measurement.





