
No, this is a common misunderstanding. If you have two noisy measurements of the same quantity and you combine them, you take a weighted average of the two measurements to get a mean, but the variance of your estimate depends only on the variances of the measurements, not on their values.
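A minimal sketch of that scalar fusion (inverse-variance weighting, assuming Gaussian noise with known variances; the function name and numbers are illustrative):

```python
def fuse(m1, v1, m2, v2):
    # Inverse-variance weighted average of two noisy measurements
    # of the same quantity (the static form of the Kalman update).
    w1, w2 = 1.0 / v1, 1.0 / v2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    # The fused variance depends only on v1 and v2,
    # never on the measured values m1 and m2.
    var = 1.0 / (w1 + w2)
    return mean, var

# Two measurements that agree and two that wildly disagree
# produce the SAME fused variance:
_, var_close = fuse(10.0, 1.0, 10.1, 1.0)   # var_close = 0.5
_, var_far   = fuse(10.0, 1.0, 50.0, 1.0)   # var_far   = 0.5
```

Whether the measurements agree or not, the reported confidence is identical, which is exactly the counterintuitive point.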

You would intuitively expect that if your two measurements are very far apart, you should be less confident in the estimate. A Kalman filter does not do that.

What you can calculate is the likelihood of your data given the estimate, and that is going to be small if your two measurements are far apart.

The way you make it sound, a KF would trust a measurement more or less depending on its value. That is simply not true. It does not, e.g., reject outliers.



I didn't mention anything about your "common misunderstanding", which I agree with. If you wanted to do outlier detection or anything like that, you would need to add it separately to your codebase (which is not that difficult to do).

My point was, and still is, that thinking about Kalman filters as just a measurement-filtering system misses the point. An important effect of measurements (especially in the more interesting formulations of the KF, such as the UKF) is updating the covariance matrices over time, which affects the prior distributions of the state and thus your state estimate.


But it still sounds like you're saying the Kalman filter updates the state covariance matrix based on the measurement, by which I'm taking you to mean the nominal value of a measurement.
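For the linear KF, this pushback can be checked directly: the covariance recursion never reads the measurement value. A minimal scalar sketch (random-walk state model; the function name and noise values are illustrative, not from the thread):

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    history = []
    for z in measurements:
        # Predict: state carries over, uncertainty grows by process noise.
        p = p + q
        # Update: the gain and the new covariance use only p and r.
        # The measurement z touches the state x, never the covariance p.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        history.append((x, p))
    return history

# The covariance sequence is identical no matter what values arrive:
h1 = kalman_1d([1.0, 1.0, 1.0])
h2 = kalman_1d([1.0, 100.0, -50.0])
# [p for _, p in h1] == [p for _, p in h2]
```

In the EKF/UKF the picture is subtler: Jacobians or sigma points are evaluated at the current state estimate, which the measurements did influence, so measurement values can affect the covariance indirectly through a nonlinear model; in the purely linear case they cannot.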



