
Congratulations on the money, but it doesn't really prove anything, especially not (B). There were hundreds of people working on this, considering all sorts of possible sources of error. Any bias that is "obvious and easy to undo" was priced in.

And the polls were actually not that bad – the error was smaller than the 3% miss from 2012. It just happened to swing the result.



By "obvious" I meant obvious to ML practitioners (especially frequentists, who are more typical in big data), not to everyone. So it was not priced in. It is not an efficient market; there are huge information asymmetries. I will give you that calling it ML 101 was over the top.

If you dig into the polling breakdown and compare the individual polls (state as well as national) with the actuals, it is clear that the polls were bad. That the aggregate error came out smaller than it could have been is more luck than anything. You could argue that under a Gaussian error distribution a more precise aggregate is to be expected, but that requires the individual errors to be independent, and I would argue that is not a safe assumption.
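The independence point can be sketched with a toy simulation (all numbers and names here are illustrative, not from any real poll data): if state-level polling errors share a common component – say, a systematic bias hitting every pollster the same way – averaging across states barely shrinks the aggregate error, whereas truly independent Gaussian errors would shrink by roughly 1/sqrt(n).

```python
import random
import statistics

random.seed(0)

def aggregate_error_std(rho, n_states=10, n_trials=5000, sigma=0.03):
    """Std dev of the mean error across n_states polls, where each
    state's error has std sigma and pairwise correlation rho."""
    agg_errors = []
    for _ in range(n_trials):
        shared = random.gauss(0, 1)  # common (systematic) component
        errs = [sigma * (rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0, 1))
                for _ in range(n_states)]
        agg_errors.append(sum(errs) / n_states)
    return statistics.pstdev(agg_errors)

independent = aggregate_error_std(rho=0.0)  # theory: sigma/sqrt(10) ≈ 0.0095
correlated = aggregate_error_std(rho=0.8)   # theory: sigma*sqrt(0.82) ≈ 0.027
```

With rho=0 the aggregate tightens to about a third of the per-poll error; with rho=0.8 it stays close to the full per-poll error, which is the scenario the independence assumption hides.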

TL;DR: Just because it was close doesn't make it a good model.

As an aside: all of my Bayesian friends lost money and all of my frequentist friends made money :)



