
Nobody who thinks they can pin a probability on an event that only gets sampled once should ever be taken seriously.



Each race is only sampled once. But he runs the same methodology on hundreds of races. If the candidates he gives an X% chance to win go on to win approximately X% of the time, the methodology is reliable.
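
A rough sketch of that calibration check in Python (the forecasts below are made-up numbers, not Silver's actual data): bucket the forecasts by predicted probability and compare each bucket's prediction with the observed win rate.

    # Minimal calibration check: bucket forecasts by predicted probability and
    # compare each bucket's prediction with the observed win rate.
    from collections import defaultdict

    # Hypothetical (predicted win probability, actually won?) pairs across many races.
    forecasts = [(0.91, True), (0.72, True), (0.70, False), (0.55, True),
                 (0.48, False), (0.30, False), (0.28, True), (0.10, False)]

    buckets = defaultdict(list)
    for prob, won in forecasts:
        buckets[round(prob, 1)].append(won)          # group into 10% bins

    for p in sorted(buckets):
        outcomes = buckets[p]
        observed = sum(outcomes) / len(outcomes)     # empirical win rate in this bin
        print(f"predicted ~{p:.0%}: won {observed:.0%} of {len(outcomes)} races")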


Not if the methodology is different per race, which it is. No other office is elected like the president is.


Have you ever heard of a Brier score?

Or Bayes' theorem and Bayesian inference?

You might change your mind if you investigate those topics.
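
A Brier score is just the mean squared error between forecast probabilities and the 0/1 outcomes; lower is better, and 0.25 is what always guessing 50% gets you. A quick sketch with made-up forecasts:

    # Brier score: mean squared error between forecast probabilities and 0/1 outcomes.
    def brier_score(forecasts):
        return sum((p - (1.0 if won else 0.0)) ** 2 for p, won in forecasts) / len(forecasts)

    # Hypothetical forecasts: (predicted probability of winning, did they win?)
    print(brier_score([(0.7, True), (0.3, False), (0.9, True), (0.6, False)]))  # 0.1375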


This is such a tired debate; it comes up every time 538 is mentioned. Polls sample an election's outcome many times throughout the campaign. Statistics works. You can't know the future, but you can predict it with error bars. 51/49 or 70/30 should tell you there's a very real chance of a Trump victory.

People get bent out of shape about 538 but it's usually because they're misinterpreting the prediction, not that the prediction is meaningless.


>Statistics works

It does, but just because Nate Silver calls something "statistics" doesn't necessarily mean that it is. If Donald Trump has a 20% chance of winning, that means we could hold the election 100 times and expect him to win approximately 20 of them and lose the other 80. Which is ridiculous, because each voter is not an independent random variable.


Each eligible voter is a random variable, independence aside. Silver's model does not operate at the level of the individual voter, however. A random variable is a very flexible abstraction; almost anything can be a random variable if you can find a way to measure it. Polls are a measurement. They're imperfect, but all measurements contain error.

It's not statistics because some authority says it is; it's statistics because it starts with a measure of probability (aggregated polls) and builds an interpretation on top of it. That's what statistics is. It can be good statistics or bad statistics, but that's a separate question. Incidentally, it has a pretty good track record.
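
As a toy example of building on that measurement (a naive sample-size-weighted average, which ignores house effects and the correlated errors a real model like Silver's tries to account for; the poll numbers here are invented):

    # Toy poll aggregation: combine several noisy measurements of the same quantity.
    # Each poll is (share for candidate A, sample size).
    import math

    polls = [(0.52, 800), (0.49, 1200), (0.51, 600)]

    total_n = sum(n for _, n in polls)
    avg = sum(share * n for share, n in polls) / total_n   # sample-size-weighted mean
    se = math.sqrt(avg * (1 - avg) / total_n)              # crude standard error

    print(f"aggregate: {avg:.1%} +/- {1.96 * se:.1%} (95% CI)")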

If you have a criticism of Silver's model, I'm curious to hear it. As it stands, though, I don't see a criticism in your comment. You say it's ridiculous, but it isn't. It's complicated, not ridiculous. I can't say precisely what's on your mind, but if I had to guess, it doesn't make sense to you, which is a different matter.


I don't see the point in anything that isn't falsifiable, and for that you need a sample size greater than one. If the model gives Hillary a 99.9% chance of winning and Donald a 0.1% chance, and Donald wins anyway, does that mean the model was wrong, or does it mean we're in the 0.1% timeline?

If you want to prove that a coin flip has a 50% chance of landing heads, a 50% chance of landing tails, and a negligible chance of landing on its edge, you can run as many tests as you want and observe that as N approaches infinity the fraction of heads converges to 0.5 and the fraction of tails converges to 0.5. Alternatively, you might find that the coin isn't well balanced, in which case you've shown that the "50/50 model" was not accurate.
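
For concreteness, a minimal simulation of that convergence (using Python's pseudorandom generator as a stand-in for a fair coin):

    # Flip a simulated fair coin N times and watch the fraction of heads approach 0.5.
    import random

    random.seed(0)
    for n in (100, 10_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(f"N={n}: fraction of heads = {heads / n:.4f}")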

You can't do that with elections, because each election only happens once. Even if the same two candidates ran against each other in every election, the issues at stake would be different and the voter base would be different. In reality one candidate has a 100% chance of victory and the other has a 0% chance; we just don't know which is which.


And yet, the final modal outcome of Silver's forecast was exactly the actual result.



