The trick here is that if the election is close enough that you'd actually want/need multiple pollsters aggregated, the aggregators will indicate high uncertainty. If it's enough of a blowout for the aggregators to indicate low uncertainty, then the individual polls are going to be showing a large gap.
An aggregator saying "foo has a 65% chance of winning" may seem like it's providing more information than a single historically reliable poll (say Reuters/Ipsos) stating "foo is up by 2 points but there's a 3 point margin of error" - but isn't it just an illusion? High-quality pollsters rarely deviate much from one another.
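To make that concrete, here's a back-of-envelope sketch (not anyone's actual model) that turns the hypothetical single poll above - a 2-point lead with a 3-point margin of error - into a win probability, assuming the reported margin is a 95% interval on each candidate's share and the true lead is roughly normal:

```python
from statistics import NormalDist

# Back-of-envelope: implied win probability from one poll.
# Assumptions (not from any pollster's methodology): the reported 3-point
# figure is a 95% margin of error on each candidate's share, and the true
# lead is approximately normally distributed around the polled lead.
lead = 2.0        # "foo is up by 2 points"
moe_share = 3.0   # 95% margin of error on a single candidate's share

se_share = moe_share / 1.96   # implied standard error of one share
se_lead = 2 * se_share        # in a two-way race the shares move in opposite
                              # directions, so the lead's error roughly doubles

p_win = 1 - NormalDist(lead, se_lead).cdf(0)    # P(true lead > 0)
print(f"Implied win probability: {p_win:.0%}")  # ~74%
```

Under those assumptions the single poll already implies something like a 74% win probability - the same ballpark as the aggregator's 65% - which is why the aggregate number carries less new information than it appears to.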
And even if you grant that the aggregator is closer to being "right" than any single pollster, is that difference actually meaningful enough to impact any real world behaviors? Would you do anything differently with a 50% chance of victory versus a 70% chance?
I've honestly come to think of them as entertainment, with no real value.