
Let's do a simple thought experiment. You have two selectors A and B. A takes 100x more time to evaluate than B (which is about how id and class selectors compare). You evaluate them equally often, say N times each. If the time B takes to evaluate is t, your total execution time is N(100t + t) = 101Nt.

Now someone comes along and makes B 10x faster while making A 2% slower (which is what the numbers above seem to show for the class selector). Now your execution time is N(102t + 0.1t) = 102.1Nt, which happens to be larger than 101Nt, so the net effect is a slowdown.
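
Here's that arithmetic as a quick sketch (Python; the numbers are the made-up ones from above, and t is just an arbitrary unit):

    # Thought-experiment numbers, worked per pair of evaluations:
    # A costs 100t, B costs t, and both run equally often.
    t = 1.0  # arbitrary unit: time for one evaluation of B

    before = 100 * t + 1 * t    # 101t per pair
    after = 102 * t + 0.1 * t   # A 2% slower, B 10x faster: 102.1t per pair

    print(before, after)  # 101.0 102.1 -- the total got worse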

The upshot of all this is that if use frequencies are equal, performance improvements for slow things (even small ones) are worth a lot more than performance improvements for things that are already fast. Now obviously you have to weight by frequency of use, which may differ for different API consumers.

There's only one case in which speeding up already-fast stuff at the cost of slow stuff getting even slower is an obvious win regardless of use frequency: benchmarks averaged using geometric means. A geometric mean only sees relative speedups, so a 10x win on a cheap operation outweighs a 2% loss on an expensive one no matter what happens to the total time. And geometric means happen to be the most popular averaging method, of course: see Dromaeo, the V8 benchmark, PeaceKeeper, etc.
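
To make that concrete, here's a minimal sketch (Python, not any real suite's scoring code) that scores the same two made-up timings with an arithmetic sum and with a geometric mean:

    from math import prod

    def geomean(xs):
        # Geometric mean: the n-th root of the product of the values.
        return prod(xs) ** (1 / len(xs))

    before = [100.0, 1.0]   # per-test times for A and B, in units of t
    after = [102.0, 0.1]    # A 2% slower, B 10x faster

    print(sum(before), sum(after))          # 101.0, 102.1 -> total time regressed
    print(geomean(before), geomean(after))  # 10.0, ~3.19  -> geomean score improved

The total goes from 101t to 102.1t, yet the geometric mean drops from 10t to about 3.2t, so a geomean-averaged benchmark reports the change as a big win.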


