Wiki says "substantially" better than LRU, but the actual results seem to show performance converging to the same levels as the cache gets larger. (See page 5.)
All caches have equal hit rates in the limit when the size of the cache approaches infinity. For finite caches, ARC often wins. In practical experience I've found that a weighted ARC dramatically outperformed LRU for DNS RR caching, in terms of both hit rate and raw CPU time spent per access. This is because it was easy to code an ARC cache that had lock-free access to frequently referenced items; once an item had been promoted to T2 no locks were needed for most accesses. With LRU it's necessary to have exclusive access to the entire cache in order to evict something and add something else.
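To illustrate the point about lock-free reads of promoted items, here's a minimal sketch (not IBM's actual ARC, and the names are mine): items seen once sit in a lock-guarded "t1" map, and a second hit promotes them to a "t2" map whose reads skip the lock entirely. This leans on CPython's atomic dict lookups; a real implementation would need a properly concurrent map and real ARC ghost lists.

```python
import threading

class TwoTierCache:
    """Illustrative sketch only: frequent items ("t2") are read without
    taking the lock; misses, promotions, and inserts take it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lock = threading.Lock()
        self.t1 = {}  # seen once recently; mutated only under lock
        self.t2 = {}  # seen repeatedly; a get() here is a single
                      # dict lookup, so no lock is needed (in CPython)

    def get(self, key):
        # Fast path: frequently referenced items need no lock.
        value = self.t2.get(key)
        if value is not None:
            return value
        with self.lock:
            if key in self.t1:
                # Second hit: promote, so future reads are lock-free.
                self.t2[key] = self.t1.pop(key)
                return self.t2[key]
        return None

    def put(self, key, value):
        with self.lock:
            if len(self.t1) + len(self.t2) >= self.capacity:
                # Crude stand-in for ARC's adaptive eviction:
                # drop an arbitrary t1 entry first, else a t2 entry.
                victim = self.t1 or self.t2
                victim.pop(next(iter(victim)))
            self.t1[key] = value
```

The contrast with plain LRU is that an LRU hit must reorder the list, which is a write, so every access needs exclusive access to the whole structure; here a t2 hit writes nothing.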
Of course there are more schemes than just LRU and ARC, and one can try to employ lock-free schemes more than I'm willing to do. This is just my experience.
ARC often wins against LRU, but there is a lot left on the table compared to other policies. That's because it does capture some frequency, but not very well imho.
You can mitigate the exclusive lock using a write-ahead log approach [1] [2]. You record events into ring buffers, replay them in batches, and guard the replay with an exclusive tryLock. This works really well in practice and lets you do much more complex policy work with much less worry about concurrency.
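The buffering idea above can be sketched like this (a toy over plain LRU; all names are mine, not from any particular library): a hit appends the key to a lossy bounded buffer instead of reordering the LRU list, and the buffer is drained in batches under a non-blocking tryLock, so readers never wait on the eviction lock.

```python
import threading
from collections import deque, OrderedDict

class BufferedLRU:
    """Sketch of the write-ahead-log trick: hits are logged to a
    bounded ring buffer and replayed in batches under a tryLock."""

    DRAIN_THRESHOLD = 16

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()            # LRU order, guarded by policy_lock
        self.read_buffer = deque(maxlen=64)  # lossy buffer of buffered hits
        self.policy_lock = threading.Lock()

    def get(self, key):
        value = self.data.get(key)
        if value is not None:
            # Record the hit; do NOT touch the LRU order on the hot path.
            self.read_buffer.append(key)
            if len(self.read_buffer) >= self.DRAIN_THRESHOLD:
                self._try_drain()
        return value

    def put(self, key, value):
        with self.policy_lock:
            self.data[key] = value
            self.data.move_to_end(key)
            while len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict the LRU entry

    def _try_drain(self):
        # Replay buffered hits in one batch. If another thread holds
        # the lock, skip: the policy tolerates lost or delayed updates.
        if self.policy_lock.acquire(blocking=False):
            try:
                while self.read_buffer:
                    key = self.read_buffer.popleft()
                    if key in self.data:
                        self.data.move_to_end(key)
            finally:
                self.policy_lock.release()
```

Because the buffer is allowed to drop events when full, a reader never blocks on the policy lock; the ordering it maintains is approximate, which is exactly the trade these schemes make.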
I wrote a simulator and link to the traces. One unfortunate aspect is that they did not provide a real comparison with LIRS, except in a table that includes an incorrect number. It comes off a little biased, since LIRS outperforms ARC on most of their own traces.
https://en.wikipedia.org/wiki/Cache_replacement_policies#Clo...