> You make local changes (on your processor, which has its local copy of memory). You then commit (write fence/atomic operation). No one else can see your changes until they update from source control and get the latest version (read fence).
That's actually not really true; what you are describing is a non-coherent system.
On a cc system, once a store is no longer speculative, it is guaranteed to be flushed out of the store buffer into the cache, and a load that reaches the cache layer is guaranteed to see the last version of the data that was stored in that memory location by any coherent agent in the system.
As pointed out elsethread, you need load barriers specifically to order your loads, so they are needed for ordering, not visibility.
The way that barriers contribute to visibility (and the reason that they need to be paired) is by giving conditional guarantees: T1: S.1; #StoreStore; S.2 and T2: L.2; #LoadLoad; L.1. If T2 observes, from its first load at memory location .2, the value stored by S.2, then it is guaranteed that L.1 will observe the value stored by S.1. So cache coherency plus purely local load and store barriers give you the global Acquire/Release model.
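In C++11 terms that pairing is just a release store matched with an acquire load. A minimal sketch, with `data` and `flag` standing in for locations .1 and .2:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                    // S.1 / L.1: plain memory location
std::atomic<bool> flag{false};   // S.2 / L.2: the guard location

void producer() {
    data = 42;                                    // S.1
    flag.store(true, std::memory_order_release);  // #StoreStore + S.2
}

void consumer() {
    if (flag.load(std::memory_order_acquire)) {   // L.2 + #LoadLoad
        assert(data == 42);                       // L.1 is guaranteed to see S.1
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
}
```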
> On a cc system, once a store is no longer speculative, it is guaranteed to be flushed out of the store buffer into the cache,
That's the thing - I may be wrong, but I'm under the impression store buffers make no guarantee about when they drain. I'm pretty certain they do not on Intel, for example. All you read in the docs is "flushed in a reasonable time", which can mean anything.
> and a load that reaches the cache layer is guaranteed to see the last version of the data that was stored in that memory location by any coherent agent in the system.
Yes.
> As pointed out elsethread, you need load barriers specifically to order your loads, so they are needed for ordering, not visibility.
Mmm - again, I may be wrong; but I think there are also no guarantees about how promptly cache invalidation requests are handled. Given no such guarantee, in theory an invalidation request might never be handled (unless you force it to be with a read barrier).
When I need a set of data to be written (say a struct - something bigger than you can manage in an atomic op), I'll write to the struct, fence, then perform a throw-away atomic op. The atomic op forces a write (a normal write could just be deferred and not force completion of pre-barrier writes) and then I know my struct has gone out past the store buffer and has reached cache control.
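Roughly, a sketch of that pattern (the struct, the `publish` function, and the dummy atomic are illustrative names; whether the trailing RMW is strictly necessary is exactly the point under discussion):

```cpp
#include <atomic>

struct Config { int a; int b; int c; };

Config shared;               // too big to write with a single atomic op
std::atomic<int> dummy{0};   // target of the throw-away atomic op

void publish(const Config& c) {
    shared = c;                                           // plain writes to the struct
    std::atomic_thread_fence(std::memory_order_release);  // the fence
    dummy.fetch_add(1, std::memory_order_seq_cst);        // throw-away RMW, intended to
                                                          // force the preceding writes
                                                          // out past the store buffer
}
```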
The CPU still needs to guarantee forward progress. The store buffer is always flushed as fast as possible (i.e. as soon as the CPU can acquire the cacheline in exclusive mode, which is guaranteed to happen in a finite time). I can't point you to the exact wording in the Intel docs as they are quite messy, but you can implement a perfectly C++11-compliant SPSC queue purely with loads and stores on x86, without any fences or #LOCK operations.
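For example, a minimal sketch of such a queue (a standard single-producer/single-consumer ring buffer); the acquire/release operations below compile to plain MOV loads and stores on x86, with no fences or LOCKed instructions:

```cpp
#include <atomic>
#include <cstddef>

// One producer thread calls push(), one consumer thread calls pop().
template <typename T, size_t N>
class SpscQueue {
    T buf_[N];
    std::atomic<size_t> head_{0};  // written only by the consumer
    std::atomic<size_t> tail_{0};  // written only by the producer
public:
    bool push(const T& v) {
        size_t t    = tail_.load(std::memory_order_relaxed);
        size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire))
            return false;                               // full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);   // publish the element
        return true;
    }
    bool pop(T& out) {
        size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;                               // empty
        out = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```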
What fences (and atomic RMWs) do on Intel is act as synchronization, preventing subsequent reads from completing before any store preceding the fence has completed. This was originally implemented simply by stalling the pipeline, but these days I suspect the loads have an implicit dependency on a dummy store in the store buffer representing the fence (possibly reusing the alias detection machinery?).
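The classic case where that store-to-load ordering matters is the Dekker / store-buffering pattern; a sketch with illustrative names, where without the full fences both threads could read 0:

```cpp
#include <atomic>

std::atomic<int> x{0}, y{0};
int r1, r2;

void thread1() {
    x.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);  // full fence (MFENCE on x86):
                                                          // the load below cannot complete
                                                          // before the store above
    r1 = y.load(std::memory_order_relaxed);
}

void thread2() {
    y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    r2 = x.load(std::memory_order_relaxed);
}
// With both fences in place, r1 == 0 && r2 == 0 is impossible.
```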
> I can't point you to the exact wording in the Intel docs as they are quite messy, but you can implement a perfectly C++11-compliant SPSC queue purely with loads and stores on x86, without any fences or #LOCK operations.
I would disagree. I think the Intel docs do not specify a guarantee of flushing, and so if the SP and the SC are on different cores, then I think in principle (but not in practice) the SC could in fact never see what the SP emits.