Channel: Does hardware memory barrier make visibility of atomic operations faster in addition to providing necessary guarantees? - Stack Overflow

Does hardware memory barrier make visibility of atomic operations faster in addition to providing necessary guarantees?


TL;DR: In a producer-consumer queue, does it ever make sense to put in an unnecessary (from the C++ memory model's viewpoint) memory fence, or an unnecessarily strong memory order, to get better latency at the expense of possibly worse throughput?


The C++ memory model is implemented on hardware by emitting some sort of memory fence for the stronger memory orders and omitting it for the weaker ones.

In particular, if the producer does store(memory_order_release) and the consumer observes the stored value with load(memory_order_acquire), there are no fences between the load and the store. On x86 there are no fences at all; on ARM a fence (or fence-like instruction) is placed before the store and after the load.

The value stored without a fence will eventually be observed by a load without a fence (possibly after a few unsuccessful attempts).

I'm wondering whether putting a fence on either side of the queue can make the value observable sooner. If so, what is the latency with and without the fence?
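One way to phrase the experiment is as variants of the producer's publish step: the plain release store, the release store followed by a full fence, and a seq_cst store. This is a sketch of the variants being compared, not a claim that any of them is faster; the function names are illustrative:

```cpp
#include <atomic>
#include <cstddef>

std::atomic<std::size_t> shared_producer_index;

// Variant A: plain release store (an ordinary mov on x86).
void publish_release(std::size_t i)
{
    shared_producer_index.store(i, std::memory_order_release);
}

// Variant B: release store followed by a full fence
// (the fence compiles to mfence on x86, dmb ish on ARM).
void publish_release_fenced(std::size_t i)
{
    shared_producer_index.store(i, std::memory_order_release);
    std::atomic_thread_fence(std::memory_order_seq_cst);
}

// Variant C: seq_cst store (typically xchg, or mov + mfence, on x86).
void publish_seq_cst(std::size_t i)
{
    shared_producer_index.store(i, std::memory_order_seq_cst);
}
```

All three are correct for publishing the index; the open question is whether the extra fence in B or C drains the store buffer sooner and so shortens the time until the consumer's load observes the new value.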

I expect that just having a loop with load(memory_order_acquire) and pause / yield, limited to a few thousand iterations, is the best option, as that is what is used everywhere, but I want to understand why.
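That spin-then-yield strategy can be sketched as a helper like the following (the function name and the 4000-iteration bound are illustrative guesses, not measured optima):

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>
#endif

// Spin with a pause hint for a bounded number of iterations, then fall
// back to yielding the time slice to the scheduler.
std::size_t wait_for_new_index(const std::atomic<std::size_t>& idx,
                               std::size_t last_seen)
{
    std::size_t observed;
    for (int spins = 0;
         (observed = idx.load(std::memory_order_acquire)) == last_seen;
         ++spins)
    {
        if (spins < 4000)
        {
#if defined(__x86_64__) || defined(_M_X64)
            _mm_pause();            // tell the core we are spin-waiting
#endif
        }
        else
        {
            std::this_thread::yield();  // stop burning CPU, let others run
        }
    }
    return observed;
}
```

The pause hint reduces power and frees pipeline resources for the sibling hyperthread while spinning; the yield fallback bounds the damage when the producer is descheduled.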

Since this question is about hardware behavior, I expect there is no generic answer. If so, I'm mostly interested in x86 (the x64 flavor), and secondarily in ARM.


Example:

#include <atomic>
#include <cstddef>
#include <immintrin.h>

T queue[MAX_SIZE];
std::atomic<std::size_t> shared_producer_index;

void producer()
{
    std::size_t private_producer_index = 0;
    for (;;)
    {
        private_producer_index++;  // Handling rollover and queue full omitted
        /* fill data */;
        shared_producer_index.store(
            private_producer_index, std::memory_order_release);
        // Maybe barrier here or stronger order above?
    }
}

void consumer()
{
    std::size_t private_consumer_index = 0;
    for (;;)
    {
        std::size_t observed_producer_index = shared_producer_index.load(
            std::memory_order_acquire);
        while (private_consumer_index == observed_producer_index)
        {
            // Maybe barrier here or stronger order below?
            _mm_pause();
            observed_producer_index = shared_producer_index.load(
                std::memory_order_acquire);
            // Switching from busy wait to kernel wait after some iterations omitted
        }
        /* consume as much data as index difference specifies */;
        private_consumer_index = observed_producer_index;
    }
}
