relaxed ordering as a signal

Antoine Morrier

Let's say we have two threads: one that gives a "go" signal and one that waits for that go before producing something.

Is this code correct, or can I end up with an "infinite loop" because of caching or something like that?

#include <atomic>
#include <thread>

void produce_data();  // defined elsewhere

std::atomic_bool canGo{false};

void producer() {
    while (canGo.load(std::memory_order_relaxed) == false)
        ;
    produce_data();
}

void launcher() {
    canGo.store(true, std::memory_order_relaxed);
}

int main() {
    std::thread a{producer};
    std::thread b{launcher};
    a.join();
    b.join();
}

If this code is not correct, is there a way to flush / invalidate the cache in standard C++?

PSkocik

A go signal like this will usually be in response to some memory changes that you'll want the target to see.

In other words, you'll usually want to give release/acquire semantics to such signaling.

That can be done either by using memory_order_release on the store and memory_order_acquire on the load, or by putting a release fence before the relaxed store and an acquire fence after the relaxed load, so that memory operations done by the signaller before the store are visible to the signallee (see, for example, https://preshing.com/20120913/acquire-and-release-semantics/ or the C/C++ standard).
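For instance, here is a minimal sketch of the first option applied to the code from the question. The payload variable is just an illustrative stand-in for whatever data the signal is meant to publish; it is not in the original code:

#include <atomic>
#include <thread>

std::atomic_bool canGo{false};
int payload = 0;  // plain (non-atomic) data the signal publishes (illustrative)

void producer() {
    // Acquire load: once this observes true, the write to payload made
    // before the release store is guaranteed to be visible here too.
    while (canGo.load(std::memory_order_acquire) == false)
        ;
    int value = payload;  // guaranteed to read 42
    (void)value;
}

void launcher() {
    payload = 42;                                  // memory change to publish
    canGo.store(true, std::memory_order_release);  // release store: publishes payload
}

int main() {
    std::thread a{producer};
    std::thread b{launcher};
    a.join();
    b.join();
}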


The way I remember the ordering of the fences: as far as I understand, shared-memory operations among cores are effectively hardware-implemented buffered IO that follows a protocol, so a release fence is sort of like an output buffer flush and an acquire fence is like an input buffer flush/sync.

Now, if you flush your core's memory-op output buffer before issuing a relaxed store, then by the time the target core sees that relaxed store, the preceding memory-op messages must already be available to it. All it needs in order to see those memory changes in its own memory is to sync them in with an acquire fence after it observes the signalling store.
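Under that mental model, the fence-based variant of the question's code might look like the following sketch (again, payload is an illustrative addition): the release fence before the relaxed store plays the role of the output-buffer flush, and the acquire fence after the relaxed load plays the role of the input-buffer sync.

#include <atomic>
#include <thread>

std::atomic_bool canGo{false};
int payload = 0;  // illustrative non-atomic data to publish

void launcher() {
    payload = 42;
    std::atomic_thread_fence(std::memory_order_release);  // "flush" preceding writes
    canGo.store(true, std::memory_order_relaxed);          // relaxed signalling store
}

void producer() {
    while (canGo.load(std::memory_order_relaxed) == false)
        ;
    std::atomic_thread_fence(std::memory_order_acquire);   // "sync in" the published writes
    int value = payload;  // guaranteed to read 42 once the store has been seen
    (void)value;
}

int main() {
    std::thread a{producer};
    std::thread b{launcher};
    a.join();
    b.join();
}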
