
Data visibility in a multithreaded scenario
<p>Yet another scenario, based on a previous question. In my opinion, its conclusion will be general enough to be useful to a wide audience. Quoting Peter Lawrey from <a href="https://stackoverflow.com/questions/8324064/java-arrays-synchronized-atomic-or-synchronized-suffices">here</a>:</p>
<blockquote> <p>The synchronized uses a memory barrier which ensures ALL memory is in a consistent state for that thread, whether its referenced inside the block or not.</p> </blockquote>
<p>First of all, my problem deals with <strong>data visibility only</strong>. That is, atomicity ("operation synchronization") is already guaranteed in my software, so <em>every write operation completes before any read operation on the same value</em>, and vice versa, and so on. So the question is only about values that may be cached per thread.</p>
<p>Consider two threads, <em>threadA</em> and <em>threadB</em>, and the following class:</p>
<pre><code>public class SomeClass {
    private final Object mLock = new Object();
    // Note: none of the member variables are volatile.

    public void operationA1() {
        ... // do "ordinary" stuff with the data and methods of SomeClass
            /* "ordinary" stuff means we don't create new Threads,
               we don't perform synchronizations, create semaphores etc. */
    }

    public void operationB() {
        synchronized (mLock) {
            ... // do "ordinary" stuff with the data and methods of SomeClass
        }
    }

    // public void dummyA() {
    //     synchronized (mLock) {
    //         dummyOperation();
    //     }
    // }

    public void operationA2() {
        // dummyA(); // this call is commented out
        ... // do "ordinary" stuff with the data and methods of SomeClass
    }
}
</code></pre>
<p>Known facts (they follow from my software's architecture):</p>
<ul> <li><code>operationA1()</code> and <code>operationA2()</code> are called by <em>threadA</em>, <code>operationB()</code> is called by <em>threadB</em></li> <li><code>operationB()</code> is the <strong>only method</strong> called by <em>threadB</em> in this class. 
Notice that <code>operationB()</code> is in a synchronized block.</li> <li><strong>very important</strong>: it is guaranteed that these operations are called in the following logical order: <code>operationA1()</code>, <code>operationB()</code>, <code>operationA2()</code>. It is guaranteed that every operation completes before the next one is called. This is due to a higher-level architectural synchronization (a message queue, but that's irrelevant now). As I've said, my question relates purely to <em>data visibility</em> (i.e. whether data copies are up-to-date or outdated, e.g. due to a thread's own cache).</li> </ul>
<p>Based on the Peter Lawrey quote, the memory barrier in <code>operationB()</code> ensures that all memory will be in a consistent state for <em>threadB</em> during <code>operationB()</code>. Therefore, e.g. if <em>threadA</em> has changed some values in <code>operationA1()</code>, these values will be written to main memory from the cache of <em>threadA</em> by the time <code>operationB()</code> is started. <strong>Question #1</strong>: Is this correct?</p>
<p><strong>Question #2</strong>: when <code>operationB()</code> leaves the memory barrier, the values changed by <code>operationB()</code> (and possibly cached by <em>threadB</em>) will be written back to main memory. <strong>But <code>operationA2()</code> will not be safe</strong> because no one asked <em>threadA</em> to synchronize with main memory, right? So it doesn't matter that the changes of <code>operationB()</code> are now in main memory, because <em>threadA</em> might still have its cached copies from the time before <code>operationB()</code> was called.</p>
<p><strong>Question #3</strong>: if my suspicion in Q.#2 is true, then check my source code again and uncomment the method <code>dummyA()</code>, and uncomment the <code>dummyA()</code> call in <code>operationA2()</code>. I know this may be bad practice in other respects, but does this make a difference? 
My (possibly faulty) assumption is as follows: <code>dummyA()</code> will cause <em>threadA</em> to update its cached data from main memory (due to the <code>mLock</code> synchronized block), so it will see all changes done by <code>operationB()</code>. That is, now everything is safe. On a side note, the logical order of method calls is as follows:</p> <ol> <li><code>operationA1()</code></li> <li><code>operationB()</code></li> <li><code>dummyA()</code></li> <li><code>operationA2()</code></li> </ol> <p>My conclusion: due to the synchronized block in <code>operationB()</code>, <em>threadB</em> will see the most up-to-date values of data that might have been changed before (e.g. in <code>operationA1()</code>). Due to the synchronized block in <code>dummyA()</code>, <em>threadA</em> will see the most up-to-date copies of data that were changed in <code>operationB()</code>. Is there any error in this train of thought?</p>
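<p>The lock-based handoff described above can be sketched as a runnable example. This is an illustration, not the original code: the <code>int</code> field <code>mData</code> and the <code>main</code> method are invented stand-ins for the "ordinary" data and for the higher-level ordering guarantee. Note one caveat: in this sketch, <code>Thread.start()</code>/<code>join()</code> themselves establish happens-before edges, so it over-guarantees compared to the question's message-queue scenario, where acquiring <code>mLock</code> in <code>dummyA()</code> is precisely what would give <em>threadA</em> visibility of <em>threadB</em>'s writes.</p>
<pre><code>// Sketch of the pattern in the question. mData stands in for the
// class's "ordinary" (non-volatile) data.
public class SomeClass {
    private final Object mLock = new Object();
    private int mData; // deliberately NOT volatile

    public void operationA1() {        // called by threadA, no lock
        mData = 1;
    }

    public void operationB() {         // called by threadB
        synchronized (mLock) {
            mData += 10;
        }
    }

    private void dummyA() {            // called by threadA
        synchronized (mLock) {
            // Empty on purpose: acquiring the same lock that threadB
            // released makes threadB's writes visible to threadA.
        }
    }

    public int operationA2() {         // called by threadA
        dummyA();
        return mData;                  // sees operationB()'s write
    }

    public static void main(String[] args) throws InterruptedException {
        SomeClass s = new SomeClass();
        s.operationA1();                          // "threadA" step 1
        Thread threadB = new Thread(s::operationB);
        threadB.start();
        threadB.join();  // stand-in for the message-queue ordering
        System.out.println(s.operationA2());      // prints 11
    }
}
</code></pre>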
 
