Upon further analysis of this, I believe it is (at least partially) caused by the data alignment of the four pointers. This will cause some level of cache bank/way conflicts.

If I've guessed correctly how you are allocating your arrays, they ***are likely to be aligned to the same offset from a page boundary***.

This means that all your accesses in each loop will fall on the same cache way. Intel processors have had 8-way L1 cache associativity for a while, but in reality the performance isn't completely uniform: accessing 4 ways is still slower than, say, 2 ways.

**EDIT: It does in fact look like you are allocating all the arrays separately.** Usually when such large allocations are requested, the allocator will request fresh pages from the OS. Therefore, there is a high chance that large allocations will appear at the same offset from a page boundary.

**Here's the test code:**

```cpp
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <ctime>
using namespace std;

int main(){
    const int n = 100000;

#ifdef ALLOCATE_SEPERATE
    double *a1 = (double*)malloc(n * sizeof(double));
    double *b1 = (double*)malloc(n * sizeof(double));
    double *c1 = (double*)malloc(n * sizeof(double));
    double *d1 = (double*)malloc(n * sizeof(double));
#else
    double *a1 = (double*)malloc(n * sizeof(double) * 4);
    double *b1 = a1 + n;
    double *c1 = b1 + n;
    double *d1 = c1 + n;
#endif

    //  Zero the data to prevent any chance of denormals.
    memset(a1, 0, n * sizeof(double));
    memset(b1, 0, n * sizeof(double));
    memset(c1, 0, n * sizeof(double));
    memset(d1, 0, n * sizeof(double));

    //  Print the addresses
    cout << a1 << endl;
    cout << b1 << endl;
    cout << c1 << endl;
    cout << d1 << endl;

    clock_t start = clock();

    int c = 0;
    while (c++ < 10000){
#ifdef ONE_LOOP
        for (int j = 0; j < n; j++){
            a1[j] += b1[j];
            c1[j] += d1[j];
        }
#else
        for (int j = 0; j < n; j++){
            a1[j] += b1[j];
        }
        for (int j = 0; j < n; j++){
            c1[j] += d1[j];
        }
#endif
    }

    clock_t end = clock();
    cout << "seconds = " << (double)(end - start) / CLOCKS_PER_SEC << endl;

    system("pause");    //  Windows-only; harmless to remove elsewhere.
    return 0;
}
```

---

**Benchmark Results:**

# EDIT: Results on an *actual* Core 2 architecture machine:

**2 x Intel Xeon X5482 Harpertown @ 3.2 GHz:**

```
#define ALLOCATE_SEPERATE
#define ONE_LOOP

00600020
006D0020
007A0020
00870020
seconds = 6.206

#define ALLOCATE_SEPERATE
//#define ONE_LOOP

005E0020
006B0020
00780020
00850020
seconds = 2.116

//#define ALLOCATE_SEPERATE
#define ONE_LOOP

00570020
00633520
006F6A20
007B9F20
seconds = 1.894

//#define ALLOCATE_SEPERATE
//#define ONE_LOOP

008C0020
00983520
00A46A20
00B09F20
seconds = 1.993
```

Observations:

- **6.206 seconds** with one loop and **2.116 seconds** with two loops. This reproduces the OP's results exactly.

- **In the first two tests, the arrays are allocated separately.** You'll notice that they all have the same alignment relative to the page.

- **In the second two tests, the arrays are packed together to break that alignment.** Here you'll notice that both loops are faster. Furthermore, the two-loop version is now the slower one, as you would normally expect. (See the sketch below for another way to break the alignment.)
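Incidentally, packing the arrays into one allocation is not the only way to break that shared page offset. Below is a minimal sketch of staggering separately allocated arrays instead; `alloc_shifted`, the 64-byte step, and the over-allocation are my own choices for illustration, not part of the test code above:

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative sketch only: a drop-in replacement for the ALLOCATE_SEPERATE
// branch of the test code above. Each array is still malloc'd on its own,
// but its start is shifted by a different multiple of 64 bytes, so the four
// arrays no longer share the same offset within a 4 KB page. The extra
// 4096 bytes of slack keep the shifted range in bounds; as in the benchmark,
// nothing is ever freed, so discarding the unshifted base pointer is
// harmless here.
static double* alloc_shifted(size_t n, size_t shift_bytes) {
    char* base = (char*)malloc(n * sizeof(double) + 4096);
    return (double*)(base + shift_bytes);
}

// Usage, matching the test's pointer names:
//   double *a1 = alloc_shifted(n,   0);
//   double *b1 = alloc_shifted(n,  64);
//   double *c1 = alloc_shifted(n, 128);
//   double *d1 = alloc_shifted(n, 192);
```

Any stagger that makes the low 12 address bits differ between the arrays should be enough to avoid the page-offset collision.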
As @Stephen Cannon points out in the comments, it is very likely that this alignment causes ***false aliasing*** in the load/store units or the cache. I Googled around for this and found that Intel actually has a hardware counter for ***partial address aliasing*** stalls:

http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/~amplifierxe/pmw_dp/events/partial_address_alias.html

---

# 5 Regions - Explanations

**Region 1:**

This one is easy. The dataset is so small that the performance is dominated by overhead such as looping and branching.

**Region 2:**

~~Here, as the data size increases, the amount of relative overhead goes down and the performance "saturates". Two loops are slower here because they have twice as much loop and branching overhead.~~

I'm not sure exactly what's going on here... Alignment could still have an effect, as Agner Fog mentions [cache bank conflicts](http://www.agner.org/optimize/blog/read.php?i=142). (That link is about Sandy Bridge, but the idea should still apply to Core 2.)

**Region 3:**

At this point, the data no longer fits in the L1 cache, so performance is capped by the L1 <-> L2 cache bandwidth.

**Region 4:**

The performance drop with the single loop is what we are observing. As mentioned, this is due to the alignment, which (most likely) causes ***false aliasing*** stalls in the processor's load/store units.

However, for false aliasing to occur, there must be a large enough stride between the datasets. This is why you don't see it in region 3.

**Region 5:**

At this point, nothing fits in cache, so you're bound by memory bandwidth.

---

![2 x Intel X5482 Harpertown @ 3.2 GHz](https://i.stack.imgur.com/ElCGL.png)
![Intel Core i7 870 @ 2.8 GHz](https://i.stack.imgur.com/QMpwj.png)
![Intel Core i7 2600K @ 4.4 GHz](https://i.stack.imgur.com/NpyhG.png)
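To make the region boundaries a bit more concrete, here is my own back-of-the-envelope sketch (not part of the benchmark above) that computes the working-set size of the four arrays for a given `n` and compares it against cache sizes; the 32 KB L1D and 6 MB L2 figures are my assumptions for the Harpertown machine:

```cpp
// Back-of-the-envelope helper: for a given n, how large is the working set
// of the four double arrays, and which cache level can still hold it?
// Cache sizes are assumptions for the Harpertown (Core 2) machine above:
// 32 KB of L1D per core, 6 MB of L2 per core pair.
#include <cstdio>

int main() {
    const long long l1d = 32LL * 1024;          // assumed L1 data cache
    const long long l2  = 6LL * 1024 * 1024;    // assumed L2 cache
    const long long sizes[] = {1000, 100000, 10000000};

    for (long long n : sizes) {
        long long bytes = 4 * n * (long long)sizeof(double);  // a1, b1, c1, d1
        const char* where = bytes <= l1d ? "fits in L1 (regions 1-2)"
                          : bytes <= l2  ? "fits in L2 (regions 3-4)"
                          :                "RAM-bound  (region 5)";
        printf("n = %-9lld  working set = %7lld KB  -> %s\n",
               n, bytes / 1024, where);
    }
    return 0;
}
```

For the `n = 100000` used in the test code, the working set is roughly 3.1 MB: well past L1 but still inside L2, which is where the bandwidth-limited and aliasing-limited regions sit.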
 
