# How do I achieve the theoretical maximum of 4 FLOPs per cycle?
How can the theoretical peak performance of 4 floating point operations (double precision) per cycle be achieved on a modern x86-64 Intel CPU?

As far as I understand it, it takes three cycles for an [SSE](https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions) `add` and five cycles for a `mul` to complete on most modern Intel CPUs (see for example [Agner Fog's 'Instruction Tables'](http://agner.org/optimize/instruction_tables.pdf)). Due to pipelining, one can get a throughput of one `add` per cycle if the algorithm has at least three independent summations. Since that is true for the packed `addpd` as well as the scalar `addsd` version, and SSE registers can contain two `double`s, the throughput can be as much as two flops per cycle.

Furthermore, it seems (although I've not seen any proper documentation on this) that `add`s and `mul`s can be executed in parallel, giving a theoretical max throughput of four flops per cycle.

However, I've not been able to replicate that performance with a simple C/C++ programme. My best attempt resulted in about 2.7 flops/cycle. If anyone can contribute a simple C/C++ or assembler programme which demonstrates peak performance, that'd be greatly appreciated.

My attempt:

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sys/time.h>

double stoptime(void) {
    struct timeval t;
    gettimeofday(&t, NULL);
    return (double) t.tv_sec + t.tv_usec/1000000.0;
}

double addmul(double add, double mul, int ops) {
    // Need to initialise differently otherwise compiler might optimise away
    double sum1 = 0.1, sum2 = -0.1, sum3 = 0.2, sum4 = -0.2, sum5 = 0.0;
    double mul1 = 1.0, mul2 =  1.1, mul3 = 1.2, mul4 =  1.3, mul5 = 1.4;
    int loops = ops / 10;   // We have 10 floating point operations inside the loop
    double expected = 5.0*add*loops
                    + (sum1+sum2+sum3+sum4+sum5)
                    + pow(mul, loops)*(mul1+mul2+mul3+mul4+mul5);

    for (int i = 0; i < loops; i++) {
        mul1 *= mul; mul2 *= mul; mul3 *= mul; mul4 *= mul; mul5 *= mul;
        sum1 += add; sum2 += add; sum3 += add; sum4 += add; sum5 += add;
    }
    return sum1+sum2+sum3+sum4+sum5 + mul1+mul2+mul3+mul4+mul5 - expected;
}

int main(int argc, char** argv) {
    if (argc != 2) {
        printf("usage: %s <num>\n", argv[0]);
        printf("number of operations: <num> millions\n");
        exit(EXIT_FAILURE);
    }
    int n = atoi(argv[1]) * 1000000;
    if (n <= 0)
        n = 1000;

    double x = M_PI;
    double y = 1.0 + 1e-8;
    double t = stoptime();
    x = addmul(x, y, n);
    t = stoptime() - t;
    printf("addmul:\t %.3f s, %.3f Gflops, res=%f\n", t, (double)n/t/1e9, x);
    return EXIT_SUCCESS;
}
```

Compiled with

```
g++ -O2 -march=native addmul.cpp ; ./a.out 1000
```

produces the following output on an Intel Core i5-750, 2.66 GHz:

```
addmul:  0.270 s, 3.707 Gflops, res=1.326463
```

That is, just about 1.4 flops per cycle. Looking at the assembler code with `g++ -S -O2 -march=native -masm=intel addmul.cpp`, the main loop seems kind of optimal to me:

```asm
.L4:
    inc    eax
    mulsd  xmm8, xmm3
    mulsd  xmm7, xmm3
    mulsd  xmm6, xmm3
    mulsd  xmm5, xmm3
    mulsd  xmm1, xmm3
    addsd  xmm13, xmm2
    addsd  xmm12, xmm2
    addsd  xmm11, xmm2
    addsd  xmm10, xmm2
    addsd  xmm9, xmm2
    cmp    eax, ebx
    jne    .L4
```

Changing the scalar versions to the packed versions (`addpd` and `mulpd`) would double the flop count without changing the execution time, so I'd get just short of 2.8 flops per cycle. Is there a simple example which achieves four flops per cycle?
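For illustration, here is a minimal sketch of what that packed variant of the inner loop might look like with SSE2 intrinsics. This is only a sketch of the idea, not code from my measurements; the function name `addmul_pd` and the accumulator initial values are made up, but it follows the same independent-accumulator pattern as the scalar loop above:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics: __m128d, _mm_add_pd, _mm_mul_pd

// Same five independent add chains and five independent mul chains as the
// scalar version, but each accumulator now holds two doubles, so every
// packed instruction counts as two flops.
double addmul_pd(double add, double mul, int ops) {
    __m128d vadd = _mm_set1_pd(add);
    __m128d vmul = _mm_set1_pd(mul);
    __m128d sum1 = _mm_set_pd(0.1, -0.1), sum2 = _mm_set_pd(0.2, -0.2),
            sum3 = _mm_set_pd(0.3, -0.3), sum4 = _mm_set_pd(0.4, -0.4),
            sum5 = _mm_set_pd(0.5, -0.5);
    __m128d mul1 = _mm_set_pd(1.0, 1.1), mul2 = _mm_set_pd(1.2, 1.3),
            mul3 = _mm_set_pd(1.4, 1.5), mul4 = _mm_set_pd(1.6, 1.7),
            mul5 = _mm_set_pd(1.8, 1.9);
    int loops = ops / 20;   // 10 packed ops = 20 double-precision flops per iteration

    for (int i = 0; i < loops; i++) {
        mul1 = _mm_mul_pd(mul1, vmul); mul2 = _mm_mul_pd(mul2, vmul);
        mul3 = _mm_mul_pd(mul3, vmul); mul4 = _mm_mul_pd(mul4, vmul);
        mul5 = _mm_mul_pd(mul5, vmul);
        sum1 = _mm_add_pd(sum1, vadd); sum2 = _mm_add_pd(sum2, vadd);
        sum3 = _mm_add_pd(sum3, vadd); sum4 = _mm_add_pd(sum4, vadd);
        sum5 = _mm_add_pd(sum5, vadd);
    }

    // Horizontal reduction so the result is used and nothing is optimised away
    __m128d s = _mm_add_pd(_mm_add_pd(sum1, sum2), _mm_add_pd(sum3, sum4));
    s = _mm_add_pd(s, sum5);
    __m128d m = _mm_add_pd(_mm_add_pd(mul1, mul2), _mm_add_pd(mul3, mul4));
    m = _mm_add_pd(m, mul5);
    __m128d r = _mm_add_pd(s, m);
    return _mm_cvtsd_f64(_mm_add_sd(r, _mm_unpackhi_pd(r, r)));
}
```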
---

**Update:** Nice little programme by Mysticial; here are my results (run just for a few seconds though):

- `gcc -O2 -march=nocona`: 5.6 Gflops out of 10.66 Gflops (2.1 flops/cycle)
- `cl /O2`, openmp removed: 10.1 Gflops out of 10.66 Gflops (3.8 flops/cycle)

It all seems a bit complex, but my conclusions so far:

- `gcc -O2` changes the order of independent floating point operations with the aim of alternating `addpd`s and `mulpd`s if possible. The same applies to `gcc-4.6.2 -O2 -march=core2`.

- `gcc -O2 -march=nocona` seems to keep the order of floating point operations as defined in the C++ source.

- `cl /O2`, the 64-bit compiler from the [SDK for Windows 7](http://www.microsoft.com/download/en/details.aspx?id=3138), does loop unrolling automatically and seems to try to arrange operations so that groups of three `addpd`s alternate with three `mulpd`s (well, at least on my system and for my simple programme).

- My [Core i5-750](http://en.wikipedia.org/wiki/List_of_Intel_Core_i5_microprocessors#Based_on_Nehalem_microarchitecture) ([Nehalem architecture](http://en.wikipedia.org/wiki/Nehalem_%28microarchitecture%29)) doesn't like alternating adds and muls and seems unable to run both operations in parallel. However, if grouped in threes, it suddenly works like magic.

- Other architectures (possibly [Sandy Bridge](http://en.wikipedia.org/wiki/Sandy_bridge) and others) appear to be able to execute add/mul in parallel without problems if they alternate in the assembly code.

- Although difficult to admit, on my system `cl /O2` does a much better job at low-level optimisation and achieves close to peak performance for the little C++ example above. I measured between 1.85 and 2.01 flops/cycle (I used `clock()` on Windows, which is not that precise; I guess I need to use a better timer, as shown in the sketch below — thanks Mackie Messer).

- The best I managed with `gcc` was to manually unroll the loop and arrange additions and multiplications in groups of three. With `g++ -O2 -march=nocona addmul_unroll.cpp` I get at best `0.207s, 4.825 Gflops`, which corresponds to 1.8 flops/cycle, which I'm quite happy with now.

In the C++ code I've replaced the `for` loop with

```cpp
for (int i = 0; i < loops/3; i++) {
    mul1*=mul; mul2*=mul; mul3*=mul;
    sum1+=add; sum2+=add; sum3+=add;
    mul4*=mul; mul5*=mul; mul1*=mul;
    sum4+=add; sum5+=add; sum1+=add;
    mul2*=mul; mul3*=mul; mul4*=mul;
    sum2+=add; sum3+=add; sum4+=add;
    mul5*=mul; mul1*=mul; mul2*=mul;
    sum5+=add; sum1+=add; sum2+=add;
    mul3*=mul; mul4*=mul; mul5*=mul;
    sum3+=add; sum4+=add; sum5+=add;
}
```

And the assembly now looks like

```asm
.L4:
    mulsd  xmm8, xmm3
    mulsd  xmm7, xmm3
    mulsd  xmm6, xmm3
    addsd  xmm13, xmm2
    addsd  xmm12, xmm2
    addsd  xmm11, xmm2
    mulsd  xmm5, xmm3
    mulsd  xmm1, xmm3
    mulsd  xmm8, xmm3
    addsd  xmm10, xmm2
    addsd  xmm9, xmm2
    addsd  xmm13, xmm2
    ...
```
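On the timer point above, a minimal sketch of a cycle-based timer using the time-stamp counter. This is an assumption on my part, not something I have measured with yet: it presumes GCC/Clang on x86-64 and a CPU with a constant-rate TSC (turbo frequency shifts can still skew the flops/cycle figure):

```cpp
#include <stdint.h>
#include <x86intrin.h>  // __rdtsc()

// Read the time-stamp counter before and after the measured region.
// On CPUs with an invariant TSC this ticks at a fixed reference rate,
// so (stop - start) gives cycle counts at far finer resolution than
// clock(). Out-of-order execution can blur the boundaries slightly;
// a serialising instruction (e.g. cpuid) would tighten them further.
static inline uint64_t cycles(void) {
    return __rdtsc();
}

// Usage sketch:
//   uint64_t c0 = cycles();
//   x = addmul(x, y, n);
//   uint64_t c1 = cycles();
//   double flops_per_cycle = (double)n / (double)(c1 - c0);
```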
 
