
C code loop performance [continued]
<p>This question continues on my question here (on the advice of Mystical):</p> <p><a href="https://stackoverflow.com/questions/9992054/c-code-loop-performance">C code loop performance</a></p> <hr> <p>Continuing on my question, when I use packed instructions instead of scalar instructions, the code using intrinsics looks very similar:</p> <pre><code>for(int i=0; i&lt;size; i+=16){
    y1 = _mm_load_ps(&amp;output[i]);
    …
    y4 = _mm_load_ps(&amp;output[i+12]);

    for(k=0; k&lt;ksize; k++){
        for(l=0; l&lt;ksize; l++){
            w  = _mm_set_ps1(weight[i+k+l]);

            x1 = _mm_load_ps(&amp;input[i+k+l]);
            y1 = _mm_add_ps(y1, _mm_mul_ps(w, x1));
            …
            x4 = _mm_load_ps(&amp;input[i+k+l+12]);
            y4 = _mm_add_ps(y4, _mm_mul_ps(w, x4));
        }
    }
    _mm_store_ps(&amp;output[i],    y1);
    …
    _mm_store_ps(&amp;output[i+12], y4);
}
</code></pre> <p>The measured performance of this kernel is about 5.6 FP operations per cycle, although I would expect it to be exactly 4x the performance of the scalar version, i.e. 4 × 1.6 = 6.4 FP ops per cycle.</p> <p>Taking the move of the weight factor into account (thanks for pointing that out), the schedule looks like:</p> <p><img src="https://i.stack.imgur.com/vhKri.png" alt="schedule"></p> <p>It looks like the schedule doesn't change, although there is an extra instruction after the <code>movss</code> operation that moves the scalar weight value to the XMM register and then uses <code>shufps</code> to copy this scalar value into the entire vector. It seems like the weight vector is ready to be used for the <code>mulps</code> in time, even taking the switching latency from the load domain to the floating-point domain into account, so this shouldn't incur any extra latency.</p> <p>The <code>movaps</code> (aligned, packed move), <code>addps</code> &amp; <code>mulps</code> instructions that are used in this kernel (checked with assembly code) have the same latency &amp; throughput as their scalar versions, so this shouldn't incur any extra latency either.</p> <p>Does anybody have an idea where this extra cycle per 8 cycles is spent, assuming the maximum performance this kernel can get is 6.4 FP ops per cycle and it is running at 5.6 FP ops per cycle?</p> <hr> <p>By the way, here is what the actual assembly looks like:</p> <pre><code>…
Block x:
  movapsx  (%rax,%rcx,4),     %xmm0
  movapsx  0x10(%rax,%rcx,4), %xmm1
  movapsx  0x20(%rax,%rcx,4), %xmm2
  movapsx  0x30(%rax,%rcx,4), %xmm3
  movssl   (%rdx,%rcx,4),     %xmm4
  inc      %rcx
  shufps   $0x0, %xmm4, %xmm4   {fill weight vector}
  cmp      $0x32, %rcx
  mulps    %xmm4, %xmm0
  mulps    %xmm4, %xmm1
  mulps    %xmm4, %xmm2
  mulps    %xmm3, %xmm4
  addps    %xmm0, %xmm5
  addps    %xmm1, %xmm6
  addps    %xmm2, %xmm7
  addps    %xmm4, %xmm8
  jl       0x401ad6 &lt;Block x&gt;
…
</code></pre>