
cuda: Is shared memory always helpful?
When I read the programming guide, I got the impression that shared memory always improves performance, but it seems that is not the case. I have two functions:

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

const int Ntimes = 1;

__global__ void testgl(float *A, float *C, int numElements)
{
    int ti = threadIdx.x;
    int b0 = blockDim.x * blockIdx.x;
    if (b0 + ti < numElements)
    {
        for (int i = 0; i < Ntimes; i++)
        {
            A[b0+ti] = A[b0+ti]*A[b0+ti]*10 - 2*A[b0+ti] + 1;
        }
        C[b0+ti] = A[b0+ti]*A[b0+ti];
    }
}

__global__ void testsh(float *A, float *C, int numElements)
{
    int ti = threadIdx.x;
    int b0 = blockDim.x * blockIdx.x;
    __shared__ float a[1024];
    if (b0 + ti < numElements)
    {
        a[ti] = A[b0+ti];
    }
    __syncthreads();
    if (b0 + ti < numElements)
    {
        for (int i = 0; i < Ntimes; i++)
        {
            a[ti] = a[ti]*a[ti]*10 - 2*a[ti] + 1;
        }
        C[b0+ti] = a[ti]*a[ti];
    }
}

int main(void)
{
    int numElements = 500000;
    size_t size = numElements * sizeof(float);

    // Allocate the host input
    float *h_A = (float *)malloc(size);
    float *h_B = (float *)malloc(size);

    // Allocate the host output
    float *h_C = (float *)malloc(size);
    float *h_D = (float *)malloc(size);

    // Initialize the host input
    for (int i = 0; i < numElements; i++)
    {
        h_A[i] = rand()/(float)RAND_MAX;
        h_B[i] = h_A[i];
    }

    // Allocate the device input and output
    float *d_A = NULL; cudaMalloc((void **)&d_A, size);
    float *d_B = NULL; cudaMalloc((void **)&d_B, size);
    float *d_C = NULL; cudaMalloc((void **)&d_C, size);
    float *d_D = NULL; cudaMalloc((void **)&d_D, size);

    // Copy to device
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    // Launch the kernels
    int threadsPerBlock = 1024;
    int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;
    testgl<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_C, numElements);
    testsh<<<blocksPerGrid, threadsPerBlock>>>(d_B, d_D, numElements);

    // Copy the device results to the host
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(h_D, d_D, size, cudaMemcpyDeviceToHost);

    // Free device global memory
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); cudaFree(d_D);

    // Free host memory
    free(h_A); free(h_B); free(h_C); free(h_D);

    // Reset the device and exit
    cudaDeviceReset();
    return 0;
}
```

With Ntimes set to 1, testgl takes 49 µs and testsh takes 97 µs. With Ntimes set to 100, testgl takes 9.7 ms and testsh takes 8.9 ms.

I do not understand why the runtime grows by more than a factor of 100 when Ntimes only increases 100-fold.

So it seems shared memory helps only when the kernel does a lot of work on the data in the device. Is that right?

The card used here is a **GTX680**.

Thanks in advance.
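The question does not say how the per-kernel times were collected. One way to measure them directly in the program, rather than with an external profiler, is with CUDA events. Below is a minimal host-side sketch of that technique; `timeKernel` is a hypothetical helper (not from the code above), it assumes the two kernels and the device buffers are already set up, and it omits error checking for brevity:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical helper: time one launch of a kernel with CUDA events.
// Assumes d_in and d_out are device buffers of at least numElements floats.
float timeKernel(void (*kernel)(float *, float *, int),
                 float *d_in, float *d_out, int numElements)
{
    int threadsPerBlock = 1024;
    int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    kernel<<<blocksPerGrid, threadsPerBlock>>>(d_in, d_out, numElements);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);            // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop); // elapsed time in milliseconds

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}
```

Usage, assuming the buffers from the program above:

```cuda
printf("testgl: %.3f ms\n", timeKernel(testgl, d_A, d_C, numElements));
printf("testsh: %.3f ms\n", timeKernel(testsh, d_B, d_D, numElements));
```

Note that the first launch of each kernel includes one-time startup costs, so it is common to run a warm-up launch before the timed one.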