<p>There are some cases where you can get programs to run at their full potential speed on the GPU with very little porting work from a plain CPU version, and this might be one of them.</p>

<p>If it's possible for you to have a function like this:</p>

<pre><code>void process_single_video_frame(void* part_of_frame)
{
    // initialize variables
    ...
    intermediate_result_1 = function1(part_of_frame);
    intermediate_result_2 = function2(intermediate_result_1);
    intermediate_result_3 = function3(intermediate_result_2);
    store_results(intermediate_result_3);
}
</code></pre>

<p><strong>and</strong> you can process many <code>part_of_frame</code>s at the same time (say, a few thousand),</p>

<p><strong>and</strong> <code>function1()</code>, <code>function2()</code> and <code>function3()</code> go through pretty much the same code paths (that is, the program flow does not depend heavily on the contents of the frame),</p>

<p>then local memory may do all the work for you. Local memory is a type of memory that is physically stored in global memory, but it differs from global memory in a subtle yet profound way: the memory is interleaved in such a way that adjacent threads access adjacent 32-bit words, which enables the memory access to be fully coalesced whenever all threads read from the same location of their local memory arrays.</p>

<p>The flow of your program would be to start by copying <code>part_of_frame</code> to a local array and preparing other local arrays for intermediate results. You then pass pointers to the local arrays between the various functions in your code.</p>

<p>Some pseudocode:</p>

<pre><code>const int size_of_one_frame_part = 1000;

__global__ void my_kernel(int* all_parts_of_frames)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int my_local_array[size_of_one_frame_part];

    memcpy(my_local_array,
           all_parts_of_frames + i * size_of_one_frame_part,
           size_of_one_frame_part * sizeof(int));

    int local_intermediate_1[100];
    function1(local_intermediate_1, my_local_array);
    ...
}

__device__ void function1(int* dst, int* src)
{
    ...
}
</code></pre>

<p>In summary, this approach may let you use your CPU functions pretty much unchanged, as the parallelism does not come from creating parallelized versions of your functions, but instead from running the entire chain of functions in parallel, one instance per thread. And this in turn is made possible by the hardware support for interleaving the memory of local arrays.</p>

<p>Notes:</p>

<ul>
<li><p>The initial copy of <code>part_of_frame</code> from global to local memory is not coalesced, but hopefully you will have enough calculations to hide that.</p></li>
<li><p>On devices of compute capability &lt;= 1.3, there is only 16 KiB of local memory available per thread, which may not be enough for your <code>part_of_frame</code> and the other intermediate data. But on compute capability &gt;= 2.0, this has been expanded to 512 KiB, which should be plenty.</p></li>
</ul>
 
