
clEnqueueNDRangeKernel blocking on Nvidia hardware? (Also Multi-GPU)
<p>On Nvidia GPUs, when I call <code>clEnqueueNDRangeKernel</code>, the program blocks until the kernel finishes before continuing. More precisely, I'm calling its equivalent C++ binding, <code>CommandQueue::enqueueNDRangeKernel</code>, but this shouldn't make a difference. This only happens on Nvidia hardware (3 Tesla M2090s), which I access remotely; on our office workstations with AMD GPUs, the call is non-blocking and returns immediately. I don't have local Nvidia hardware to test on - we used to, and I remember similar behavior then, too, but my memory of it is hazy.</p> <p>This makes spreading the work across multiple GPUs harder. I've tried starting a new thread for each call to <code>enqueueNDRangeKernel</code> using <code>std::async</code>/<code>std::future</code> from the new C++11 spec, but that doesn't seem to work either - monitoring the GPU usage in nvidia-smi, I can see that the memory usage on GPU 0 goes up, then it does some work, then the memory on GPU 0 goes down and the memory on GPU 1 goes up, that one does some work, and so on. My gcc version is 4.7.0.</p> <p>Here's how I'm starting the kernels, where <code>increment</code> is the desired global work size divided by the number of devices, rounded up to the nearest multiple of the desired local work size:</p> <pre><code>std::vector&lt;cl::CommandQueue&gt; queues;
/* Population of queues happens somewhere */
cl::NDRange offset, increment, local;
std::vector&lt;std::future&lt;cl_int&gt;&gt; enqueueReturns;
int numDevices = queues.size();
/* Calculation of increment (local is gotten from the function parameters) */

//Distribute the job among each of the devices in the context
for(int i = 0; i &lt; numDevices; i++) {
    //Update the offset for the current device
    offset = cl::NDRange(i*increment[0], i*increment[1], i*increment[2]);

    //Start a new thread for each call to enqueueNDRangeKernel
    enqueueReturns.push_back(std::async(
        std::launch::async,
        &amp;cl::CommandQueue::enqueueNDRangeKernel,
        &amp;queues[i],
        kernels[kernel],
        offset, increment, local,
        (const std::vector&lt;cl::Event&gt;*)NULL, (cl::Event*)NULL));
    //Without those last two casts, the program won't even compile
}

//Wait for all threads to join before returning
for(int i = 0; i &lt; numDevices; i++) {
    execError = enqueueReturns[i].get();
    if(execError != CL_SUCCESS)
        std::cerr &lt;&lt; "Informative error omitted due to length" &lt;&lt; std::endl;
}
</code></pre> <p>The kernels definitely should be running on the call to <code>std::async</code>, since I can create a little dummy function, set a breakpoint on it in GDB, and have it step into the function the moment <code>std::async</code> is called. However, if I make a wrapper function for <code>enqueueNDRangeKernel</code>, run it there, and put in a print statement after the run, I can see that some time passes between prints.</p> <p>P.S. The Nvidia dev zone is down due to hackers and such, so I haven't been able to post the question there.</p> <p><strong>EDIT:</strong> Forgot to mention - the buffer that I'm passing to the kernel as an argument (the one I mention above that seems to get passed between the GPUs) is declared using <code>CL_MEM_COPY_HOST_PTR</code>. I had been using <code>CL_MEM_READ_WRITE</code>, with the same effect.</p>