While it is possible to obtain the ID of the logical processor that currently executes the code, this often makes no sense unless you **enable MPI process binding**, also known as process pinning (in Intel's parlance). Binding (or pinning) restricts the CPU affinity set of each MPI process, i.e. the set of CPUs on which the process is allowed to execute. If the affinity set includes only a single logical CPU, then the process will only execute on that logical CPU. A logical CPU usually corresponds to a hardware thread on CPUs with SMT/hyperthreading, or to a CPU core on non-SMT/non-hyperthreaded CPUs. Given an affinity set that includes more than one logical CPU, the scheduler is allowed to migrate the process around in order to keep the CPUs in the set equally busy. The default affinity set usually includes all available logical CPUs, that is, the process could be scheduled for execution on any core or hardware thread.

Only when MPI process binding is in place and each process is bound to a single logical CPU does it make sense to actually query the OS for the location of the process. You have to consult your MPI implementation's manual on how to enable it. For example, with Open MPI you would do something like:

```
mpiexec --bind-to-core --bycore -n 120 ...
```

`--bind-to-core` tells Open MPI to bind each process to a single CPU core and `--bycore` tells it to allocate cores consecutively on multi-socket machines (that is, first all cores in the first socket, then those in the second socket, etc.). With Intel MPI the binding (called *pinning* by Intel) is enabled by setting the environment variable `I_MPI_PIN` to `1`. The process placement strategy is controlled by the value of `I_MPI_PIN_DOMAIN`. To achieve the same as the Open MPI command line shown above, one would do the following with Intel MPI:

```
mpiexec -n 120 -env I_MPI_PIN 1 -env I_MPI_PIN_DOMAIN "core:compact" ...
```

To obtain the location of your processes in a platform-independent way, you could use [`hwloc_get_last_cpu_location()`](http://www.open-mpi.org/projects/hwloc/doc/v1.6.2/a00053.php#ga8e9a4b5ee3eaa18fd3a229790c6b5b17) from the [**hwloc** library](http://www.open-mpi.org/projects/hwloc/doc/v1.6.2/). It is developed as part of the Open MPI project but can be used as a stand-alone library. It provides an abstract interface to query the system topology and to manipulate the affinity of processes and threads. **hwloc** supports Linux, Windows and many other OSes. A sketch of how this fits together with MPI follows below.
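Purely as an illustration, here is a minimal sketch of such a query, assuming hwloc and some MPI implementation are installed; the file name and compile line are assumptions, not something prescribed by either library. In hwloc terminology a PU (processing unit) is a logical CPU. On Linux alone `sched_getcpu()` would return a similar number, but the hwloc call below is portable:

```c
/* where_am_i.c -- a minimal, illustrative sketch: each MPI rank reports
 * the PU(s) (logical CPUs) on which it last executed.
 * Build with something like: mpicc where_am_i.c -o where_am_i -lhwloc
 */
#include <stdio.h>
#include <mpi.h>
#include <hwloc.h>

int main(int argc, char *argv[])
{
    int rank;
    hwloc_topology_t topology;
    hwloc_cpuset_t set;
    char buf[128];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Discover the topology of the local machine */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Ask the OS where this process last ran */
    set = hwloc_bitmap_alloc();
    if (hwloc_get_last_cpu_location(topology, set, HWLOC_CPUBIND_PROCESS) == 0) {
        /* Render the PU set as a list, e.g. "4" or "0-3" */
        hwloc_bitmap_list_snprintf(buf, sizeof(buf), set);
        printf("Rank %d last ran on PU(s) %s\n", rank, buf);
    }

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topology);
    MPI_Finalize();
    return 0;
}
```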
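If run under one of the binding setups shown above, e.g. `mpiexec --bind-to-core --bycore -n 120 ./where_am_i`, each rank should report a single, stable PU; without binding, the reported location is merely where the scheduler last happened to place the process and may change from one call to the next.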