Perhaps the most elegant solution would be to make the Python code part of your MPI application. It could then send data directly (via MPI messages) to the rest of the MPI job, since it would be part of it. There are two different approaches here:

1) Insert the Python binary as rank 0 in your MPI job. To exclude it from participating in the collective operations of `mpibinary`, you have to create a subcommunicator that excludes rank 0 and use it for all further collective communication in `mpibinary`. The first step is the easy part. In Open MPI you would do:

```
mpirun --hostfile hosts -np 1 pythonbinary args : -np 32 mpibinary args
```

This is called an MPMD (multiple programs multiple data) launch: it starts one copy of `pythonbinary`, which becomes rank 0, and 32 copies of `mpibinary`, which become rank 1, rank 2, ... up to rank 32 (33 processes in total). Other MPI implementations provide very similar mechanisms for MPMD launches. You would then use `MPI_Comm_split()` to create a new communicator that does not include the Python program. Splitting a communicator is a collective operation, which is why you have to call it both in your Python code and in your C++ application. `MPI_Comm_split()` takes a "color" and a key and splits the communicator into multiple subcommunicators according to the different colors; processes with the same color are then ordered by their key values. You will most likely want to call it like this:

in Python:

```
python_comm = mpi.mpi_comm_split(mpi.MPI_COMM_WORLD, 0, 0)
```

in C++:

```
int rank;
MPI_Comm c_comm;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_split(MPI_COMM_WORLD, 1, rank, &c_comm);
```

Using `rank` as the key guarantees that the order of the processes in `c_comm` is the same as before the split, i.e. rank 1 from `MPI_COMM_WORLD` becomes rank 0 in `c_comm`, rank 2 becomes rank 1, and so on.

From now on the C++ application can use `c_comm` to perform collective operations as usual. To communicate between the Python and the C++ code, you still have to use `MPI_COMM_WORLD`, in which the Python code is still rank 0.
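To make the wiring concrete, here is a minimal sketch of the Python side (rank 0) of approach 1). It assumes the mpi4py binding, and the payload and tag values are made up; any Python MPI binding built against the same MPI library would be used in the same way.

```python
# Sketch only: assumes mpi4py; the array contents and tag value are placeholders.
from mpi4py import MPI          # MPI_Init is called on import
import numpy as np

world = MPI.COMM_WORLD          # this Python process is rank 0 here

# Take part in the collective split: color 0 puts the Python process in its
# own subcommunicator, matching the color-1 split done by the C++ ranks.
python_comm = world.Split(color=0, key=0)

# Send a buffer of doubles to rank 1 of MPI_COMM_WORLD (rank 0 of c_comm).
# The C++ side would post a matching
# MPI_Recv(buf, 8, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE).
data = np.linspace(0.0, 1.0, 8)
world.Send([data, MPI.DOUBLE], dest=1, tag=42)

# MPI_Finalize is invoked automatically by mpi4py at interpreter exit.
```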
2) Use the MPI-2 process management facilities. First you run an MPI job that consists of the Python binary only:

```
mpirun --hostfile hosts -np 1 pythonbinary args
```

The Python binary then spawns the other MPI binary directly using `MPI_Comm_spawn()` with the desired number of new processes. The newly spawned processes get their own `MPI_COMM_WORLD`, so you do not need `MPI_Comm_split()`. The spawn operation also establishes an intercommunicator that allows the Python code to send messages to the rest of the MPI application (a minimal sketch of this approach is given at the end of this answer).

---

In both cases the `hosts` file would contain definitions of all execution hosts that can run the MPI binaries. You would also need to use one of the available Python MPI bindings.

Note that you only need to add a few MPI calls to your Python script, such as `MPI_Init`, `MPI_Finalize`, `MPI_Comm_split` and the relevant `MPI_Send`/`MPI_Recv`; you do not need to make it parallel. MPI is quite versatile in that it allows you to use it not only for parallel worksharing but also as a general messaging framework. Just note that the Python bindings must use the same MPI library as the rest of the program.

Another solution would be to use some message queuing library, or file pooling (which is really a crude MQ implementation).
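For completeness, here is what the spawn-based approach 2) might look like on the Python side. Again this is only a sketch assuming mpi4py; the binary name, argument list, process count and tag are placeholders.

```python
# Sketch only: assumes mpi4py; "./mpibinary", its args and the tag are placeholders.
from mpi4py import MPI
import numpy as np

# Spawn 32 copies of the C++ binary. They get a fresh MPI_COMM_WORLD of their
# own, and `intercomm` connects this Python process to all of them.
intercomm = MPI.COMM_SELF.Spawn('./mpibinary', args=['args'], maxprocs=32)

# Send a buffer of doubles to rank 0 of the spawned group. On the C++ side the
# children obtain this intercommunicator with MPI_Comm_get_parent() and
# receive from rank 0 of the parent (Python) group.
data = np.linspace(0.0, 1.0, 8)
intercomm.Send([data, MPI.DOUBLE], dest=0, tag=7)

intercomm.Disconnect()
```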