So I'm a big fan of using [MPI_Type_create_subarray](http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Type_create_subarray.html) for pulling out slices of arrays; it's easier to keep straight than vector types. In general, you can't use a single vector type to describe multi-d guardcells (because there are multiple strides, you need to create vectors of vectors), but I think because you're only using 1 guardcell in each direction here, you're ok. (A subarray-based sketch for the x-face is included at the end of this answer.)

So let's consider the x-face guardcells; here you're sending an entire y-z plane to your x-neighbour. In memory, this looks like the following, given your array layout:

```
 +---------+
 |        @|
 |        @|
 |        @|   z=2
 |        @|
 |        @|
 +---------+
 |        @|
 |        @|
 |        @|   z=1
 |        @|
 |        @|
 +---------+
 |        @|
^|        @|
||        @|   z=0
y|        @|
 |        @|
 +---------+
   x->
```

So you're looking to send `count = ny*nz` blocks of 1 value each, strided by nx. I'm assuming here that nx, ny, and nz include guardcells and that you're sending the corner values; if you're not sending corner values, subarray is the way to go. I'm also assuming, crucially, that `g->data` is a *contiguous* block of `nx*ny*nz*2` (or 2 contiguous blocks of `nx*ny*nz`) doubles, otherwise all is lost.

So your type create should look like

```c
MPI_Type_vector((g->ny)*(g->nz), 1, g->nx, MPI_DOUBLE, &face1);
MPI_Type_commit(&face1);
```

Note that we are sending a total of `count*blocksize = ny*nz` values, which is right, and we are striding over `count*stride = nx*ny*nz` memory in the process, which is also right.

Ok, so the y-face looks like this:

```
 +---------+
 |@@@@@@@@@|
 |         |
 |         |   z=2
 |         |
 |         |
 +---------+
 |@@@@@@@@@|
 |         |
 |         |   z=1
 |         |
 |         |
 +---------+
 |@@@@@@@@@|
^|         |
||         |   z=0
y|         |
 |         |
 +---------+
   x->
```

So you have nz blocks of nx values, each separated by stride `nx*ny`. So your type create should look like

```c
MPI_Type_vector(g->nz, g->nx, (g->nx)*(g->ny), MPI_DOUBLE, &face2);
MPI_Type_commit(&face2);
```

And again double-checking: you're sending `count*blocksize = nz*nx` values, striding `count*stride = nx*ny*nz` memory. Check.

Finally, sending z-face data involves sending an entire x-y plane:

```
 +---------+
 |@@@@@@@@@|
 |@@@@@@@@@|
 |@@@@@@@@@|   z=2
 |@@@@@@@@@|
 |@@@@@@@@@|
 +---------+
 |         |
 |         |
 |         |   z=1
 |         |
 |         |
 +---------+
 |         |
^|         |
||         |   z=0
y|         |
 |         |
 +---------+
   x->
```

```c
MPI_Type_vector(1, (g->nx)*(g->ny), 1, MPI_DOUBLE, &face3);
MPI_Type_commit(&face3);
```

And again double-checking: you're sending `count*blocksize = nx*ny` values, striding `count*stride = nx*ny` memory. Check.

**Update**:

I didn't take a look at your Sendrecvs, but there might be something there, too. Notice that you have to pass a pointer to the first piece of data you're sending when you use a vector data type.

First off, if you have array size nx in the x direction and two guardcells (one on either side), your left guardcell is at index 0, your right guardcell is at index nx-1, and your 'real' data extends from 1..nx-2.
So to send your westmost data to your west neighbour, and to receive into your eastmost guardcell from your east neighbour, you would want

```c
/* Send to WEST, receive from EAST */
MPI_Sendrecv(&(g->data)[current][0][0][1],       1, face1, g->west, westtag,
             &(g->data)[current][0][0][g->nx-1], 1, face1, g->east, westtag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* Send to EAST, receive from WEST */
MPI_Sendrecv(&(g->data)[current][0][0][g->nx-2], 1, face1, g->east, easttag,
             &(g->data)[current][0][0][0],       1, face1, g->west, easttag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
```

(I like to use different tags for each stage of communication; it helps keep things sorted.)

Likewise for the other directions; a sketch of the y- and z-direction exchanges follows below.
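To make "likewise" concrete, here is a rough sketch of what the y- and z-direction exchanges could look like, mirroring the x-direction pair above. It assumes the index order is `[current][z][y][x]`, which is what the stride choices above imply. The neighbour fields (`g->north`, `g->south`, `g->up`, `g->down`) and the tag names are assumptions rather than names from your code, so substitute whatever your grid struct actually uses; the real data runs from 1..ny-2 in y and 1..nz-2 in z, just as in x.

```c
/* Sketch only: y- and z-direction exchanges, analogous to the x-direction pair.
 * Neighbour ranks (g->north, g->south, g->up, g->down) and tags are assumed names. */

/* Send southmost real y-plane (y=1) to SOUTH, receive from NORTH
 * into the northmost guardcell plane (y=ny-1). */
MPI_Sendrecv(&(g->data)[current][0][1][0],       1, face2, g->south, southtag,
             &(g->data)[current][0][g->ny-1][0], 1, face2, g->north, southtag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* Send northmost real y-plane (y=ny-2) to NORTH, receive from SOUTH
 * into the southmost guardcell plane (y=0). */
MPI_Sendrecv(&(g->data)[current][0][g->ny-2][0], 1, face2, g->north, northtag,
             &(g->data)[current][0][0][0],       1, face2, g->south, northtag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* Send bottom real z-plane (z=1) DOWN, receive from UP
 * into the top guardcell plane (z=nz-1). */
MPI_Sendrecv(&(g->data)[current][1][0][0],       1, face3, g->down, downtag,
             &(g->data)[current][g->nz-1][0][0], 1, face3, g->up,   downtag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* Send top real z-plane (z=nz-2) UP, receive from DOWN
 * into the bottom guardcell plane (z=0). */
MPI_Sendrecv(&(g->data)[current][g->nz-2][0][0], 1, face3, g->up,   uptag,
             &(g->data)[current][0][0][0],       1, face3, g->down, uptag,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
```

The pattern is always the same: the send buffer points at the first element of the outermost *real* plane on the side you're sending to, and the receive buffer points at the first element of the guardcell plane on the opposite side.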
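Finally, since this answer opens by recommending `MPI_Type_create_subarray`, here is a sketch of what the x-face type could look like that way, under the same assumptions as above (a contiguous, C-ordered nz-by-ny-by-nx block of doubles with x fastest-varying); the name `xface` is just illustrative.

```c
/* Sketch: the x-face (one y-z plane, one cell thick in x) as a subarray
 * instead of a vector.
 *  - sizes:    the full local array, guardcells included
 *  - subsizes: a slab one cell thick in x
 *  - starts:   {0,0,0}, so the face's x-offset is supplied through the
 *              buffer pointer, exactly as with the vector type above */
int sizes[3]    = { g->nz, g->ny, g->nx };
int subsizes[3] = { g->nz, g->ny, 1     };
int starts[3]   = { 0,     0,     0     };

MPI_Datatype xface;
MPI_Type_create_subarray(3, sizes, subsizes, starts, MPI_ORDER_C,
                         MPI_DOUBLE, &xface);
MPI_Type_commit(&xface);
```

This can be dropped into the same Sendrecv calls in place of `face1`. The payoff comes when the face you want is not a full plane of the local array, e.g., if you decide not to exchange the corner values: then you just shrink `subsizes` and move `starts` inward, whereas a single `MPI_Type_vector` can no longer describe the region.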