<p>Determining the right I/O strategy depends on many factors. If you are only sending back a handful of eigenvalues, and you are stuck writing ASCII, you may be best off sending all the data back to process 0 to write. This is <em>not</em> normally a winning strategy, since it obviously doesn't scale; but if the amount of data is very small, it could well beat the contention involved in trying to write to a shared file (which is, again, harder with ASCII).</p>

<p>The code below gathers all the data back to process 0, assuming every rank has the same amount of data.</p>

<p>Another approach is to have every rank write out its own ks and eigenvalues, and then, as a post-processing step once the program has finished, cat them all together. That avoids the MPI step, can (with the right filesystem) scale up quite a ways, and is easy. Whether it is actually better is easily testable, and will depend on the amount of data, the number of processors, and the underlying file system.</p>

<pre><code>program testio
    use mpi
    implicit none

    integer, parameter :: atom_count = 5
    integer, parameter :: kpertask   = 2
    integer, parameter :: fileunit   = 7
    integer, parameter :: io_master  = 0
    double precision, parameter :: pi = 3.14159

    integer :: totalk
    integer :: ierr
    integer :: rank, nprocs
    integer :: j, k
    double precision, dimension(atom_count, kpertask) :: eigenvalues
    double precision, dimension(kpertask)             :: ks
    double precision, allocatable, dimension(:,:)     :: alleigenvals
    double precision, allocatable, dimension(:)       :: allks

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    totalk = nprocs*kpertask

    !! set up test data
    do k = 1, kpertask
        ks(k) = (rank*kpertask + k) * 1.d-4 * pi
        do j = 1, atom_count
            eigenvalues(j,k) = rank*100 + j
        enddo
    enddo

    !! everyone sends proc 0 their data
    if (rank == io_master) then
        allocate(allks(totalk))
        allocate(alleigenvals(atom_count, totalk))
    endif

    call MPI_GATHER(ks, kpertask, MPI_DOUBLE_PRECISION, &amp;
                    allks, kpertask, MPI_DOUBLE_PRECISION, &amp;
                    io_master, MPI_COMM_WORLD, ierr)

    call MPI_GATHER(eigenvalues, kpertask*atom_count, MPI_DOUBLE_PRECISION, &amp;
                    alleigenvals, kpertask*atom_count, MPI_DOUBLE_PRECISION, &amp;
                    io_master, MPI_COMM_WORLD, ierr)

    !! proc 0 writes everything out
    if (rank == io_master) then
        open(unit=fileunit, file='output.txt')
        do k = 1, totalk
            write(fileunit, *) allks(k), (alleigenvals(j,k), j = 1, atom_count)
        enddo
        close(unit=fileunit)
        deallocate(allks)
        deallocate(alleigenvals)
    endif

    call MPI_FINALIZE(ierr)
end program testio
</code></pre>
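The "everyone writes its own file, then concatenate" route from the last paragraph can be sketched in the shell. The file names and contents here are purely illustrative stand-ins for what each rank would write (assuming a naming scheme like <code>eigenvalues.NNNN.txt</code> with a zero-padded rank, so that lexical glob order matches rank order):

```shell
# Stand-ins for the per-rank output files the MPI program would write;
# the zero-padded rank in the name makes a lexical glob sort by rank.
printf '0.0001 1 2 3 4 5\n'           > eigenvalues.0000.txt
printf '0.0002 101 102 103 104 105\n' > eigenvalues.0001.txt

# Post-processing: shell globs expand in lexical order, which here is
# rank order, so a single cat reassembles the full table.
cat eigenvalues.????.txt > output_all.txt
```

The zero-padding matters: without it, rank 10 would sort before rank 2 and the concatenated table would come out in the wrong order.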