The statement

    READ(LIN,'(L3)') PRNTA

causes the program to read, from the unit attached to the channel with id LIN, a 3-character sequence representing a logical value, and to assign the value read to the variable PRNTA. From the fragments you've shown us, the program will read .F. and set PRNTA to .false..

LIN is set to the constant value 5, which usually means stdin. This use of 5 to denote stdin is not a de jure standard; it is more of a de facto standard.

The straightforward way to read a parameter file into an MPI program is to ensure that only one process reads the file and then sends the values out to the other processes that need them.

You seem to have written a program in which all processes try to read the same input file but, at run time, the redirection you've used to pass Modelfile.txt is only working for one process (presumably the process with rank 0). The other processes are not getting an input file at all; they complain and then bring the program crashing down. The error message you show is typical of a Fortran program which fails to find an input file when it tries to read.

Far better to write code along the lines of:

    call mpi_init
    ...
    if (myrank == 0) then
       open(...) inputfile
       read(...) parameters
       close(...)
    end if
    ...
    call mpi_bcast(parameters from 0 to all)
    ...

(A runnable sketch of this pattern is given below.)

In general, don't expect the run-time environment of MPI processes to be an identical copy of the run-time environment of a sequential program. I think you are seeing evidence that your run-time directs the input only to the first process created when your program runs. Since mpirun is not standardised (though mpiexec is), I don't think you can rely on this run-time behaviour being the same across MPI implementations. For portability and compatibility you are better off handling I/O explicitly within your program than relying on o/s features such as redirection.

You could, rather than have process 0 read the parameters and distribute them to the other processes, write your code so that each process reads the same file. If you do write it this way, take care to ensure that the processes aren't fighting over access to the I/O channels; having multiple processes try to read (nearly) simultaneously across a single input channel is a sure way to slow things down.
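Here is a minimal, self-contained sketch of the read-and-broadcast pattern described above. It assumes the parameter file is called Modelfile.txt and contains a single logical written as .F. (as in the question); the program name, the unit number 20, and the choice to treat PRNTA as the only parameter are illustrative assumptions, not part of the original code.

    program read_and_bcast
       use mpi
       implicit none

       integer            :: ierr, myrank
       integer, parameter :: lin = 20   ! assumed free unit; 5 is best left for stdin
       logical            :: prnta      ! the logical parameter named in the question

       call mpi_init(ierr)
       call mpi_comm_rank(MPI_COMM_WORLD, myrank, ierr)

       if (myrank == 0) then
          ! Only rank 0 opens and reads the parameter file.
          open(unit=lin, file='Modelfile.txt', status='old', action='read')
          read(lin, '(L3)') prnta
          close(lin)
       end if

       ! Every process, rank 0 included, ends up with the value read by rank 0.
       call mpi_bcast(prnta, 1, MPI_LOGICAL, 0, MPI_COMM_WORLD, ierr)

       call mpi_finalize(ierr)
    end program read_and_bcast

Launched with, say, mpiexec -n 4 ./read_and_bcast, every rank holds the same value of prnta without any reliance on shell redirection. A file with several parameters of mixed types would need one broadcast per item (or a packed buffer), but the shape of the code stays the same; the alternative in which every rank opens and reads the file itself simply drops the rank guard and the broadcast, with the contention caveat noted at the end of the answer.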
Comments:

1. @HighPerformanceMark: Dear Mark! Just out of curiosity, what is your experience? Does concurrent reading really slow things down? Is it always the case, or only for large files? Is it file-system dependent? I know many codes (including ours) for which the (admittedly only a few kB) input file is read by all threads concurrently, and it does not seem to be any kind of bottleneck. (On the other hand, writing to the same file concurrently by mistake had a really huge impact.)
2. @BálintAradi: To match these statements to your questions: Extensive, including machines with 0000s of processors and parallel I/O. Concurrent access to small files by small numbers of processes: probably not causing enough slow-down to fret about. Test and measure; don't take my word on this. Yes, o/s-, file-system- and h/w-dependent. YMMV, or, to be more accurate, your experience does vary (not surprised). Finally, the advice offered in this answer is aimed at the neophyte OP; your footprint on SO suggests you are mature enough to figure most of this out for yourself.
3. Thanks! Sorry for my formulation above; I realize that it may read as somewhat offensive, which it was definitely not meant to be. I did not question your experience. On the contrary, I asked for advice in that matter, as we are at the beginning of our parallelization project and I'd like to avoid making bad design decisions stemming from a lack of experience on my side. Unfortunately, I do not have access to such large-scale machines as you do, so my testing possibilities are rather limited.