<p>Message Passing Interface (MPI) (<a href="http://www-unix.mcs.anl.gov/mpi/" rel="noreferrer">http://www-unix.mcs.anl.gov/mpi/</a>) is a highly scalable and robust library for parallel programming, geared originally toward C but now available in several flavors (<a href="http://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations" rel="noreferrer">http://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations</a>). While the library doesn't introduce new syntax, it provides a communication protocol for orchestrating the sharing of data between parallelizable routines.</p>

<p>Traditionally it is used in large cluster computing rather than on a single system for concurrency, although multi-core systems can certainly take advantage of it.</p>

<p>Another interesting solution to the problem of parallel programming is OpenMP, a portable extension available on various platforms that lets the programmer give the compiler hints about which sections of code are easily parallelizable.</p>

<p>For example (<a href="http://en.wikipedia.org/wiki/OpenMP#Work-sharing_constructs" rel="noreferrer">http://en.wikipedia.org/wiki/OpenMP#Work-sharing_constructs</a>):</p>

<pre><code>#define N 100000

int main(int argc, char *argv[])
{
    int i, a[N];

    #pragma omp parallel for
    for (i = 0; i &lt; N; i++)
        a[i] = 2 * i;

    return 0;
}
</code></pre>

<p>There are advantages and disadvantages to both, of course, but the former has proven extremely successful in academia and other heavy scientific computing applications. YMMV.</p>
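<p>For comparison with the OpenMP snippet, the explicit message-passing style that MPI uses can be sketched roughly as below. This is an illustrative sketch, not part of the original answer: it assumes an installed MPI implementation (e.g. MPICH or Open MPI) and is typically built with <code>mpicc</code> and launched with something like <code>mpirun -np 2 ./a.out</code>.</p>

```c
/* Minimal MPI sketch: rank 0 sends one integer to rank 1.
 * Hypothetical example; requires an MPI implementation to build:
 *   mpicc example.c -o example && mpirun -np 2 ./example */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);               /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank (id)    */

    if (rank == 0) {
        value = 42;
        /* explicit point-to-point communication: no shared memory,
         * data moves only via send/receive calls */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();                       /* shut down the runtime       */
    return 0;
}
```

<p>Note the contrast: OpenMP parallelizes a loop over shared memory with a single pragma, while MPI makes every data exchange between processes explicit, which is what lets it scale across cluster nodes that share no memory at all.</p>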