Title: Problems with using MPI 1.1 and 2.0 as compilation targets for parallel language implementations

Authors: Dan Bonachea, Jason Duell

Addresses: Computer Science Division, University of California at Berkeley, Berkeley, CA, USA

Abstract: MPI support is nearly ubiquitous on high-performance systems today and is generally highly tuned for performance. It would thus seem to offer a convenient 'portable network assembly language' to developers of parallel programming languages who wish to target different network architectures. Unfortunately, neither the traditional MPI 1.1 API nor the newer MPI 2.0 extensions for one-sided communication provide an adequate compilation target for global address space languages, and this is likely to be the case for many other parallel languages as well. Simulating one-sided communication under the MPI 1.1 API is too expensive, while the MPI 2.0 one-sided API imposes a number of significant restrictions on memory access patterns that would need to be incorporated at the language level, as a compiler cannot effectively hide them given current conflict and alias detection algorithms.
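For readers unfamiliar with the interface the abstract refers to, the following C sketch (illustrative only, not drawn from the paper) shows the flavour of the MPI 2.0 one-sided API: remote accesses such as MPI_Put must occur inside an explicit synchronization epoch over a pre-registered window, and the targeted memory is subject to access restrictions for the duration of that epoch. The buffer name and the fence-based epoch are assumptions for this sketch; MPI 2.0 also provides other synchronization modes (e.g. MPI_Win_start/MPI_Win_complete).

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process exposes N doubles through an RMA window. */
        const int N = 100;
        double *buf = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) buf[i] = rank;

        MPI_Win win;
        MPI_Win_create(buf, (MPI_Aint)(N * sizeof(double)), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Open an access epoch: a bare MPI_Put outside an epoch is erroneous. */
        MPI_Win_fence(0, win);
        if (rank == 0 && nprocs > 1) {
            double val = 42.0;
            /* Write one double into element 0 of rank 1's window.
               Under the MPI 2.0 rules, neither the target process nor any
               other process may touch that window location (locally or
               remotely) until the epoch closes. */
            MPI_Put(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        }
        /* Close the epoch; only now is the transferred data usable. */
        MPI_Win_fence(0, win);

        if (rank == 1) printf("rank 1: buf[0] = %g\n", buf[0]);

        MPI_Win_free(&win);
        free(buf);
        MPI_Finalize();
        return 0;
    }

It is precisely this window-wide synchronization, together with the rules restricting concurrent local and remote access to window memory, that a global address space language compiler would have to either expose at the language level or hide, and the abstract argues the latter is infeasible with current conflict and alias detection algorithms.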

Keywords: MPI support; parallel languages; global address space; RMA; one-sided communication; high performance computing; memory access patterns; compilation targets; high performance networking.

DOI: 10.1504/IJHPCN.2004.007569

International Journal of High Performance Computing and Networking, 2004, Vol. 1, No. 1/2/3, pp. 91-99

Published online: 05 Aug 2005
