Abstract
This paper describes compiler techniques that translate standard OpenMP applications into code for distributed computer systems. OpenMP has emerged as an important model and language extension for shared-memory parallel programming. However, despite its success on shared-memory platforms, OpenMP is not currently used on distributed systems. The long-term goal of our project is to quantify the degree to which such use is possible and to develop the supporting compiler techniques. Our present compiler techniques translate OpenMP programs into a form suitable for execution on a Software DSM system. We have implemented a compiler that performs this basic translation, and we have studied a number of hand optimizations that improve on the baseline performance. Our approach complements related efforts that have proposed language extensions for efficient execution of OpenMP programs on distributed systems. Our results show that, while kernel benchmarks can achieve high efficiency with OpenMP on distributed systems, full applications require careful consideration of shared data access patterns. A naive translation (similar to that of OpenMP compilers for SMPs) leads to acceptable performance in only a few applications. Additional optimizations, however, including access privatization, selective touch, and dynamic scheduling, yield a 31% average improvement on our benchmarks.
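As a concrete illustration of the baseline translation the abstract refers to, the sketch below shows how an OpenMP work-sharing loop might be rewritten into the SPMD form typically used on a page-based Software DSM such as TreadMarks. This is a minimal sketch, not the paper's actual compiler output; the TreadMarks-style primitives (Tmk_barrier, and the Tmk_proc_id/Tmk_nprocs globals) are used for illustration, and the block-scheduling arithmetic is an assumption.

```c
/* Hypothetical sketch of a baseline OpenMP-to-Software-DSM translation.
 * Assumes TreadMarks-style primitives; names are illustrative only.
 *
 * Original OpenMP source:
 *   #pragma omp parallel for
 *   for (i = 0; i < n; i++)
 *       a[i] = b[i] + b[i+1];
 */
#include "Tmk.h"   /* TreadMarks runtime: Tmk_proc_id, Tmk_nprocs, Tmk_barrier */

double *a, *b;      /* shared arrays, assumed allocated in DSM memory
                       (e.g., via Tmk_malloc on process 0) */

void translated_loop(int n)
{
    /* Static block schedule: each process takes one contiguous chunk. */
    int chunk = (n + Tmk_nprocs - 1) / Tmk_nprocs;
    int lb = Tmk_proc_id * chunk;
    int ub = (lb + chunk < n) ? lb + chunk : n;

    for (int i = lb; i < ub; i++)
        a[i] = b[i] + b[i + 1];

    /* Implicit barrier at the end of the OpenMP work-sharing construct;
     * under lazy release consistency this is also the point where
     * modifications to shared pages become visible to other processes. */
    Tmk_barrier(0);
}
```

The optimizations named above then refine this pattern: access privatization replaces DSM accesses to read-only or processor-local data with ordinary private memory, selective touch arranges for each process to touch only the shared data it will later access so that pages become resident where they are used, and dynamic scheduling replaces the fixed block schedule with one that adapts to load imbalance.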
About this article
Min, SJ., Basumallik, A. & Eigenmann, R. Optimizing OpenMP Programs on Software Distributed Shared Memory Systems. International Journal of Parallel Programming 31, 225–249 (2003). https://doi.org/10.1023/A:1023090719310