Abstract
OpenSHMEM provides an interface for asynchronous, one-sided communication operations on data stored in a partitioned global address space. While communication in this model is efficient, synchronization must currently be achieved either through collective barriers or through one-sided updates of sentinel locations in the global address space. These mechanisms can over-synchronize or require additional communication operations, respectively, leading to high overheads. We propose a SHMEM extension that utilizes capabilities present in most high performance interconnects (e.g., communication events) to bundle synchronization information together with communication operations. Using this approach, we improve ping-pong latency for small messages by a factor of two, and demonstrate significant improvement to synchronization-heavy communication patterns, including all-to-all and pipelined parallel stencil communication.
Keywords
- Processing Element
- Communication Operation
- Barrier Synchronization
- Parallel Programming Model
- High Performance Computing Application
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Dinan, J., Cole, C., Jost, G., Smith, S., Underwood, K., Wisniewski, R.W. (2014). Reducing Synchronization Overhead Through Bundled Communication. In: Poole, S., Hernandez, O., Shamis, P. (eds) OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools. OpenSHMEM 2014. Lecture Notes in Computer Science, vol 8356. Springer, Cham. https://doi.org/10.1007/978-3-319-05215-1_12
Print ISBN: 978-3-319-05214-4
Online ISBN: 978-3-319-05215-1