
1 Introduction

In this paper, we describe how we ported the LLVM OpenMP device runtime library to OpenMP 5.1, using only minor extensions not yet available in the standard. The OpenMP device runtime provides OpenMP functionality to user and implementation code on the device, which in this context means on the GPU. As an example, it provides the OpenMP API routines as well as routines utilized by the compiler, e.g., for worksharing loops.

Our work replaced the original LLVM OpenMP device runtime implemented in CUDA to allow for code reusability between different targets, e.g., AMD and Nvidia. It further lowers the barrier to entry for future targets, which only need to provide a few target specific intrinsics and minimal glue code.

The OpenMP device runtime library can now be shipped with pre-built LLVM packages because only LLVM/Clang is needed to build it; neither a target device nor vendor SDKs are required, which lowers the barrier to entry for OpenMP offloading. This work is a proof of concept for writing device runtime libraries in OpenMP, with functionality and performance identical to that available from CUDA or HIP compiled with the same LLVM version.

The remainder of the paper is organized as follows. We discuss background and motivation in Sect. 2. Section 3 presents our approach, which is followed by an evaluation in Sect. 4. Finally, we conclude the paper in Sect. 5.

2 Background

When compiling from one language to another, there are usually constructs that are straightforward in the former and complicated or verbose in the latter. For example, a single OpenMP construct #pragma omp parallel for is lowered into a non-trivial amount of newly introduced code in the application, including calls into a runtime that provides certain functionality, like dividing loop iterations. In this work, the input is OpenMP target offloading code, that is, the OpenMP target directive and the associated code, and the output is ultimately Nvidia’s PTX or AMD’s GCN assembler.

2.1 Device Runtime Library

The LLVM OpenMP device runtime library contains the various functions the compiler needs to implement OpenMP semantics when the target is an Nvidia or AMD GPU. The original implementation in LLVM was in CUDA [8], compiled with Nvidia’s NVCC to PTX assembler, which was linked with the application code to yield a complete program. The source was later adapted to compile alternatively as HIP, which is close enough to CUDA syntax for the differences to be worked around with macros. Prior to this work, the device runtime hence comprised a common part and target dependent parts. In order to let the target dependent compiler recognize the code, target dependent keywords (such as __device__ and __shared__ in CUDA) are replaced with macros (DEVICE and SHARED), and the header where these macros are defined is included depending on the target. The basic idea is visualized in Listing 1.

Listing 1. Target dependent keywords such as __device__ and __shared__ hidden behind the DEVICE and SHARED macros.
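A rough sketch of this macro scheme follows; the header layout and identifier names are illustrative and do not reproduce the actual LLVM sources of Listing 1.

    // nvptx/target_impl.h (illustrative): CUDA spellings of the keywords.
    #define DEVICE __device__
    #define SHARED __device__ __shared__
    // An amdgcn/target_impl.h defines the same macros with the HIP spellings,
    // and the build includes the header that matches the target.

    // Common runtime code, written once and compiled for either target.
    DEVICE int GetThreadId();
    SHARED unsigned usedSlotIdx;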

This strategy works: for Nvidia offloading the source is compiled as CUDA, for AMDGPU offloading it is compiled as HIP. Both produce LLVM bitcode, but with different final targets, Nvidia’s PTX and AMD’s GCN respectively. However, for a programming model that does not adequately resemble CUDA, such as OpenCL or Intel’s DPC++ [3], the approach becomes less straightforward.

What’s more, this setup requires vendor SDKs (such as the CUDA Toolkit or the ROCm developer tools) to compile the device runtime, which creates a barrier for the package managers of Linux distributions. In practice this means that the LLVM OpenMP runtime installed from Linux distributions does not support offloading out of the box, because the package would require a dependence on the CUDA or ROCm packages, among other things.

2.2 Compilation Flow of OpenMP Target Offloading in LLVM/Clang

The compilation of an OpenMP program with target offloading directives consists of the following two passes (as shown in Fig. 1):

  • Host Code Compilation. This pass includes the regular compilation of code for the host and the recognition of OpenMP offloading code as preparation for the second pass. Offloading regions are replaced by calls to the corresponding host runtime library functions (e.g., __tgt_target for the target directive in LLVM OpenMP) with suitable arguments, such as the kernel function identifier, the base pointers of the captured variables, and the number of kernel function arguments; a conceptual sketch of this lowering follows the list. In addition, a fallback host version of the kernel function is emitted in case target offloading fails at runtime.

  • Device Code Compilation. This pass utilizes the recognized OpenMP target offload regions, as well as related functions and captured variables, and emits target dependent device code. This includes one entry kernel function per target region, global variables (potentially in different address spaces), and device functions, as well as some target dependent metadata. As part of this compilation, the OpenMP device runtime library is linked into the user code as an LLVM bitcode library (dev.rtl.bc in Fig. 1).
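To make the host side of this transformation more concrete, the sketch below shows a trivial target region and, in comments, roughly what the host compilation pass turns it into; the libomptarget entry point and its argument list are heavily simplified, and the fallback function name is purely illustrative.

    // User code containing a target region.
    void vadd(double *A, double *B, double *C, int N) {
    #pragma omp target teams distribute parallel for map(to: A[:N], B[:N]) map(from: C[:N])
      for (int i = 0; i < N; ++i)
        C[i] = A[i] + B[i];
    }

    // Conceptually, the host pass replaces the region with a runtime call like
    //
    //   int rc = __tgt_target(device_id, kernel_id, num_args,
    //                         args_base, args, arg_sizes, arg_types);
    //   if (rc != 0)
    //     vadd_host_fallback(A, B, C, N);   // emitted host fallback version
    //
    // while the device compilation pass emits the corresponding GPU kernel.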

Fig. 1. Compilation flow of an OpenMP program with target offloading.

In addition to the target construct (as well as its combined variants), OpenMP provides the declare target directive, which specifies that variables and functions are mapped onto a target device and should hence be usable in device code. The declare variant directive can be used to specify a context, e.g., the compilation for a specific target, in which a specialized function variant should replace the base version.

2.3 Motivation

While the OpenMP device runtime library can be implemented in any language, it should be linked into the application in LLVM bitcode format for performance reasons. This setup, shown in Fig. 1, allows the runtime to be optimized together with the application, effectively specializing a generic runtime as needed.

Given that the base language is irrelevant as long as we can compile to LLVM bitcode, OpenMP comes to mind as a portable and performant way to write code for different accelerators. As almost the entire device runtime can be interpreted as C++ code rather than a CUDA or HIP code base, compiling it as OpenMP is feasible, in particular because LLVM/Clang is already a working C++ and OpenMP compiler.

Since OpenMP 5.1 all conceptually necessary building blocks are present in the language specification:

  • The declare target directive can be used to compile for a device, hence to generate LLVM bitcode that is targeting Nvidia’s PTX or AMD’s GCN. As we do not need a host version at all, we can even use the LLVM/Clang flag -fopenmp-is-device to invoke only the device compilation pass described in Sect. 2.2.

  • The declare variant directive can be used if a target requires a function implementation or global variable definition different from the default.

  • The allocate directive provides access to the different kinds of memory on the GPU.

For an additional target architecture, the work done in the compiler backend to emit code for that architecture allows one to retarget an OpenMP implemented device runtime almost without any additional effort. The incremental development cost is reduced from (re)implementing the device runtime in a language that can be compiled to the new architecture to providing a few declare variant specializations.

Finally, if the port uses compiler intrinsics instead of CUDA or HIP functions for the small target dependent part, it can be compiled without a vendor specific SDK present. This unblocks shipping offloading as part of Linux distributions.

3 Implementation

In this section, we describe the new LLVM OpenMP device runtime implemented with OpenMP 5.1. First, we talk about the common part, and then discuss how target dependent parts are implemented and why extensions were necessary. Only AMD and Nvidia platforms are discussed as other GPU architectures cannot be targeted by the community LLVM version at this time.

3.1 Common Part

Device Code

Using the declare target directive around all source files causes all functions and data to be emitted for the target device. The macros used to indicate that functions or globals are for the device, as shown in Listing 1, are no longer needed.

Global Shared Variables

The implementation of the device runtime maps an OpenMP team to a thread block on the target device. Therefore, a shared variable visible to all threads in the same thread block is equivalent to a variable that can be accessed within the same OpenMP team. The allocate directive specifies how to allocate variables in different memory spaces. Used with the allocator(omp_cgroup_mem_alloc) clause, it lets us place global variables in local shared memory, the equivalent of the CUDA __shared__ shown in Listing 1.

In contrast to shared CUDA or HIP variables, C++ specifies that global variables are default initialized. While we could technically do this for global shared variables defined with OpenMP, it is not supported by LLVM/Clang at this time. Furthermore, performance would likely suffer, as the device runtime is designed to initialize these variables explicitly on demand. To this end, we extended LLVM/Clang with a variable attribute for this work: loader_uninitialized [1]. The effect is that annotated variables do not have a default initialized value but instead are uninitialized, just as CUDA and HIP shared variables are.

Listing 2 shows device code and global shared variable declaration as it is used in our OpenMP device runtime.

Listing 2. Device code and global shared variable declaration in the OpenMP device runtime.
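A minimal sketch of the pattern described above, assuming a team-local counter; the variable and function names are illustrative and do not reproduce Listing 2 verbatim.

    #pragma omp declare target

    // Team-local state, placed in GPU shared memory via the cgroup allocator and
    // left uninitialized, like a CUDA/HIP __shared__ variable would be.
    __attribute__((loader_uninitialized)) static unsigned ParallelLevel;
    #pragma omp allocate(ParallelLevel) allocator(omp_cgroup_mem_alloc)

    // Device function emitted for the target, usable from the rest of the runtime.
    int GetParallelLevel() { return ParallelLevel; }

    #pragma omp end declare target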

Atomic Operations

The device runtime uses five atomic operations: add, inc, max, exchange, and cas, implemented in the target dependent parts with LLVM/Clang builtin functions.

OpenMP 5.1 [4] introduces the compare clause, which supports conditional update statements. When combined with the capture clause, all of these atomic operations except inc can be implemented via OpenMP, as shown in Listing 3. We implemented support for the compare clause and its combination with the capture clause in LLVM/Clang, but it has not been merged into the community version yet. With the updated requirements for flush, which we also implemented for this work, our OpenMP versions of the atomic operations generate LLVM-IR that is identical to the original target dependent implementation via compiler intrinsics.

Listing 3. Atomic operations expressed with the OpenMP atomic compare and capture clauses.
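For illustration, a compare-and-swap and an atomic max with capture can be expressed with the new clauses roughly as follows; this is a sketch of the pattern, not the literal code of Listing 3.

    // Atomic compare-and-swap: if *Addr equals Expected, store Desired;
    // the previous value is captured into Old either way.
    unsigned AtomicCAS(unsigned *Addr, unsigned Expected, unsigned Desired) {
      unsigned Old;
    #pragma omp atomic compare capture
      { Old = *Addr; if (*Addr == Expected) { *Addr = Desired; } }
      return Old;
    }

    // Atomic max, capturing the previous value.
    unsigned AtomicMax(unsigned *Addr, unsigned Val) {
      unsigned Old;
    #pragma omp atomic compare capture
      { Old = *Addr; if (*Addr < Val) { *Addr = Val; } }
      return Old;
    }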

The missing atomic operation is inc. According to the CUDA specification [2], inc reads the old value v of the atomic variable x, updates x to ((v >= limit) ? 0 : (v + 1)), where limit is the second operand, and returns v. This atomic operation cannot be represented in a form that OpenMP 5.1 supports because OpenMP 5.1 requires that the order operation of a conditional update be either < or >, and the alternative statement of the conditional expression must be x itself. Therefore, we still keep it in the target dependent part, implemented with LLVM intrinsics as shown in Listing 4.

3.2 Target Specific Part

In the original runtime, target dependent global functions and variables are declared in a header and implemented in target dependent source files which are only compiled for the specific target, either as CUDA or HIP. A drawback of this method is that the creation of a device runtime for a new target might require us to remove a function from the common part and insert it into the target specific part if the existing (common) implementation is not suited for the new device.

Since OpenMP 5.0, the declare variant directive declares a specialized variant of a base function and specifies the context in which that specialized variant is used. It supports various context selectors via the match clause, one of which is the device selector. For example, with match(device = {arch(arch_name)}), the code wrapped in a begin/end declare variant region is only generated if the target architecture matches arch_name.

Listing 4 shows how the atomic inc function is implemented with target dependent compiler intrinsics selected via the begin/end declare variant directive for both Nvidia and AMD GPU targets.

Note that we use the match_any extension for Nvidia platforms as we support two distinct architectures, nvptx and nvptx64, but we do not want to distinguish between them in the device runtime. While this could be handled by duplicating the code, our new context selector extension changes the semantics of the matching to produce a match if any architecture in arch(nvptx, nvptx64) is targeted; by default, a match would require all listed architectures to be targeted. In addition to match_any we extended LLVM/Clang with other useful context selector extensions, e.g., match_none and allow_templates.

Listing 4. Target dependent implementation of atomic inc with compiler intrinsics, selected via begin/end declare variant.
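The pattern can be sketched as follows; the wrapper name is illustrative, only the Nvidia variant is spelled out, and an analogous region guarded by match(device = {arch(amdgcn)}) provides the AMD implementation via the corresponding AMDGPU builtin.

    unsigned AtomicInc(unsigned *Address, unsigned Val);   // base declaration

    #pragma omp begin declare variant match(device = {arch(nvptx, nvptx64)}, implementation = {extension(match_any)})
    unsigned AtomicInc(unsigned *Address, unsigned Val) {
      // Clang NVPTX builtin lowering to the PTX atomic increment instruction.
      return __nvvm_atom_inc_gen_ui(Address, Val);
    }
    #pragma omp end declare variant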

Other target dependent functions are required to handle synchronization, thread hierarchy, etc. These are implemented via compiler intrinsics, function calls to the corresponding native runtime library, or inline assembly.
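As a simple example of this category, a query for the thread index within the thread block can be mapped directly onto the respective compiler builtins; the wrapper name is again illustrative.

    unsigned GetThreadIdInBlock();   // base declaration used by common code

    #pragma omp begin declare variant match(device = {arch(nvptx, nvptx64)}, implementation = {extension(match_any)})
    unsigned GetThreadIdInBlock() { return __nvvm_read_ptx_sreg_tid_x(); }
    #pragma omp end declare variant

    #pragma omp begin declare variant match(device = {arch(amdgcn)})
    unsigned GetThreadIdInBlock() { return __builtin_amdgcn_workitem_id_x(); }
    #pragma omp end declare variant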

4 Evaluation

In this section, we evaluate our proposed method in three ways: code comparison, functional testing, and performance evaluation.

4.1 Code Comparison

The previous implementation compiled CUDA to LLVM-IR and HIP to LLVM-IR, while our proposed method compiles OpenMP to LLVM-IR for both platforms. The accuracy of the port to OpenMP was assessed by comparing the emitted LLVM-IR of the library before and after changing over to OpenMP. If the text forms were identical, we would be certain the language change made no difference. This was not quite the case. The differences were in semantically unimportant metadata, symbol name mangling for variant functions, and the order of inlining as preferred by the language front end, which had minor reordering effects on PTX and GCN generation.

4.2 Functional Testing

There are a number of OpenMP test suites and applications in use for checking the behaviour of the compiler, including SOLLVE V&V [7] and Ovo [5]. All ran identically with the new OpenMP device runtime as they had with the previous device runtime.

4.3 Performance Evaluation

Systems Configuration. We evaluate the performance of our method experimentally on the Summit supercomputer. Each Summit node contains two IBM POWER9 processors and six Nvidia Volta V100 GPUs (only one was used in this paper). CUDA 10.1.243 was used, which is the version loaded by default.

Benchmarks. The SPEC ACCEL benchmark suite V1.3 was used to evaluate the new device runtime. Because support for Fortran is still in progress, we chose the benchmarks written in C. There are 15 OpenMP enabled benchmarks in SPEC ACCEL. Seven of them are in C, namely 503.postencil, 504.polbm, 514.pomriq, 552.pep, 554.pcg, 557.pcsp, and 570.pbt. 557.pcsp could not be compiled; therefore, we only ran the other six benchmarks. We also chose a C++ proxy application, miniQMC [6].

The -O2 compiler flag was used when compiling the benchmarks and the application. Each test case was executed five times, and the execution times were averaged. miniQMC was measured through the miniqmc_sync_move benchmark executed as follows: miniqmc_sync_move -g "2 2 1".

Results. Figure 2 compares the execution time when the original device runtime is used with the execution time obtained using our proposed new device runtime. We can see that the execution times are almost identical, and for those cases where they are not the same, the difference is less than 1% and assumed to be noise.

Fig. 2. Comparison between the execution time of the original device runtime and that of our proposed new device runtime on the Nvidia platform.

The proxy application benchmark miniqmc_sync_move contains two target regions, evaluate_vgh and evaluateDetRatios. They are executed multiple times. Table 1 shows the profiling results (execution time) of each target region from Nvidia’s profiler nvprof. There is no performance difference between the two versions.

Table 1. Comparison of execution time of the two target regions in miniqmc_sync_move on Nvidia platform.

All the results above demonstrate that our proposed portable OpenMP device runtime can provide the same performance as the current CUDA-like version on the Nvidia platform. Based on the code comparison, functional testing and some AMD internal performance testing results, the portable runtime is believed to show no performance change from its HIP predecessor either.

5 Conclusions and Future Work

OpenMP works well as a language for implementing GPU-only code libraries. The direct support for memory allocators and the precise dispatch through declare variant are clear advantages over C++. While minimal compiler modifications were required to match the CUDA and HIP semantics to the fullest, we expect those extensions to be incorporated into the OpenMP standard over time.

OpenMP is especially suitable as the vehicle for implementing an OpenMP runtime library since the main prerequisite is an OpenMP compiler, which needs to be implemented for all targets in any case. Since the library ships with the LLVM repository, it can be built by any distribution that has built Clang. Vendor SDKs or compilers are no longer required.

Since the host and device runtime libraries can be built as part of LLVM, we will coordinate with Linux distribution developers to ensure that people who install the distribution LLVM package onto a system that has a target device and driver available will be able to get this working “out of the box”.