1 Introduction

The Multiply and Accumulate (MAC) operation consumes three inputs a, b, c, and computes a = a + b ⋅ c. It is a fundamental step in many floating-point and integer computations, for example in dot products, matrix multiplications, and modular arithmetic. Modern processors offer instructions for performing MAC over floating-point inputs, e.g., AMD Bulldozer’s Fused Multiply-Add (FMA) and Intel’s Single Instruction Multiple Data (SIMD)-FMA (available since the Haswell microarchitecture). Here, we focus on Intel’s AVX512IFMA instructions [1], which compute MAC on unsigned integers.

The AVX512IFMA instructions are already defined, but processors that support them are not yet widely available. However, a demonstration of their capabilities is already given in [11], showing a potential 2x speedup for 1024-bit integer multiplication (and more for larger operands). Another example is [6], where we showed a potential 6x speedup over OpenSSL’s Montgomery Multiplication (MM). Additional code examples [5, 10], contributed to OpenSSL, include optimized 1024/1536/2048-bit MM. These demonstrations did not optimize modular squaring specifically; rather, they used a multiplication routine for squaring as well. Here, we show how to use the AVX512IFMA instructions for optimizing modular squaring. Our developments build on top of the Almost Montgomery Square (AMS) optimization of [7] (other squaring methods can be found in [4, 9, 13]).

The paper is organized as follows. Section 2 discusses some preliminaries. Section 3 deals with implementing the AMS algorithm with the AVX512IFMA instructions. In Sect. 4, we propose a potential improvement to the definition of AVX512IFMA. Finally, we show our experimental results in Sect. 5, and provide our conclusions in Sect. 6.

2 Preliminaries and Notation

Hereafter, we use lower case letters to represent scalars (64-bit integers), and upper case letters to represent 512-bit wide registers. We denote the zero extension of a 64-bit variable x by ZE(x).

2.1 The AVX512IFMA Instructions

Intel’s Software Developer Manual [1] introduces two instructions, collectively called AVX512IFMA: VPMADD52LUQ and VPMADD52HUQ. Their functionality is illustrated in Algorithm 1.1. These instructions multiply eight pairs of 52-bit unsigned integers residing in wide 512-bit registers, produce the low (VPMADD52LUQ) or high (VPMADD52HUQ) halves of the 104-bit products, and add the results to 64-bit accumulators (i.e., the SIMD elements of the destination register). They are designed to support big-number multiplication, where the inputs are stored in a “redundant representation” using radix \(2^{52}\) (as explained in [8]).

Algorithm 1.1 DST = VPMADD52(A,B,C) [1]
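In scalar terms, the per-element semantics just described can be modeled in C as follows (our own sketch, one 64-bit element at a time; the real instructions process all eight elements of the wide registers in parallel):

```c
#include <stdint.h>

/* Scalar model of one 64-bit element of DST = VPMADD52[L/H]UQ(A, B, C). */
static uint64_t vpmadd52luq(uint64_t a, uint64_t b, uint64_t c) {
    unsigned __int128 t = (unsigned __int128)(b & ((1ULL << 52) - 1)) *
                          (c & ((1ULL << 52) - 1));  /* 104-bit product  */
    return a + (uint64_t)(t & ((1ULL << 52) - 1));   /* add low 52 bits  */
}

static uint64_t vpmadd52huq(uint64_t a, uint64_t b, uint64_t c) {
    unsigned __int128 t = (unsigned __int128)(b & ((1ULL << 52) - 1)) *
                          (c & ((1ULL << 52) - 1));
    return a + (uint64_t)(t >> 52);                  /* add high 52 bits */
}
```

In compilers, the vector forms of these operations are exposed as the intrinsics _mm512_madd52lo_epu64 and _mm512_madd52hi_epu64.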

The AVX512IFMA instructions build on the existence of other instructions, called SIMD-FMA, which are designed to support IEEE standard floating-point arithmetic [12]. The SIMD-FMA instructions handle double-precision floating-point numbers (x[63:0]), where the bits are viewed as: (a) fraction x[51:0] (a 53-bit significand, of which only 52 bits are explicitly stored); (b) exponent x[62:52]; (c) sign bit x[63].

2.2 Almost Montgomery Multiplication

MM is an efficient technique for computing modular multiplications [14]. Let t be a positive integer, k an odd modulus, and 0 ≤ a, b < k integers. We denote the MM by MM(a, b) = a ⋅ b ⋅ \(2^{-t}\) (mod k), where \(2^t\) is the Montgomery parameter. A variant of MM, called Almost Montgomery Multiplication (AMM) [7], is defined as follows. Let k and t be defined as above, and let 0 ≤ a, b < B be integers. Then AMM(a, b) is an integer U that satisfies: (1) U (mod k) = a ⋅ b ⋅ \(2^{-t}\) (mod k); (2) U < B.

The advantage of AMM over MM is that the former does not require a (conditional) “final reduction” step. This allows using the output of one invocation as the input to a subsequent invocation. The relation between AMM and MM is the following. If 0 ≤ a, b < B, RR = \(2^{2t}\) (mod k), a′ = AMM(a, RR), b′ = AMM(b, RR), u′ = AMM(a′, b′) and u = AMM(u′, 1), then u = a ⋅ b (mod k).
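To make this composition concrete, the following toy, single-word C sketch uses t = 32, a small odd modulus, and plain MM (i.e., with the final reduction that AMM avoids); the modulus, all names, and the Newton iteration for the precomputed constant are our own illustration:

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t K;   /* odd modulus k            */
static uint32_t K0;  /* K0 = -k^(-1) mod 2^32    */

/* MM(a, b) = a * b * 2^(-32) mod k (Montgomery reduction). */
static uint32_t mm(uint32_t a, uint32_t b) {
    uint64_t t = (uint64_t)a * b;
    uint32_t q = (uint32_t)t * K0;             /* q = -t * k^(-1) mod 2^32 */
    uint64_t u = (t + (uint64_t)q * K) >> 32;  /* exact division by 2^32   */
    return (uint32_t)(u >= K ? u - K : u);     /* the final reduction      */
}

int main(void) {
    K = 1000003;                     /* a small odd modulus               */
    uint32_t inv = K;                /* Newton iteration for k^(-1):      */
    for (int i = 0; i < 4; i++)      /* 3 correct bits, doubled per round */
        inv *= 2 - K * inv;
    K0 = (uint32_t)0 - inv;
    uint32_t rr = (uint32_t)(((uint64_t)1 << 32) % K);  /* 2^32 mod k     */
    rr = (uint32_t)(((uint64_t)rr * rr) % K);           /* RR = 2^64 mod k */
    uint32_t a = 123456, b = 654321;
    uint32_t ap = mm(a, rr), bp = mm(b, rr);   /* a' and b'               */
    uint32_t u  = mm(mm(ap, bp), 1);           /* u = MM(u', 1)           */
    printf("%u (expected %llu)\n", u,
           (unsigned long long)(((uint64_t)a * b) % K));
    return 0;
}
```

The printed value matches a ⋅ b (mod k), as the identity above predicts.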

3 Implementing AMS with AVX512IFMA

One of the common squaring algorithms [4] is the following. Let \(A=\sum _{i=0}^{n}{B^i a_i}\) be an n-digit integer in base B, with digits \(a_i \geq 0\). Then,

$$\displaystyle \begin{aligned} \begin{aligned} A^2 &= \sum_{i=0}^{n}\sum_{j=0}^{n}{B^{i+j} a_i a_j} \\ & = \sum_{i=0}^{n}{B^{2i} a_i^2} + 2\sum_{i=0}^{n}\sum_{j=i+1}^{n}{B^{i+j} a_i a_j} \end{aligned} \end{aligned} $$
(1.1)

where the last multiplication by 2 can be carried out by a series of left-shift operations [9]. This saves about half of the single-precision multiplications (compared to a regular multiplication). An additional improvement is achieved by vectorization. For example, [8] shows squaring implementations that use Intel’s Advanced Vector Extensions (AVX) and AVX2 instructions. In these implementations, integers are stored in a “redundant representation” with radix \(B = 2^{28}\) (each of the n digits is placed in a 32-bit container, padded from above with 4 zero bits). Each of the AVX2 256-bit wide registers (ymm) can hold eight such 32-bit containers. This allows for (left) shifting 8 digits in parallel, without losing their carry bits.
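As a scalar illustration of Eq. 1.1 (our own sketch; the 128-bit column accumulators are a convenience of the model, not part of the vectorized implementations):

```c
#include <stdint.h>

/* Square an n-digit number (radix 2^52, one digit per 64-bit word) into
 * 2n column sums, per Eq. 1.1: n diagonal products plus the doubled
 * off-diagonal products, i.e., about half the multiplications of a
 * regular n x n digit multiplication. */
static void sqr_columns(const uint64_t *a, int n, unsigned __int128 *col) {
    for (int i = 0; i < 2 * n; i++) col[i] = 0;
    for (int i = 0; i < n; i++) {
        col[2 * i] += (unsigned __int128)a[i] * a[i];   /* B^(2i) a_i^2      */
        for (int j = i + 1; j < n; j++)                 /* 2 B^(i+j) a_i a_j */
            col[i + j] += ((unsigned __int128)a[i] * a[j]) << 1;
    }
    /* A carry-propagation pass then returns col[] to radix 2^52. */
}
```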

Algorithm 1.2 x = AMS52(a, m, k0)

Algorithm 1.2 describes an implementation of AMS = AMM(a, a) that uses the AVX512IFMA instructions. Let the input (a), the modulus (m), and the result (x) be n-digit integers in radix \(B = 2^{52}\), where each digit is placed in a 64-bit container (padded with 12 zero bits from above). Let \(z=\left \lceil n/8 \right \rceil \) be the total number of wide registers needed for holding an n-digit number, and denote \(k_0 = -m^{-1} \pmod{2^{52}}\). The final step of Algorithm 1.2 returns the result to the radix \(B = 2^{52}\) format by rearranging the carry bits (a scalar sketch of this step is given below). An illustration of a simple AMS flow is given in Fig. 1.1, which shows how ∼20% of the VPMADD52 calls (left as blank spaces in the figure) are saved, compared to an AMM. The algorithm applies the left-shift optimization of [9] to the AVX512IFMA AMM implementation of [11]. This can be done either through Eq. 1.1 (perform all the MAC calculations and then shift the result left by one), or according to:

$$\displaystyle \begin{aligned} A^2 = \sum_{i=0}^{n}{B^{2i} a_i^2} + \sum_{i=0}^{n}\sum_{j=i+1}^{n}{B^{i+j} a_i a^{\prime}_j} \end{aligned} $$
(1.2)

where a′ = a ≪ 1. An efficient implementation of the first approach requires accommodating a, m, and x in wide registers (not in memory), while an implementation of the second approach requires accommodating a′ in wide registers as well. Consequently, AVX512, which has only 32 wide registers, can hold n-digit integers up to n ≤ 85 with the first approach, or up to n ≤ 64 with the second approach. For example, 4096-bit modular squaring (part of a 4096-bit exponentiation, e.g., for Paillier encryption) has n = 80-digit operands (written in radix \(B = 2^{52}\)). It requires 40 wide registers with the second approach (more than the 32 available). With the first approach, only 30 of the 32 available wide registers are needed. This situation seems better, but in practice, it is still not good enough.
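For reference, the carry rearrangement mentioned above (the final step of Algorithm 1.2, returning to radix \(2^{52}\)) can be modeled in scalar C as follows (our own sketch; the vectorized algorithm performs it on eight digits at a time, and we assume the running totals fit in 64-bit containers):

```c
#include <stdint.h>

/* Normalize back to radix 2^52: each 64-bit container may hold up to 12
 * extra (carry) bits above its 52-bit digit; push them into the next digit. */
static void normalize52(uint64_t *x, int n) {
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t v = x[i] + carry;       /* assumed not to overflow 64 bits */
        x[i]  = v & ((1ULL << 52) - 1);  /* keep the low 52 bits            */
        carry = v >> 52;                 /* at most 12 bits of carry        */
    }
    /* For an in-range AMM/AMS result the final carry is zero. */
}
```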

Fig. 1.1 Flow illustration of x = SQR(a, m, k0), where a, m and x are 16-digit operands, each accommodated in two zmm registers

Performing left shifting of an n-digit number requires some extra wide registers, which are not necessarily available under the above two approaches. Thus, we propose Algorithm 1.2, which is based on the following identity:

$$\displaystyle \begin{aligned} A^2 = \sum_{i=0}^{n}{B^{2i} a_i^2} + \sum_{i=0}^{n}\sum_{j=i+1}^{n}{2 (B^{i+j} a_i a_j)} \end{aligned} $$
(1.3)

Here, the left shifts are performed on-the-fly, which frees some wide registers for other operations.

Identifying an additional bottleneck

On-the-fly left shifting can be implemented in three ways, but unfortunately, none of them goes along well with the AVX512IFMA architecture. The first alternative is to multiply, accumulate, and then shift the result. This may double-shift some of the previously accumulated data. The second alternative is to shift one of VPMADD52’s input operands. This may lead to a set carry bit in position 53, which would be (erroneously) ignored during the multiplication (see Algorithm 1.1). The third alternative is to split the MAC operation and inject the shift in between. This is not feasible with the atomic operation of VPMADD52, but it can be resolved by performing the Multiply-Shift-Accumulate operation in two steps, with an extra temporary (zeroed) wide register. Indeed, the MulA[L/H]Part routines of Algorithm 1.2 (steps 4, 5) execute this flow.
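In intrinsics form, one such two-step Multiply-Shift-Accumulate looks roughly as follows (a sketch assuming a compiler with AVX512IFMA support, e.g., compiled with -mavx512ifma; the helper name is ours):

```c
#include <immintrin.h>

/* Low-half Multiply-Shift-Accumulate: multiply into a zeroed temporary,
 * double on-the-fly (the factor 2 of Eq. 1.3), then accumulate. */
static inline __m512i mul_shift_acc_lo(__m512i acc, __m512i ai, __m512i aj) {
    __m512i t = _mm512_madd52lo_epu64(_mm512_setzero_si512(), ai, aj);
    t = _mm512_slli_epi64(t, 1);      /* a 52-bit value doubled fits in 53 bits */
    return _mm512_add_epi64(acc, t);  /* accumulate into 64-bit containers      */
}
```

The high half (VPMADD52HUQ) is handled analogously.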

4 Is Using Radix \(2^{51}\) Better?

In this section, we discuss the selection of the radix. The AVX512IFMA instructions leverage hardware that is needed anyhow for the FMA unit (floating-point operations need a 53-bit multiplier that produces the 106-bit product of two significands). Obviously, given AVX512IFMA, it is natural to work with radix \(B = 2^{52}\). Using a larger radix (e.g., \(B = 2^{58}\)) could be better in theory, but would incur too many costly conversions to allow for using VPMADD52; we also note that no native SIMD instructions for a larger radix are available. A smaller radix (e.g., \(2^{51}\)) is, however, a possible choice. It allows cutting about half of the serialized instructions in steps 3–5 of MulA[L/H]Part, by left shifting one of the operands before the multiplication.

Algorithm 1.3 is a modification of Algorithm 1.2, operating in radix \(2^{51}\). While it avoids the shift operations before VPMADD52LUQ, it still needs to perform the shifting before the VPMADD52HUQ instruction. For example, let a, b, \(c_1\), \(c_2\) be 51-bit integers. After performing \(c_1\) = VPMADD52LUQ(0, a, b) = (a ⋅ b)[51:0] and \(c_2\) = VPMADD52HUQ(0, a, b) = (a ⋅ b)[102:52], \(c_1\) and \(c_2\) are no longer in (pure) radix \(2^{51}\). Propagating the carry bit of \(c_1\) can be delayed until step 22 of Algorithm 1.3. By contrast, \(c_2\) must be shifted prior to the accumulation step. As we show in Sect. 5, Algorithm 1.3 does not lead to faster squaring on the current architecture. This suggests a possible improvement to the architectural definition.
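The need for this shift can be verified with a small scalar computation (our own check, phrased in terms of the scalar semantics of Sect. 2.1):

```c
#include <assert.h>
#include <stdint.h>

/* For 51-bit a and b, the product p = a*b has at most 102 bits and splits
 * in radix 2^51 as p = lo51 + 2^51 * hi51. VPMADD52LUQ returns p[51:0]
 * (lo51 plus one extra bit, whose propagation can be delayed), while
 * VPMADD52HUQ returns p[102:52] = hi51 >> 1; hence the left shift by one
 * before accumulation. */
static void check(uint64_t a, uint64_t b) {
    unsigned __int128 p = (unsigned __int128)a * b;
    uint64_t c1 = (uint64_t)(p & ((1ULL << 52) - 1));  /* VPMADD52LUQ(0,a,b) */
    uint64_t c2 = (uint64_t)(p >> 52);                 /* VPMADD52HUQ(0,a,b) */
    assert(p == (((unsigned __int128)c2 << 1) << 51) + c1);
}
```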

4.1 A Possible Improvement for AVX512IFMA

Algorithm 1.3 offers better parallelization than Algorithm 1.2, but it still includes serialized steps (e.g., the function MulHighPart). A completely parallelized algorithm requires hardware support. To this end, we suggest a new instruction, which we call Fused Multiply-Shift-Add (FMSA), described in Algorithm 1.4. It shifts the multiplication result by an immediate value (imm8) before accumulating it. This instruction can be based on the same hardware that supports FMA (just like AVX512IFMA). Note that when imm8 = 0, this instruction is exactly VPMADD52HUQ.

Algorithm 1.3 x = AMS51(a, m, k0)

Algorithm 1.4 DST = FMSA(A,B,C,imm8)
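A scalar model of one SIMD element of the proposed FMSA, consistent with the description above (the instruction itself is hypothetical, so this model is our own):

```c
#include <stdint.h>

/* DST = FMSA(A, B, C, imm8), one 64-bit element: the high half of the
 * 52x52-bit product is shifted left by imm8 before being accumulated.
 * With imm8 == 0, this is exactly VPMADD52HUQ. */
static uint64_t fmsa(uint64_t a, uint64_t b, uint64_t c, uint8_t imm8) {
    unsigned __int128 t = (unsigned __int128)(b & ((1ULL << 52) - 1)) *
                          (c & ((1ULL << 52) - 1));
    return a + (((uint64_t)(t >> 52)) << imm8);
}
```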

5 Results

5.1 Results for the Current Architecture

This section provides our performance results. For this study, we wrote new optimized code for all the algorithms discussed above, and measured them with the following methodology.

At the time of writing, only a limited number of processors support VPMADD52, and we did not have access to them. Therefore, to predict the potential improvement on future Intel architectures, we used the Intel Software Development Emulator (SDE) [3]. This tool allows us to count the number of instructions executed during each of the tested functions. We marked the start/end boundaries of each function with “SSC marks” 1 and 2, respectively. This is done by executing “movl ssc_mark, %ebx; .byte 0x64, 0x67, 0x90” and invoking the SDE with the flags “-start_ssc_mark 1 -stop_ssc_mark 2 -mix -cnl”. The rationale is that a reduced instruction count typically indicates improved performance on a real processor (although the exact relation between the instruction count and the eventual cycle count is not known in advance).
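In C, the marker can be emitted with a short inline-assembly macro (a sketch assuming GCC/Clang inline-assembly syntax; amm52 is a hypothetical function under test):

```c
/* Emit an SDE "SSC mark": the byte sequence below is a prefixed NOP on
 * real hardware, and is recognized as a marker by the emulator. */
#define SSC_MARK(id)                                    \
    __asm__ __volatile__("movl %0, %%ebx\n\t"           \
                         ".byte 0x64, 0x67, 0x90"       \
                         : : "i"(id) : "ebx")

extern void amm52(void);  /* hypothetical function under test */

void measure(void) {
    SSC_MARK(1);          /* start boundary */
    amm52();
    SSC_MARK(2);          /* stop boundary  */
}
```

The binary is then run under the emulator, e.g., with “sde -start_ssc_mark 1 -stop_ssc_mark 2 -mix -cnl -- ./app”.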

Our measurements show that the overall number of instructions in our AMM and AMS implementations (in radix \(2^{52}\)) is almost identical. However, the number of occurrences of each instruction varies between the two algorithms. The most noticeable differences are in the VPADDQ, VPMADD52, VPSLLQ, and VPXORQ instructions. Let \(u_{AMS}\) (\(u_{AMM}\)) be the number of occurrences of the instruction u in the AMS (AMM) code, and let \(t_{AMM}\) be the total number of instructions in the AMM code. We write \(r_u = (u_{AMS} - u_{AMM}) / t_{AMM}\). Table 1.1 compares the \(r_u\) values for different instructions u and operand sizes. It shows that the reduction in the number of VPMADD52 instructions is paid for by an increase in the number of other instructions (e.g., VPADDQ, VPSLLQ, and VPXORQ).

Table 1.1 Values of \(r_u\) for different instructions u and different operand sizes

To assess the impact of this trade-off, we note that the latency of VPADDQ, VPSLLQ, and VPXORQ is 1 cycle, the throughput of VPADDQ and VPXORQ is 0.33 cycles, and the throughput of VPSLLQ is 1 cycle [2]. By comparison, we can expect the latency/throughput of a future VPMADD52 to be similar to VPMADDWD (i.e., 5/1) or to VFMA* (i.e., 4/0.5). It appears that trading one VPMADD52 for four other instructions (a worse trade-off than the one in our AMS implementation) could still be faster than the AMM implementation.

To study the effects at the larger scale of the modular exponentiation code, we define the following notation. Let \(u_{ModExpAMS}\) (\(u_{ModExp}\)) be the number of occurrences of the instruction u in the modular exponentiation code with (respectively, without) AMS, and let \(t_{ModExp}\) be the overall number of instructions in this code (without AMS). We write \(s_u = (u_{ModExpAMS} - u_{ModExp}) / t_{ModExp}\). Table 1.2 shows the values of \(s_u\).

Table 1.2 Values of \(s_u\) for different instructions (u) and different operand sizes

We use the following notation for evaluating the radix \(2^{51}\) technique. Let \(u_{AMS51}\)/\(u_{AMM51}\)/\(u_{ModExpAMS51}\) be the number of occurrences of the instruction u in the corresponding radix \(2^{51}\) code, and let \(t_{AMS}\) be the total number of instructions in the (radix \(2^{52}\)) AMS code. We write

$$\displaystyle \begin{aligned} w_u^{AMM} & = (u_{AMM} - u_{AMM51}) / t_{AMM} \\ w_u^{AMS} & = (u_{AMS} - u_{AMS51}) / t_{AMS} \\ w_u^{ModExp} & = (u_{ModExpAMS} - u_{ModExpAMS51}) / t_{ModExp}\end{aligned} $$

Table 1.3 shows the values of \(w_{u}^{AMM}\), \(w_{u}^{AMS}\), and \(w_{u}^{ModExp}\). Here, we see that the number of VPMADD52 instructions is almost unchanged, while the numbers of VPADDQ, VPXORQ, and VPSLLQ instructions increase. Therefore, we predict that implementations with operands in radix \(2^{51}\) will be slower than those in radix \(2^{52}\).

Table 1.3 Values of \(w_{u}^{AMM}\), \(w_{u}^{AMS}\), and \(w_{u}^{ModExp}\) for different instructions (u) and different operand sizes

5.2 A “what if” Question: The Potential of FMSA

Table 1.4 is similar to Table 1.3, but here we replace the instructions of MulHighPart with a single VPMADD52HUQ instruction, emulating our proposed FMSA instruction. In this setting, the additional VPADDQ, VPSLLQ, and VPXORQ instructions are no longer needed, and the full potential of our AMS can be seen.

Table 1.4 Values of \(w_{u}^{AMM}\), \(w_{u}^{AMS}\), and \(w_{u}^{ModExp}\), when using the FMSA instruction, for different instructions (u) and different operand sizes

6 Conclusion

This paper showed a method for using Intel’s new AVX512IFMA instructions to optimize software that computes AMS on modern processor architectures. Section 5 motivates our prediction that the proposed implementation would further speed up the modular exponentiation implementations described in [7]. As future research, we aim to measure our code on a real processor, once processors that support these instructions become widely available.

In addition, we analyzed the hypothetical benefit of using a different radix, \(2^{51}\) instead of \(2^{52}\). This can significantly improve the AMS algorithm (only) if a new instruction, which we call FMSA, is also added to the architecture. We note that FMSA requires only a small tweak over the current AVX512IFMA definition, and no new hardware.