1 Introduction

In 2007, Bernstein and Lange [7] introduced Edwards curves to cryptography. Their study and subsequent works in [2, 8, 9, 16] showed that the addition law on such curves is more efficient than all previously known formulas. Edwards curves have thus attracted great interest in applications that require fast elliptic curve arithmetic. Subsequently, the application of Edwards curves to pairing-based cryptography has been studied in several research papers [1, 13, 17, 20, 24], although pairing computation on Edwards curves remains slightly slower than on Weierstrass curves so far. However, in many pairing-based cryptosystems the most time-consuming operation is still the computation of scalar multiples aP of a point P, so it may be advantageous to use Edwards curves in these applications.

Pairings (bilinear maps) are arguably the most useful cryptographic tool to emerge in the 2000s. They were first introduced to cryptography in Joux's seminal paper [18] in 2000, which describes a tripartite (bilinear) Diffie–Hellman key exchange. Since then, pairing-based cryptosystems have enjoyed great success, with notable breakthroughs such as the first practical identity-based encryption scheme [4].

Miller's algorithm [21, 22] plays an important role in pairing computations on elliptic curves, and many papers are devoted to improving its efficiency. One approach is to use specific pairing-friendly elliptic curves to speed up Miller's algorithm [5, 10, 12]. Another is to reduce the length of the Miller loop by introducing variants of the Tate pairing, for example the Eta pairing [6], the Ate pairing [15], and optimal pairings [14, 23]. Taking a more generic approach, the studies in [3, 11, 19] improved the performance of computing pairings of any type (i.e., Weil, Tate, optimal pairings) on generic pairing-friendly elliptic curves.

Basically, Miller's algorithm is based on a rational function g of three points \(P_1, P_2, P_3\). This function is called the Miller function and has divisor \(div(g) = (P_1) + (P_2) - (P_3) - (\mathscr {O})\), where \(\mathscr {O}\) is a distinguished point (i.e., the neutral element of the group law). For curves in Weierstrass form, this function is defined to be the line passing through the points \(P_1\) and \(P_2\) divided by the vertical line passing through the point \(P_3\), where \(P_3 = [P_1 + P_2]\). On Edwards curves, finding such a point \(P_3\) is not as straightforward as on Weierstrass curves, because the Edwards equation has degree 4, i.e., a line meets the curve in 4 points instead of 3 as on Weierstrass curves.

In [1], Arene et al. presented the first geometric interpretation of the group law on Edwards curves and showed how to compute the Tate pairing on twisted Edwards curves by using a conic of degree 2. They also introduced explicit formulas with a focus on curves having an even embedding degree. Then, Xu and Lin [24] proposed refinements to Miller's algorithm that speed up the pairing computation on generic Edwards curves, i.e., curves with arbitrary embedding degree. The cost of their refinements is about 76.8 % of that of the original Miller's algorithm on Edwards curves.

In this paper, we further extend Xu–Lin's work and propose new refinements to Miller's algorithm. Like Xu–Lin's refinements, our approach is generic. Although it does not bring as dramatic an efficiency gain as that of Arene et al. [1] when computing the Tate pairing over Edwards curves with even embedding degree, the proposed refinements can be used to compute pairings of any type over pairing-friendly Edwards curves with any embedding degree. This approach is of particular interest for computing optimal pairings [14, 23], and in situations where the denominator elimination technique using a twist is not possible (e.g., Edwards curves with odd embedding degrees). We also analyze our new algorithm and show that it is faster than the original Miller algorithm and Xu–Lin's refinements. When the embedding degree k gets larger, the time required to perform operations in the full extension field dominates that in the base field, and our algorithm is then approximately 25 % faster than Xu–Lin's algorithm.

The paper is organized as follows. In Sect. 2, we briefly recall some background on pairing computation on Edwards curves, and then Xu–Lin's refinements. In Sect. 3, we present our refinements of Miller's algorithm. Section 4 discusses the performance of the proposed algorithms. Section 5 concludes the paper.

2 Preliminaries

2.1 Pairings on Edwards curves

Let \(\mathbb {F}_p\) be a prime finite field, where p is a prime different from 2. A twisted Edwards curve \(E_{a, d}\) defined over \(\mathbb {F}_p\) is the set of solutions \((x, y)\) of the following affine equation:

$$\begin{aligned} E_{a, d} : ax^2 + y^2 = 1 + dx^2y^2, \end{aligned}$$
(1)

where \(a, d \in \mathbb {F}_p^*\), and \(a \ne d\). Let \(P_1 = (x_1, y_1)\) be a point on \(E_{a, d}\). The negative of \(P_1\) is \(-P_1 = (-x_1, y_1)\). The point \(\mathscr {O}= (0, 1)\) is the neutral element of the addition law. The point \(\mathscr {O}' = (0,-1)\) has order 2. The points at infinity \(\varOmega _1 = (1 : 0 : 0)\) and \(\varOmega _2 = (0 : 1 : 0)\) are singular. The studies in [2, 9, 16] showed that twisted Edwards curves provide the fastest doubling and addition operations in elliptic curve cryptography.

A cryptographic pairing is a bilinear map that takes two points on an elliptic curve defined over a finite field and returns a value in an extension of that field. Let r be a prime number different from p with \(r | \#E_{a, d}(\mathbb {F}_p)\), where \(\#E_{a, d}(\mathbb {F}_p)\) is the number of points on the Edwards curve \(E_{a, d}\). Let k be the embedding degree of the curve \(E_{a, d}\) with respect to r, and let \(E_{a, d}[r]\) denote the subgroup of points of order r on \(E_{a, d}\). A cryptographic pairing over the Edwards curve \(E_{a, d}\) is defined as:

$$\begin{aligned} e : E_{a, d}[r] \times E_{a, d}[r] \rightarrow \mathbb {F}_{p^k}^* \end{aligned}$$

The key to the definition of pairings is the evaluation of rational functions at divisors. In this section, we briefly recall Miller's algorithm [22], which is so far the best known method to compute pairings on (hyper-)elliptic curves. Readers who want to know more about pairings can refer to [22, 23]. Let \(P, Q \in E_{a, d}\) be two points of order r. The main part of Miller's algorithm is to construct the rational function \(f_{r,P}\) and evaluate \(f_{r,P}(Q)\), where \(div(f_{r,P}) = r(P) - ([r]P) - (r - 1)(\mathscr {O})\).

Let m and n be two integers, and let \(g_{mP,nP}\) be a rational function, the so-called Miller function, whose divisor is \(div(g_{mP,nP}) = (mP) + (nP) - ([m + n]P) - (\mathscr {O})\). Miller's algorithm is based on the following lemma.

Lemma 1

(Lemma 2, [22]) For two integers m and n, up to a multiplicative constant, we have

$$\begin{aligned} f_{m + n, P}=f_{m, P} f_{n, P} g_{mP, nP}. \end{aligned}$$
(2)

Equation (2) is called the Miller relation; it is proved by considering divisors. Miller's algorithm makes use of Lemma 1 with \(m = n\) in a doubling step and \(n = 1\) in an addition step. For Edwards curves, Arene et al. [1] defined the Miller function as in the following theorem.

Theorem 1

(Theorem 2, [1]) Let \(a, d \in \mathbb {F}_p^*, \; a \ne d\) and \(E_{a, d}\) be a twisted Edwards curve over \(\mathbb {F}_p\). Let \(P_1, \; P_2 \in E_{a, d}(\mathbb {F}_p)\). Define \(P_3 = P_1 + P_2\). Let \(\phi _{P_1, P_2}\) be the equation of the conic \({\mathscr {C}}\) passing through \(P_1, P_2, -P_3, \varOmega _1, \varOmega _2, \mathscr {O}'\) whose divisor is \((P_1) + (P_2) + (-P_3) + (\mathscr {O}') - 2(\varOmega _1) -2(\varOmega _2)\). Let \(\ell _{1, P_3}\) be the horizontal line going through \(P_3\) whose divisor is \(div(\ell _{1, P_3}) = (P_3) + (-P_3) - 2(\varOmega _2)\), and let \(\ell _{2, \mathscr {O}}\) be the vertical line going through \(\mathscr {O}\) whose divisor is \((\mathscr {O}) + (\mathscr {O}') - 2(\varOmega _1)\). Then we have

$$\begin{aligned} div\left( \frac{\phi _{P_1, P_2}}{\ell _{1, P_3}\ell _{2, \mathscr {O}}} \right) \sim (P_1) + (P_2) - (P_3) - (\mathscr {O}). \end{aligned}$$
(3)

The rational function \(g_{P_1, P_2} = \frac{\phi _{P_1, P_2}}{\ell _{1, P_3} \ell _{2, \mathscr {O}}}\) consists of three terms and can thus be considered as the Miller function on Edwards curves. Miller's algorithm for Edwards curves using this function works as in Algorithm 1.

[Algorithm 1: Miller's algorithm for Edwards curves (pseudo-code figure)]
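As an illustration only, the following is a minimal Python sketch of the loop structure of Algorithm 1. The Edwards group law (`double`, `add`) and the evaluations at Q of the conic and line functions of Theorem 1 (`phi`, `ell1`, `ell2_O`) are assumed to be supplied by the caller as elements of \(\mathbb {F}_{p^k}\) with overloaded arithmetic; all names are ours, not taken from [1].

```python
def miller_algorithm_1(P, Q, r_bits, double, add, phi, ell1, ell2_O):
    """Sketch of Algorithm 1. r_bits = [r_t, ..., r_0] with r_t = 1.
    Returns f_{r,P}(Q) up to a multiplicative constant."""
    R = P
    f = 1                                   # identity of F_{p^k}^* (field class assumed)
    for bit in r_bits[1:]:                  # the leading bit corresponds to R = P
        twoR = double(R)
        # doubling step: f <- f^2 * phi_{R,R}(Q) / (ell_{1,2R}(Q) * ell_{2,O}(Q))
        f = f * f * phi(R, R, Q) / (ell1(twoR, Q) * ell2_O(Q))
        R = twoR
        if bit == 1:
            RplusP = add(R, P)
            # addition step: f <- f * phi_{R,P}(Q) / (ell_{1,R+P}(Q) * ell_{2,O}(Q))
            f = f * phi(R, P, Q) / (ell1(RplusP, Q) * ell2_O(Q))
            R = RplusP
    return f
```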

2.2 Xu–Lin’s refinements

For simplicity, we use the notation \(\phi _{P, P}\) (resp. \(\ell _{1,[2]P}\), and \(\ell _{2, \mathscr {O}}\)) in place of \(\phi _{P, P}(Q)\) (resp. \(\ell _{1,[2]P}(Q)\), and \(\ell _{2, \mathscr {O}}(Q)\)). By extending Blake et al.'s method [11] to Edwards curves, Xu and Lin [24] presented a refinement of Miller's algorithm. Their algorithm is derived from the following theorem.

Theorem 2

(Theorem 1, [24]) Let \(E_{a, d}\) be a twisted Edwards curve over \(\mathbb {F}_p\) and \(P, \; R \in E_{a, d}\) be two points of order r. Then,

  1.
    $$\begin{aligned} \left( \frac{\phi _{R, R}}{\ell _{1,[2]R}\ell _{2, \mathscr {O}}}\right) ^2 \frac{\phi _{[2]R, [2]R}}{\ell _{1,[4]R}\ell _{2, \mathscr {O}}} = \frac{\phi _{R, R}^2}{\phi _{[-2]R,[-2]R}\phi _{\mathscr {O}, \mathscr {O}}}; \end{aligned}$$
  2.
    $$\begin{aligned} \frac{\phi _{R, R}}{\ell _{1,[2]R}\ell _{2, \mathscr {O}}} \frac{\phi _{[2]R, P}}{\ell _{1,[2]R + P}\ell _{2, \mathscr {O}}} = \frac{\phi _{R, R} \ell _{1, P}}{\phi _{[2]R + P,-P}\ell _{2, \mathscr {O}}}. \end{aligned}$$

The above theorem is proven by calculating divisors (see [24] for more details). From this theorem, they described a variant of Miller's algorithm in radix 4 [24, Algorithm 3]. They also claimed that the total cost of the proposed algorithm is about 76.8 % of that of the original Miller algorithm. In the following section, we present our refinements, which are approximately 25 % faster than Xu–Lin's refinements.

3 Our improvements on Miller’s algorithm

3.1 First improvement

We first present a new rational function whose divisor is equivalent to that of the Miller function (Eq. (3)) presented in [1].

Definition 1

Let \(E_{a,d}\) be a twisted Edwards curve and \(R, \; P \in E_{a,d}\). Let \(\phi _{R, P}\) be a conic passing through R and P, and let \(\phi _{R + P, -[R + P]}\) be a conic passing through \(R + P\) and \(-[R + P]\). Then we define

$$\begin{aligned} g_{R, P} = \frac{\phi _{R, P}}{\phi _{R + P, -[R + P]}}. \end{aligned}$$
(4)

Lemma 2

We have

$$\begin{aligned} div(g_{R, P}) = (R) + (P) - ([R + P]) - (\mathscr {O}). \end{aligned}$$

Proof

By calculating divisors, it is straightforward to see that

$$\begin{aligned} \hbox {div}(g_{R, P})= & {} (R) + (P) + (-[R + P]) + (\mathscr {O}') - 2(\varOmega _1) -2(\varOmega _2) \\&- ([R + P]) - (-[R + P]) - (\mathscr {O}) - (\mathscr {O}') + 2(\varOmega _1) + 2(\varOmega _2) \\= & {} (R) + (P) - ([R + P]) - (\mathscr {O}) \;. \\ \end{aligned}$$

This concludes the proof. \(\square \)
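For readers who wish to replay this divisor bookkeeping mechanically, the cancellation can be checked with symbolic point labels. A minimal self-contained Python sketch (the labels and helper names are ours, not from the paper):

```python
from collections import Counter

def conic_div(p1, p2, p3):
    """div(phi) = (p1) + (p2) + (p3) + (O') - 2(Omega1) - 2(Omega2) (Theorem 1)."""
    d = Counter()
    for pt in (p1, p2, p3, "O'"):
        d[pt] += 1
    d["Omega1"] -= 2
    d["Omega2"] -= 2
    return d

def accumulate(total, div, sign):
    for pt, mult in div.items():
        total[pt] += sign * mult

total = Counter()
accumulate(total, conic_div("R", "P", "-(R+P)"), +1)    # numerator   phi_{R, P}
accumulate(total, conic_div("R+P", "-(R+P)", "O"), -1)  # denominator phi_{R+P, -(R+P)}

print({pt: m for pt, m in total.items() if m != 0})
# {'R': 1, 'P': 1, 'R+P': -1, 'O': -1}, i.e. (R) + (P) - ([R+P]) - (O)
```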

Recall that on Weierstrass curves the Miller function is defined as \(g_{R, P} = \frac{\ell _{R, P}}{\upsilon _{R + P}}\), where \(\ell _{R, P}\) and \(\upsilon _{R + P}\) are the lines passing through R, P and through \(R + P, -[R + P]\), respectively (see [22]). Eq. (4) thus looks like the Miller function on Weierstrass curves, with a conic playing the role of the line function. Notice that during the pairing computation, R is a multiple of the input point P. By applying Eq. (4), we obtain the variant of Miller's algorithm described in Algorithm 2.

[Algorithm 2: Miller's algorithm for Edwards curves using the Miller function of Eq. (4) (pseudo-code figure)]
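To make the structure of Algorithm 2 concrete, the following minimal Python sketch tracks the numerator and denominator of Eq. (4) separately, so that only one inversion is needed at the end (cf. Remark 1 below). As before, `double`, `add`, `neg`, and the conic evaluation `phi(R1, R2, Q)` are assumed helpers, not functions defined in this paper.

```python
def miller_algorithm_2(P, Q, r_bits, double, add, neg, phi):
    """Sketch of Algorithm 2. Returns f_{r,P}(Q) up to a multiplicative constant."""
    R = P
    f_num, f_den = 1, 1                     # f = f_num / f_den, inversion deferred
    for bit in r_bits[1:]:
        twoR = double(R)
        # doubling: multiply f by g_{R,R} = phi_{R,R} / phi_{2R,-2R}  (Eq. (4))
        f_num = f_num * f_num * phi(R, R, Q)
        f_den = f_den * f_den * phi(twoR, neg(twoR), Q)
        R = twoR
        if bit == 1:
            RplusP = add(R, P)
            # addition: multiply f by g_{R,P} = phi_{R,P} / phi_{R+P,-(R+P)}
            f_num = f_num * phi(R, P, Q)
            f_den = f_den * phi(RplusP, neg(RplusP), Q)
            R = RplusP
    return f_num / f_den                    # single inversion at the end
```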

Remark 1

As in the original Miller algorithm, our algorithm cannot avoid the divisions needed to update f. However, they can easily be reduced to a single inversion at the end of the addition chain, at the cost of one squaring, plus one extra multiplication per iteration when \(r_i = 1\).

Algorithm 2 always performs one doubling step per iteration, and one addition step if \(r_i = 1\). To update the functions \(f, \; g\), the doubling step requires 2 squarings and 2 multiplications in the full extension field \(\mathbb {F}_{p^k}\), and the addition step requires 2 multiplications. It is easy to see that Algorithm 2 requires only one multiplication for updating the function g in the doubling/addition steps, instead of two multiplications as in Algorithm 1. Notice that this multiplication is costly because it is performed in the full extension field. In Sect. 4, we provide a detailed analysis of the performance of these algorithms. In the following section, we introduce a further refinement that offers even better performance than Algorithm 2.

3.2 Further refinement

The new refinement is inspired by the following lemmas.

Lemma 3

We have

$$\begin{aligned} g_{R, P} = \frac{\phi _{R, -R}\cdot \phi _{P, -P}}{\phi _{-R, -P}\cdot \phi _{\mathscr {O},\mathscr {O}}}. \end{aligned}$$
(5)

Proof

Again, this lemma is proved by considering divisors. Indeed,

$$\begin{aligned} \begin{array}{rcl} \hbox {div}\left( \frac{\phi _{R, -R}\cdot \phi _{P, -P}}{\phi _{-R, -P}\cdot \phi _{\mathscr {O},\mathscr {O}}}\right) &{} = &{} (R) + (-R) + (\mathscr {O}) + (\mathscr {O}') - 2(\varOmega _1) -2(\varOmega _2)\\ &{} &{} + (P) + (-P) + (\mathscr {O}) + (\mathscr {O}') - 2(\varOmega _1) -2(\varOmega _2)\\ &{} &{} - (-R) - (-P) - ([R + P]) - (\mathscr {O}') + 2(\varOmega _1) + 2(\varOmega _2)\\ &{} &{} - 3(\mathscr {O}) - (\mathscr {O}') + 2(\varOmega _1) + 2(\varOmega _2)\\ &{} = &{} (R) + (P) - ([R + P]) - (\mathscr {O}) \\ &{} = &{} \hbox {div}(g_{R, P}) \;. \\ \end{array} \end{aligned}$$

This concludes the proof. \(\square \)

Lemma 4

Let \(R \in E_{a, d}\) be a point of order r. Then we have

$$\begin{aligned} \frac{\phi _{R, R}}{\phi _{R, -R}^2 \cdot \phi _{2R, -2R}} = \frac{1}{\phi _{-R, -R}\cdot \phi _{\mathscr {O}, \mathscr {O}}}. \end{aligned}$$
(6)

Proof

This lemma is easily proven using Definition 1 and Lemma 3 with P replaced by R. By Definition 1, we have

$$\begin{aligned} g_{R, R} = \frac{\phi _{R, R}}{\phi _{2R, -2R}}, \end{aligned}$$

and by Lemma 3, we have

$$\begin{aligned} g_{R, R} = \frac{\phi _{R, -R}\cdot \phi _{R, -R}}{\phi _{-R, -R}\cdot \phi _{\mathscr {O},\mathscr {O}}}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \frac{\phi _{R, R}}{\phi _{R, -R}^2 \cdot \phi _{2R, -2R}} = \frac{1}{\phi _{-R, -R}\cdot \phi _{\mathscr {O},\mathscr {O}}}. \end{aligned}$$

\(\square \)

Remark 2

The right-hand side of Eq. (6) can be further simplified. Since Q is a fixed point during pairing computation, the conic function \(\phi _{\mathscr {O}, \mathscr {O}}\) can be precomputed and integrated into \(\phi _{-R, -R}\) as follows:

$$\begin{aligned} \phi '_{-R, -R}(Q)&= \phi _{-R, -R}(Q) \cdot \phi _{\mathscr {O}, \mathscr {O}}(Q) \nonumber \\&= (c_{Z^2}(Z_Q^2 + Y_Q Z_Q) + c_{XY} X_Q Y_Q + c_{XZ} X_Q Z_Q)(X_Q(Z_Q - Y_Q)) \nonumber \\&= c_{Z^2}\eta _1 + c_{XY} \eta _2 + c_{XZ} \eta _3, \end{aligned}$$

where \(\eta _1 = (Z_Q^2 + Y_Q Z_Q)(X_Q(Z_Q - Y_Q))\), \(\eta _2 = X^2_Q Y_Q(Z_Q - Y_Q)\), \(\eta _3 = X^2_Q Z_Q(Z_Q - Y_Q)\). Because Q is fixed for the whole computation, the values \(\eta _1\), \(\eta _2\), and \(\eta _3\) are also fixed. Hence, these values can be precomputed and stored. Eq. (6) can thus be rewritten as follows:

$$\begin{aligned} \frac{\phi _{R, R}}{\phi _{R, -R}^2 \cdot \phi _{2R, -2R}} = \frac{1}{\phi '_{-R, -R}}. \end{aligned}$$
(7)

The right-hand side of Eq. (7) contains only one conic function, while the left-hand side contains three. We use this observation to eliminate \(\phi _{R, -R}\) in Algorithm 2, where R is a multiple of P. To do so, we apply the following simple strategy: whenever possible, we delay the denominator conic \(\phi _{R, -R}\) to the next iteration. This conic function is squared in the next doubling step, and by applying Eq. (7) one can then cancel \(\phi ^2_{R, -R}\) together with \(\phi _{2R, -2R}\).
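The precomputation described in Remark 2 can be organized as in the following minimal sketch. The field arithmetic for the coordinates of Q and the conic coefficients \(c_{Z^2}, c_{XY}, c_{XZ}\) is assumed to be provided by the caller, and the function names are ours.

```python
def precompute_eta(XQ, YQ, ZQ):
    """eta1, eta2, eta3 of Remark 2, fixed for the whole pairing computation."""
    t = ZQ - YQ
    eta1 = (ZQ * ZQ + YQ * ZQ) * (XQ * t)
    eta2 = XQ * XQ * YQ * t
    eta3 = XQ * XQ * ZQ * t
    return eta1, eta2, eta3

def phi_prime(c_Z2, c_XY, c_XZ, eta):
    """phi'_{-R,-R}(Q) = c_{Z^2}*eta1 + c_{XY}*eta2 + c_{XZ}*eta3  (Remark 2)."""
    eta1, eta2, eta3 = eta
    return c_Z2 * eta1 + c_XY * eta2 + c_XZ * eta3
```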

We introduce a variable m indicating whether a conic function \(\phi _{R, -R}\) has been delayed to the current iteration: if \(m = 1\), there is a delay from the previous iteration; otherwise, there is none. Let \((r_t r_{t - 1} \ldots r_0)_2\), with \(r_i \in \{0, 1\}\), be the binary representation of the order r. Four cases have to be considered:

  1. 1.

    If the current bit \(r_i = 0\) and \(m = 0\). The proposed algorithm will perform a doubling step by applying Eq. (4), but keep delaying \(\phi _{2\,R, -2\, R}\) for the next iteration, i.e. m will be set to 1 after this iteration.

  2. 2.

    If the current bit \(r_i = 0\) and \(m = 1\). The proposed algorithm will perform a doubling step by applying Eq. (7). This allows to eliminate the delayed conic \(\phi ^2_{R, -R}\) and \(\phi _{2\,R, -2\, R}\). The value of m will be set to 0. This is only case in the proposed algorithm there is no delay for the next iteration.

  3. 3.

    If the current bit \(r_i = 1\) and \(m = 1\). The proposed algorithm will apply Eq. (7) in the doubling step and Eq. (4) in the addition step. It also keeps delaying \(\phi _{2\,R + P, -(2\, R + P)}\) for the next iteration.

  4. 4.

    If the current bit \(r_i = 1\) and \(m = 0\). This is the most costly case of the proposed algorithm. The proposed algorithm will apply Eq. (4) in both doubling and addition steps. Likewise, it keeps delaying \(\phi _{2\,R + P, -(2\, R + P)}\) for the next iteration.

Let \(f_{r,P}^{(i)}\) be the value of the rational function \(f_{r,P}\) after the bit \(r_i\) of the order r has been processed. Formally, applying Eqs. (4) and (7), \(f^{(i)}_{r,P}\) is defined as follows:

$$\begin{aligned} f^{(i)}_{r,P} = {\left\{ \begin{array}{ll} (f^{(i + 1)}_{r,P})^2 \phi _{R, R} &{} \quad \text {if }r_i = 0 \hbox { and } m = 0 \\ (f^{(i + 1)}_{r,P})^2 \frac{1}{\phi '_{-R, -R}} &{} \quad \text {if }r_i = 0 \hbox { and } m = 1 \\ (f^{(i + 1)}_{r,P})^2 \frac{\phi _{2\,R, P}}{\phi '_{-R, -R}} &{} \quad \text {if }r_i = 1 \hbox { and } m = 1\\ (f^{(i + 1)}_{r,P})^2 \frac{\phi _{R, R} \cdot \phi _{2 \, R, P}}{\phi _{2\,R, -2\, R}} &{} \quad \text {if }r_i = 1 \hbox { and } m = 0, \end{array}\right. } \end{aligned}$$

where \(0 \le i < t\), \(f^{(t)}_{r,P} = 1\), and R is the multiple of the input point P at the current iteration, i.e. \(R = [(r_t \ldots r_{i + 1})_2]P\). The proposed algorithm is described by the pseudo-code in Algorithm 3.

[Algorithm 3: refined Miller's algorithm for Edwards curves with delayed conic functions (pseudo-code figure)]
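The following minimal Python sketch summarizes Algorithm 3 with the delay flag m. Here `double`, `add`, `neg`, the conic evaluation `phi(R1, R2, Q)`, and `phi_prime_eval(R, Q)` (evaluating \(\phi '_{-R,-R}(Q)\) via the precomputation of Remark 2) are assumed helpers, and the division at the very end stands for the single deferred inversion.

```python
def miller_algorithm_3(P, Q, r_bits, double, add, neg, phi, phi_prime_eval):
    """Sketch of Algorithm 3. r_bits = [r_t, ..., r_0] with r_t = 1."""
    R = P
    f_num, f_den = 1, 1       # f = f_num / f_den, inversion deferred to the end
    m = 0                     # m = 1: one conic phi_{R,-R} is pending in the denominator
    for bit in r_bits[1:]:
        twoR = double(R)
        if bit == 0 and m == 0:
            # line 1: f <- f^2 * phi_{R,R}; delay phi_{2R,-2R}
            f_num = f_num * f_num * phi(R, R, Q)
            f_den = f_den * f_den
            m = 1
        elif bit == 0 and m == 1:
            # line 2: f <- f^2 / phi'_{-R,-R}  (Eq. (7) cancels the pending conic)
            f_num = f_num * f_num
            f_den = f_den * f_den * phi_prime_eval(R, Q)
            m = 0
        elif bit == 1 and m == 1:
            # line 3: f <- f^2 * phi_{2R,P} / phi'_{-R,-R}; delay phi_{2R+P,-(2R+P)}
            f_num = f_num * f_num * phi(twoR, P, Q)
            f_den = f_den * f_den * phi_prime_eval(R, Q)
        else:
            # line 4: f <- f^2 * phi_{R,R} * phi_{2R,P} / phi_{2R,-2R}; delay phi_{2R+P,-(2R+P)}
            f_num = f_num * f_num * phi(R, R, Q) * phi(twoR, P, Q)
            f_den = f_den * f_den * phi(twoR, neg(twoR), Q)
            m = 1
        R = twoR if bit == 0 else add(twoR, P)
    if m == 1:
        f_den = f_den * phi(R, neg(R), Q)   # pay off the last pending conic
    return f_num / f_den
```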

3.3 Examples

We give some examples in this section. We list the formulas for \(f_{r,P}\) for \(r = 5, 17\), and 23 computed using Miller's algorithm from [1, 24] (Algorithm 1), our first improved version (Algorithm 2), and our second improved version (Algorithm 3). Here the symbols \(\phi _{mP,nP}, \, \phi '_{mP,nP}, \, \ell _{1, nP}\), \(\ell _{2, nP}\) and \(f_{r,P}\) are shortened to \(\phi _{m,n}, \, \phi '_{m,n}, \, \ell _{1, n}\), \(\ell _{2, n}\) and \(f_r\), respectively, where \(m, n \in \mathbb {Z}\).

Compute \(f_5\), with \((101)_2 = 5\):

Algorithm 1: \(f_5 = \frac{\phi ^2_{1, 1}}{\ell ^2_{1, 2} \cdot \ell ^2_{2, \mathscr {O}}} \frac{\phi _{2, 2}}{\ell _{1, 4} \cdot \ell _{2, \mathscr {O}}}\frac{\phi _{4, 1}}{\ell _{1, 5} \cdot \ell _{2, \mathscr {O}}}\)

Algorithm 2: \(f_5 = \frac{\phi ^2_{1, 1}}{\phi ^2_{2, -2}} \frac{\phi _{2, 2}}{\phi _{4, -4}}\frac{\phi _{4, 1}}{\phi _{5, -5}}\)

Algorithm 3: \(f_5 = \phi ^2_{1, 1} \frac{\phi _{4, 1}}{\phi '_{-2, -2}} \frac{1}{\phi _{5, -5}}\)

Compute \(f_{17}\), with \((10001)_2 = 17\):

Algorithm 1: \(f_{17} = \frac{\phi ^8_{1, 1}}{\ell ^8_{1, 2} \cdot \ell ^8_{2, \mathscr {O}}} \frac{\phi ^4_{2, 2}}{\ell ^4_{1, 4} \cdot \ell ^4_{2, \mathscr {O}}} \frac{\phi ^2_{4, 4}}{\ell ^2_{1, 8} \cdot \ell ^2_{2, \mathscr {O}}} \frac{\phi _{8, 8}}{\ell _{1, 16} \cdot \ell _{2, \mathscr {O}}} \frac{\phi _{16, 1}}{\ell _{1, 17} \cdot \ell _{2, \mathscr {O}}}\)

Algorithm 2: \(f_{17} = \frac{\phi ^8_{1, 1}}{\phi ^8_{2, -2}} \frac{\phi ^4_{2, 2}}{\phi ^4_{4, -4}} \frac{\phi ^2_{4, 4}}{\phi ^2_{8, -8}} \frac{\phi _{8, 8}}{\phi _{16, -16}} \frac{\phi _{16, 1}}{\phi _{17, -17}}\)

Algorithm 3: \(f_{17} = \phi ^8_{1, 1} \frac{1}{(\phi '_{-2, -2})^4} \phi ^2_{4, 4} \frac{\phi _{16, 1}}{\phi '_{-8, -8}} \frac{1}{\phi _{17, -17}}\)

Compute \(f_{23}\), with \((10111)_2 = 23\):

Algorithm 1: \(f_{23} = \frac{\phi ^8_{1, 1}}{\ell ^8_{1, 2} \cdot \ell ^8_{2, \mathscr {O}}} \frac{\phi ^4_{2, 2}}{\ell ^4_{1, 4} \cdot \ell ^4_{2, \mathscr {O}}}\frac{\phi ^4_{4, 1}}{\ell ^4_{1, 5} \cdot \ell ^4_{2, \mathscr {O}}} \frac{\phi ^2_{5, 5}}{\ell ^2_{1, 10} \cdot \ell ^2_{2, \mathscr {O}}}\frac{\phi ^2_{10, 1}}{\ell ^2_{1, 11} \cdot \ell ^2_{2, \mathscr {O}}} \frac{\phi _{11, 11}}{\ell _{1, 22} \cdot \ell _{2, \mathscr {O}}} \frac{\phi _{22, 1}}{\ell _{1, 23} \cdot \ell _{2, \mathscr {O}}}\)

Algorithm 2: \(f_{23} = \frac{\phi ^8_{1, 1}}{\phi ^8_{2, -2}} \frac{\phi ^4_{2, 2}}{\phi ^4_{4, -4}}\frac{\phi ^4_{4, 1}}{\phi ^4_{5, -5}} \frac{\phi ^2_{5, 5}}{\phi ^2_{10, -10}}\frac{\phi ^2_{10, 1}}{\phi ^2_{11, -11}} \frac{\phi _{11, 11}}{\phi _{22, -22}}\frac{\phi _{22, 1}}{\phi _{23, -23}}\)

Algorithm 3: \(f_{23} = \phi ^8_{1, 1} \frac{\phi ^4_{4, 1}}{(\phi '_{-2, -2})^4} \frac{\phi ^2_{10, 1}}{(\phi '_{-5, -5})^2} \frac{\phi _{22, 1}}{\phi '_{-11, -11}} \frac{1}{\phi _{23, -23}}\)
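The factorizations above can be reproduced with a small symbolic trace of Algorithm 3 (our own helper script, not part of the algorithm), which records the conic factors entering the numerator and the denominator of \(f_r\) and writes multiples of P as integers:

```python
def trace_algorithm_3(r_bits):
    """r_bits = [r_t, ..., r_0] with r_t = 1; returns (numerator, denominator) factors."""
    n, m = 1, 0                       # n*P is the current multiple R; m = pending-conic flag
    num, den = [], []
    for bit in r_bits[1:]:
        num, den = num * 2, den * 2   # squaring f duplicates every recorded factor
        if bit == 0 and m == 0:
            num.append(f"phi_{n},{n}")
            m = 1
        elif bit == 0 and m == 1:
            den.append(f"phi'_-{n},-{n}")
            m = 0
        elif bit == 1 and m == 1:
            num.append(f"phi_{2*n},1")
            den.append(f"phi'_-{n},-{n}")
        else:                         # bit == 1 and m == 0
            num += [f"phi_{n},{n}", f"phi_{2*n},1"]
            den.append(f"phi_{2*n},-{2*n}")
            m = 1
        n = 2 * n + bit
    if m == 1:
        den.append(f"phi_{n},-{n}")   # last pending conic
    return num, den

print(trace_algorithm_3([1, 0, 1]))   # r = 5: reproduces the f_5 expression above
```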

4 Performance discussion

In this section, we compare our refinements with the algorithms described in [1, 24]. Before analyzing the costs of the algorithms, we introduce notation for the cost of field arithmetic operations. Let \(\mathbb {F}_{p^k}\) be an extension of degree k of \(\mathbb {F}_p\) for \(k \ge 1\), and let \(\mathbf{I}_{p^k}\), \(\mathbf{M}_{p^k}\), and \(\mathbf{S}_{p^k}\) be the costs of an inversion, a multiplication, and a squaring in the field \(\mathbb {F}_{p^k}\), respectively.

The cost of the algorithms for pairing computation consists of three parts: the cost of updating the functions f and g; the cost of updating the point R; and the cost of evaluating the rational functions at some point Q. Arene et al. [1, Section 5] analyzed in detail the total cost of updating the point R and the coefficients \(c_{Z^2}, c_{XY}\), and \(c_{XZ}\) of the conic. Without special treatment, this cost is the same for all algorithms.

The most costly operations in pairing computations are those in the full extension field \(\mathbb {F}_{p^k}\). At high security levels (i.e., k large), the complexity of operations in \(\mathbb {F}_{p^k}\) dominates that of the operations occurring in the lower-degree subfields. In this section, we therefore only analyze the cost of updating the functions \(f,\, g\), which are generally computed in the full extension field \(\mathbb {F}_{p^k}\). We assume the commonly used ratio \(\mathbf{S}_{p^k} = 0.8 \mathbf{M}_{p^k}\) of one squaring to one multiplication in the full extension field.

For a doubling (lines 1, 2), Algorithm 2 requires \(2\mathbf{M}_{p^k} + 2\mathbf{S}_{p^k}\), while Algorithm 3 requires only \(1\mathbf{M}_{p^k} + 2\mathbf{S}_{p^k}\) to update the functions f and g. Likewise, an addition step requires \(2\mathbf{M}_{p^k}\) in Algorithm 2 (lines 3, 4), while it requires only \(1\mathbf{M}_{p^k}\) in Algorithm 3. Table 1 shows the number of operations in \(\mathbb {F}_{p^k}\) needed to update f, g in the different algorithms.

Table 1 Comparison of the cost of updating f, g in the different algorithms

Obviously, Algorithm 3 offers better performance than Algorithm 2. From Table 1, for the generic case, Algorithm 3 saves two full extension field multiplications when the bit \(r_i = 0\) compared to the original Miller algorithm (Algorithm 1). When the bit \(r_i = 1\), Algorithm 3 saves two or three full extension field multiplications compared to Algorithm 1, depending on which case Algorithm 3 executes (i.e., line 3 or line 4).

Compared with Arene et al.'s algorithm [1], Algorithm 3 requires one more squaring in the full extension field for each doubling step. However, as mentioned earlier, Arene et al.'s algorithm can only be applied to Edwards curves with an even embedding degree k for Tate pairing computation, while our approach is generic: it can be applied to any (pairing-friendly) Edwards curve and to both the Weil and the Tate pairing.

Let \(r = \sum _{i = 0}^{l' - 1} q_i 4^i\), with \(q_i \in \{0, 1, 2, 3\}\), be the radix-4 representation of the group order r. The refinements in [24] are described in radix 4. Their algorithm [24, Algorithm 3] eliminates some rational line functions from Eq. (3) during the pairing computation. Their best results are obtained by combining formulas (1) and (2) of Theorem 2; in particular, they use the first formula for the cases \(q_i = 0, 1, 2\), and the second formula for the case \(q_i = 3\). Table 2 compares Algorithm 3 with their algorithm. Because our algorithm works in radix 2, the number of operations per radix-4 digit can be obtained by simply combining the per-bit counts from Table 1. Notice that line 4 of Algorithm 3 (the most costly case of our algorithm) is never executed in two consecutive iterations, because the value of m is never equal to 0 in two consecutive iterations (see Algorithm 3). Thus, when \(q_i = 3\), the worst case of Algorithm 3 is the following: the first bit “1” is handled by line 4 (cost \(2\mathbf{S}_{p^k} + 3\mathbf{M}_{p^k}\)), after which m is set to 1; the second bit “1” is handled by line 3 (cost \(2\mathbf{S}_{p^k} + 2\mathbf{M}_{p^k}\)).
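The operation counts compared in Tables 1 and 2 can be tallied over a full Miller loop with a short script. The following is our own back-of-the-envelope helper (not part of the algorithms), using the per-step costs stated above, the convention \(\mathbf{S}_{p^k} = 0.8\,\mathbf{M}_{p^k}\), and ignoring the final inversion:

```python
def cost_alg2(bits):                       # bits = [r_t, ..., r_0], r_t = 1
    """F_{p^k} cost of updating f, g in Algorithm 2, in multiplications."""
    M = S = 0
    for b in bits[1:]:
        M, S = M + 2, S + 2                # doubling: 2M + 2S
        if b == 1:
            M += 2                         # addition: 2M
    return M + 0.8 * S

def cost_alg3(bits):
    """F_{p^k} cost of updating f, g in Algorithm 3, in multiplications."""
    M = S = 0
    m = 0
    for b in bits[1:]:
        S += 2
        if b == 0:
            M += 1                         # lines 1 and 2: 1M + 2S
            m = 1 - m
        else:
            M += 2 if m == 1 else 3        # line 3: 2M + 2S, line 4: 3M + 2S
            m = 1
    return M + 0.8 * S

bits = [int(b) for b in bin(23)[2:]]       # r = 23 = (10111)_2
print(cost_alg2(bits), cost_alg3(bits))    # 20.4 vs 13.4 multiplications
```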

Table 2 Comparison of our algorithm with the refinements in [24]

Table 2 shows that Algorithm 3 is clearly faster than the refinements of Miller's algorithm in [24] in all four cases. Our algorithm saves two operations (resp. at least 3, or 5 operations) in the full extension field \(\mathbb {F}_{p^k}\) for \(q_i = 0\) (resp. \(q_i = 1, 2\), or \(q_i = 3\)). When the embedding degree k gets larger, the complexity of these operations dominates that of the operations occurring in \(\mathbb {F}_{p}\), and our algorithm is then approximately 25 % faster than Xu–Lin's algorithm.

5 Conclusion

In this paper, we proposed further refinements to Miller's algorithm over Edwards curves. Our refinements significantly improve the performance of Miller's algorithm in comparison with the original Miller algorithm and its previous refinements. Our method is generic, hence the proposed algorithm can be applied to compute pairings of any type over any pairing-friendly Edwards curve. This allows the use of Edwards curves with odd embedding degree, and is suitable for the computation of optimal pairings.