1 Introduction

A blind signature scheme is a digital signature scheme that allows the signature recipient to obtain a digital signature on a message of the recipient’s choice without revealing this message to the signer. The key feature of a blind signature protocol is that the resulting signature cannot be linked to a particular protocol run. If the recipient ran the protocol n times and, as a result, produced n signatures and provided them to the signer in a randomly permuted order, the signer would not be able to identify which signature corresponded to which protocol run any better than by guessing at random. Just as in a regular digital signature scheme, in order to verify a signature, a verifier (a third party, distinct from the signer or the signature recipient) runs a non-interactive verification algorithm.

Applications. Blind signatures were first introduced by David Chaum [13, 14]. The motivating application was untraceable electronic cash (ecash) [13, 15]: a bank can issue electronic coins by issuing blind signatures. A message represents a coin’s serial number, while the bank’s signature on it attests that it is indeed a valid coin. The fact that it was issued via a blind signing protocol means that one cannot trace which coin was issued to which user, and therefore cannot surmise how a particular user Alice spent her money.

Blind signatures protect a user’s privacy even while ensuring that the user is qualified for a particular transaction. For example, suppose that a user has convinced a server that he is a human (rather than a bot) by solving a CAPTCHA. Then the server may issue such a user a blind signature (or several blind signatures) that allow this user to convince other servers that he is a human and not a bot without needing to perform additional CAPTCHAs; however, even if all these servers compare transaction logs, they cannot tell which user it was. This simple scenario is of growing importance in practice; for example, it is used in VPN by Google One, Apple’s iCloud Private Relay, and Apple’s Safari browser proposal for privacy-preserving click measurements.

Definitions. A blind signature scheme must satisfy correctness, blindness, and strong one-more unforgeability [1, 25, 28, 33]. Correctness means that an honest verifier will always accept a signature issued by an honest signer to an honest recipient; here, by “honest” we mean one that follows the prescribed algorithms. Blindness, as we explained above, means that a malicious signer learns nothing about a message during the signing protocol, and a signature cannot be linked to the specific protocol execution in which it was computed. This must hold even if the signer’s public key is chosen maliciously. Finally, strong one-more unforgeability means that, if an adversary acts as the recipient n times, then it cannot, except with negligible probability, produce \(n+1\) distinct valid message-signature pairs. It is important that unforgeability hold even when the adversary engages in several sessions with the signer at the same time, i.e., in the concurrent setting.

Standardization. Blind signatures have been studied for almost forty years. They have well-understood definitions of security [1, 25, 28, 33]. Numerous constructions have also been proposed [2, 3, 5, 6, 11, 13, 19, 23, 24, 27, 28]. Finally, as we argued above, they are highly desirable in practice. Of course, even a well-understood cryptographic primitive should not get adopted for widespread use without undergoing a thorough standardization process through software standardization bodies such as the IETF.

The first proposed IETF standard for a blind signature scheme is the RSA-BSSA proposal by Denis, Jacobs and Wood [16, 17]. The scheme they proposed for standardization is, in a nutshell, the blind version of RSA-PSS [8, 9, 29, 30] along the lines proposed by Chaum [13, 14]. However, as the analysis in this paper makes clear, care must be taken to ensure that the message being signed comes from a high-entropy distribution; in the event that it doesn’t, a random salt value must be appended to it.

The key generation and verification algorithms are (essentially) the same as in RSA-PSS, except that, in case the message \(\texttt{msg}\) does not come from a high-entropy distribution, a salt value \(\texttt{rand}\) must be concatenated to the message \(\texttt{msg}\). More precisely, if \(\texttt{msg}\) does not come from a high-entropy distribution, this paper’s analysis recommends that the blind signing protocol consist of three steps: first, on input a message \(\texttt{msg}\) and the RSA public key \((N,e)\), the user chooses a random salt value \(\texttt{rand}\), computes an RSA-PSS encoding m of \(\texttt{msg}\circ \texttt{rand}\) (where ‘\(\circ \)’ denotes concatenation), picks a random blinding value \(r \in \mathbb {Z}_N^*\), and sends the value \(z = mr^e \bmod N\) to the signer; second, using his secret key d, the signer computes \(s = z^d \bmod N\) and sends it to the user; third, the user derives the signature \(\sigma = s/r \bmod N\). It is easy to see that \(\sigma ^e = s^e/r^e = z/r^e = m \bmod N\), and thus \(\sigma \) constitutes a valid RSA-PSS signature on the user’s message \(\texttt{msg}\circ \texttt{rand}\). In case \(\texttt{msg}\) comes from a high-entropy distribution, \(\texttt{rand}\) is not needed, and m is computed as a PSS encoding of \(\texttt{msg}\); the rest of the signing protocol is the same. As we will see in Sect. 4, either the high entropy of \(\texttt{msg}\) or the additional salt value \(\texttt{rand}\) is necessary to ensure that the scheme is provably blind in the event that the signer’s key was chosen maliciously. This has resulted in IETF discussions on amending the draft.
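
To make the arithmetic above concrete, the following Python sketch runs the blind-sign-unblind computation on a toy RSA key; the key size, variable names, and the integer standing in for the PSS encoding are ours and purely illustrative, not part of the proposal.

```python
# Toy sketch of the blind-sign-unblind arithmetic described above; the key is
# artificially small and m stands in for the PSS encoding of msg (or msg || rand).
import secrets
from math import gcd

p, q = 61, 53                      # toy primes; real keys are 2048+ bits
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # ed = 1 mod phi(N)

m = 1234                           # stands in for the PSS-encoded message

# user: blind the encoded message with a random r in Z_N^*
while True:
    r = secrets.randbelow(N - 2) + 2
    if gcd(r, N) == 1:
        break
z = (m * pow(r, e, N)) % N         # blinded message, sent to the signer

# signer: RSA-invert the blinded value using the secret exponent d
s = pow(z, d, N)

# user: unblind to obtain the signature and check it
sigma = (s * pow(r, -1, N)) % N
assert pow(sigma, e, N) == m       # sigma^e = m mod N: a valid signature on the encoding
```

When e is relatively prime to \(\varphi (N)\), the value z that the signer sees is a uniformly random element of \(\mathbb {Z}_N^*\); this is the source of blindness discussed in Sect. 4.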

As pointed out by Denis, Jacobs and Wood [16, 17], the message-response (i.e., two-move) structure of this protocol makes it desirable. The security game for strong one-more unforgeability for a two-move protocol is the same whether in the sequential or the concurrent setting. In contrast, a recent result [10] gave an attack on popular three-move blind signature protocols (such as the blind version of the Schnorr signature [28, 31, 32] or anonymous credentials light [4]) in the concurrent setting, making them poor candidates for standardization. Moreover, a three-message (or more) protocol would require the signer to keep state, which is a significant complication when it comes to system design, making the concurrently secure blind signature schemes of Abe [2] and Tessaro and Zhu [35] less suitable in practice.

The choice of blind RSA-PSS over blind RSA-FDH [6] is motivated by the popularity of (non-blind) RSA-PSS, ensuring that, at least as far as verifying the signatures is concerned, no new software need be developed. That way, even unsophisticated participants have easy access to the digital tools they need to take advantage of the privacy-preserving features offered by blind signatures.

Why standardize and adopt an RSA-based scheme now, instead of a post-quantum one? Indeed it is possible that, with the advent of quantum computing, decades from now another scheme will have to replace this RSA-based one. Yet, this will have no consequences on today’s clients and servers if the users’ privacy is protected even from quantum computers (for example, if it holds unconditionally). The consequences to the servers are minimized because a blind signature ceases to be relevant after a relatively brief amount of time, so the lifetime of a signing key would be measured in weeks rather than years.

This paper’s contributions and organization. We show that the proposed RSA-BSSA standard [16] constitutes a one-more unforgeable blind signature scheme. One-more unforgeability holds in the random-oracle model under the one-more-RSA assumption introduced by Bellare, Namprempre, Pointcheval and Semanko (BNPS) [6]. Blindness of the RSA-BSSA scheme also holds in the random-oracle model.

We also show that Chaum-BNPS’s blind RSA-FDH [6, 14] is not blind in the malicious-signer model, i.e., it can only be shown to be blind if the signer’s key pair is generated honestly (see Sect. 4.4). However, we show in Sect. 4.4 that even in the case of a malicious signer, it satisfies the weaker notion of a blind token which we introduce in Sect. 2.3.

The rest of this paper is organized as follows: In Sect. 2 we recall the definition of security for blind signature schemes. Our definitions are tailor-made for two-move blind signature schemes, because in the case of two-move signatures the issues of composition with other protocols go away (as discussed above). Other than that, our definitions are standard [1, 25, 28, 33]. We include bibliographic notes at the end of Sects. 2.1 and 2.2, which provide the definitions of one-more unforgeability and blindness, respectively.

In Sect. 3 we give an overview of RSA-BSSA. We begin by giving a basic version of the scheme, in which the blind signature that a user obtains is a standard RSA-PSS signature on the user’s message \(\texttt{msg}\) (i.e. there is no \(\texttt{rand}\)). We also give two modifications of the basic scheme. The first (Sect. 3.2) is a variant in which the signer’s RSA public key \((N,e)\) is enhanced in a way that ensures that the exponent e is relatively prime to \(\varphi (N)\), using a technique of Goldberg, Reyzin, Sagga and Baldimtsi [21]. Finally, in Sect. 3.3 we give the variant that corresponds to the RSA-BSSA proposal from February 2022 [16]; in this variant, the public key is a standard RSA public key \((N,e)\) and the signature on a message \(\texttt{msg}\) consists of a salt \(\texttt{rand}\) and the PSS signature on \((\texttt{msg}\circ \texttt{rand})\).

In Sect. 4 we justify the salt \(\texttt{rand}\): we show why it is difficult to prove that the basic scheme is blind without introducing additional assumptions, and show that, in the random-oracle model, both modifications give rise to blind signature schemes. We also show that the basic scheme is a blind token. Finally, in Sect. 5 we show that the basic scheme and both variants are one-more-unforgeable under the one-more-RSA assumption in the random-oracle model.

2 Definition of a Two-Move Blind Signature Scheme

The definition of a blind signature scheme we provide here applies only to two-move blind signatures; see prior work for more general definitions [8, 9, 29, 30]. First, in Definition 1 let us give the input-output specification for the five algorithms that constitute a two-move blind signature scheme. The key generation algorithm \(\textsf{KeyGen}\) and the signature verification algorithm \(\textsf{Verify}\) have the same input-output behavior as in a regular digital signature scheme.

The signing algorithm is broken down into three steps: (1) The signature recipient runs the \(\textsf{Blind}\) algorithm to transform a message \(\texttt{msg}\) into its blinded form \(\mathtt {blinded\_msg}\); \(\mathtt {blinded\_msg}\) is sent to the signer. (2) The signer runs the algorithm \(\textsf{BSig}( SK ,\mathtt {blinded\_msg})\) to compute its response \(\mathtt {blinded\_sig}\), and then sends it to the signature recipient. (3) The signature recipient uses the algorithm \(\textsf{Finalize}\) to transform \(\mathtt {blinded\_sig}\) into a valid signature \(\sigma \) on its message \(\texttt{msg}\). More precisely:

Definition 1

(Input-output specification for a two-move blind signature scheme). Let \(\mathcal {S} = (\textsf{KeyGen}, \textsf{Blind}, \textsf{BSig}, \textsf{Finalize}, \textsf{Verify})\) be a set of polynomial-time algorithms with the following input-output specifications:

  • \(\textsf{KeyGen}(1^k)\rightarrow ( PK , SK )\) is a probabilistic algorithm that takes as input \(1^k\) (the security parameter represented in unary) and outputs the public signature verification key \( PK \) and a secret signing key \( SK \).

  • \(\textsf{Blind}( PK ,\texttt{msg})\rightarrow (\mathtt {blinded\_msg},\texttt{inv})\) is a probabilistic algorithm that takes as input the public key \( PK \) and a string \(\texttt{msg}\) and outputs a blinded message \(\mathtt {blinded\_msg}\) (which will be sent to the signer) and an auxiliary string \(\texttt{inv}\) (which will be used by \(\textsf{Finalize}\) to derive the final signature \(\sigma \)).

  • \(\textsf{BSig}( SK ,\mathtt {blinded\_msg})\rightarrow \mathtt {blinded\_sig}\) is an algorithm (possibly a probabilistic one) that takes as input the secret signing key \( SK \) and a string \(\mathtt {blinded\_msg}\) and outputs a blinded signature \(\mathtt {blinded\_sig}\).

  • \(\textsf{Finalize}( PK ,\texttt{inv},\mathtt {blinded\_sig})\rightarrow \sigma \) is an algorithm that takes as input the public signature verification key \( PK \), an auxiliary string \(\texttt{inv}\) and a blinded signature and outputs a signature \(\sigma \).

  • \(\textsf{Verify}( PK ,\texttt{msg},\sigma )\) is an algorithm that either accepts or rejects.

Next, let us define what it means for \(\mathcal {S}\) to constitute a correct blind signature scheme. On a high level, correctness means that if a signature \(\sigma \) was produced after both the signature recipient and the signer followed their corresponding algorithms, then this signature will be accepted by \(\textsf{Verify}\). More formally:

Definition 2

(Correct two-move blind signature). Let \(\mathcal {S} = (\textsf{KeyGen}, \textsf{Blind}\), \(\textsf{BSig}, \textsf{Finalize}\), \(\textsf{Verify})\) be a set of polynomial-time algorithms that satisfy the input-output specification for a two-move blind signature scheme (Definition 1). \(\mathcal {S}\) constitutes a correct two-move blind signature scheme if for all k, \(( PK , SK )\) output by \(\textsf{KeyGen}(1^k)\), strings \(\texttt{msg}\), \((\mathtt {blinded\_msg},\texttt{inv})\) output by \(\textsf{Blind}( PK ,\texttt{msg})\), \(\mathtt {blinded\_sig}\) output by \(\textsf{BSig}( SK ,\mathtt {blinded\_msg})\), and \(\sigma \) output by \(\textsf{Finalize}( PK \), \(\texttt{inv},\mathtt {blinded\_sig})\), \(\textsf{Verify}( PK ,\texttt{msg},\sigma )\) accepts.
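
As a minimal illustration (names and the driver itself are ours, not part of the definition), the correctness experiment amounts to the following Python check over the five algorithms of Definition 1, supplied as callables:

```python
from typing import Any, Callable, Tuple

def check_correctness(keygen: Callable[[int], Tuple[Any, Any]],
                      blind: Callable[[Any, bytes], Tuple[bytes, bytes]],
                      bsig: Callable[[Any, bytes], bytes],
                      finalize: Callable[[Any, bytes, bytes], bytes],
                      verify: Callable[[Any, bytes, bytes], bool],
                      k: int, msg: bytes) -> bool:
    """One honest run of the two-move protocol, followed by verification."""
    pk, sk = keygen(k)
    blinded_msg, inv = blind(pk, msg)       # recipient -> signer
    blinded_sig = bsig(sk, blinded_msg)     # signer -> recipient
    sigma = finalize(pk, inv, blinded_sig)  # recipient derives the signature
    return verify(pk, msg, sigma)           # must accept for all honest runs
```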

2.1 Strong One-More Unforgeability

As discussed above, a blind signature scheme must satisfy one-more unforgeability: an adversarial user who obtained \(\ell \) signatures from the signer cannot produce \(\ell +1\) distinct message-signature pairs. Since we are limiting our attention to two-move blind signatures, the security experiment that captures it can allow the adversary oracle access to the algorithm \(\textsf{BSig}( SK ,\cdot )\). More formally:

Definition 3

(Strong one-more-unforgeability). Let \(\mathcal {S} = (\textsf{KeyGen}, \textsf{Blind}\),

\(\textsf{BSig}, \textsf{Finalize}, \textsf{Verify})\) be a set of polynomial-time algorithms that satisfy the input-output specification for a two-move blind signature scheme (Definition 1). For an oracle Turing machine \(\mathcal {A}\), the success probability \(p^{\mathcal {S}}_\mathcal {A}(k)\) of \(\mathcal {A}\) in breaking the strong one-more unforgeability of \(\mathcal {S}\) is the probability that \(\mathcal {A}\) is successful in the following experiment parameterized by k:

  • Experiment set-up The key pair is generated: \(( PK , SK )\leftarrow \textsf{KeyGen}(1^k)\).

  • Adversary’s execution The adversary \(\mathcal {A}\) is given oracle access to \(\textsf{BSig}( SK ,\cdot )\) and is run on input \( PK \); \(\mathcal {A}^{\textsf{BSig}( SK ,\cdot )}( PK )\) terminates with a set of message-signature pairs on its output tape: \(((\texttt{msg}_1,\sigma _1),\ldots ,(\texttt{msg}_n,\sigma _n))\), and a set of query-response pairs on its query tape:

    $$((\mathtt {blinded\_msg}_1,\mathtt {blinded\_sig}_1),\ldots ,(\mathtt {blinded\_msg}_\ell ,\mathtt {blinded\_sig}_\ell )).$$
  • The success criterion The number of distinct message-signature pairs \((\texttt{msg}_i\), \(\sigma _i)\) such that \(\textsf{Verify}( PK ,\texttt{msg}_i,\sigma _i)=1\) is at least \(\ell +1\), i.e. \(\mathcal {A}\) outputs more distinct signatures than the number of queries it made to \(\textsf{BSig}\).

\(\mathcal {S}\) satisfies the strong one-more-unforgeability property if for any polynomial-time adversary \(\mathcal {A}\), the value \(p^{\mathcal {S}}_\mathcal {A}(k)\) is negligible.
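
A sketch of this experiment in Python (the scheme's algorithms and the adversary are supplied as callables; the oracle wrapper simply counts queries; all names are ours):

```python
def one_more_unforgeability_experiment(keygen, bsig, verify, adversary, k):
    """Strong one-more unforgeability game: the adversary wins if it outputs
    more distinct valid message-signature pairs than it made BSig queries."""
    pk, sk = keygen(k)
    queries = []

    def bsig_oracle(blinded_msg):
        queries.append(blinded_msg)         # record the query (ell = len(queries))
        return bsig(sk, blinded_msg)

    pairs = adversary(pk, bsig_oracle)      # list of (msg, sigma) pairs
    valid = {(msg, sig) for (msg, sig) in pairs if verify(pk, msg, sig)}
    return len(valid) > len(queries)        # at least ell + 1 distinct valid pairs
```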

The history of this definition. Chaum’s original blind signatures papers [13, 14] did not contain a formal definition; in fact, they preceded the formal definition of security for a digital signature scheme.

The regular definition of unforgeability for digital signature schemes [22] does not apply to blind signatures. In the regular definition, the adversary wins the unforgeability game if it produces a signature on a message that the challenger never signed. However, the challenger in the blind signature game has no way of knowing which messages it has signed — that’s the whole point of blindness, and ideally, we want it to hold unconditionally.

Thus, Pointcheval and Stern [27, 28] came up with the notion of one-more unforgeability in which the adversary is considered successful if it outputs more distinct signed messages than the number of blind signing sessions it participated in. Pointcheval and Stern considered a more general structure of a blind signing protocol, not just the message-response exchange à la our \(\textsf{Blind}\), \(\textsf{BSig}\), \(\textsf{Finalize}\) structure, and thus the issue of self-composition (i.e. what happens if the messages from the signer are adversarially interleaved with those of the adversarial users) needed to be carefully defined in their work. But, as Bellare et al. observed [6], for a protocol that has this simple two-move (i.e. message-response) structure, self-composition comes for free, and so the one-more-unforgeability game can be formalized in relatively simple terms.

A stronger definition of unforgeability for blind signatures was given by Schröder and Unruh [33]. They consider the case when the adversary observes the inputs and outputs of honest users who engage in \(\ell \) blind signing protocols to obtain signatures on fewer than \(\ell \) distinct messages (i.e. some message is getting signed more than once). The adversary should not be able to get a signature on an additional message by directing honest users to get more than one signature on the same message. Schröder and Unruh showed that Pointcheval and Stern’s one-more-unforgeability definition is not sufficient to prevent the adversary from taking advantage of honest users this way; but strong one-more unforgeability is. Following their work, strong one-more unforgeability is the standard notion of unforgeability for blind signature schemes.

Our formulation of strong one-more unforgeability in Definition 3 uses Definition 6.1 of Bellare et al. [6], which is their definition of one-more unforgeability, as a starting point. Their formulation is tailored specifically to one-more unforgeability of the blind RSA-FDH, while ours generally applies to any two-move protocol consisting of \(\textsf{Blind}\), \(\textsf{BSig}\), and \(\textsf{Finalize}\). We also modified the success criterion to correspond to strong one-more unforgeability.

One might wonder why the security game is for only one signer. Indeed, we could extend the game to require that the adversary specify a number of signers and interact with each signer before outputting a set of message-signature pairs. The adversary would be deemed successful if, for one of the signers, the number of valid message-signature pairs from this signer produced by the adversary was greater than the number of the adversary’s queries to this signer. It is easy to see that extending the security game to such a multi-signer scenario would not make the definition stronger: a scheme that satisfies one-more unforgeability with one signer will also satisfy it with multiple (say, n) signers. The reduction would randomly pick one of the signers and would set up the game so that it knows the secret key of all but the selected signer; the selected signer is the one from the one-more-unforgeability challenger with one signer. If the adversary succeeds and the reduction guessed the signer correctly, then the reduction will succeed as well; since the guess is correct with probability 1/n, the two definitions are equivalent up to a factor of n loss in the reduction’s success probability. Although not addressed explicitly in the literature cited above, this is well-understood in the context of regular digital signatures [20] and thus it is the single-signer definitions that are standard in the blind signatures literature.

2.2 Blindness

Finally, a blind signature scheme must satisfy blindness, that is, it should be impossible to determine which query to the (adversarial) signer resulted in the (honest) signature recipient deriving a particular message-signature pair. For this security game, the adversary picks the public key adversarially; it also picks two messages whose signatures the challenger will try to obtain. The challenger will try to obtain signatures on these messages in random order selected by picking a random bit b; the adversary’s goal is to tell in what order. The adversary gets to see the resulting signatures before producing an output.

A trivial strategy for the adversary would be to issue a valid signature in response to one of the queries but not the other. In order to rule out this strategy, the challenger allows the adversary to see the resulting signatures only if both of them verify. If one (or both) of the signatures does not verify, the adversary will have to guess the bit b based on its view of the interaction with the user in the blind signing protocol.

The formal definition below applies only to two-move blind signature schemes, but it can be generalized to any protocol structure.

Definition 4

(Blindness). Let \(\mathcal {S} = (\textsf{KeyGen}, \textsf{Blind}, \textsf{BSig}, \textsf{Finalize}, \textsf{Verify})\) be a set of polynomial-time algorithms that satisfy the input-output specification for a two-move blind signature scheme (Definition 1). For an interactive algorithm \(\mathcal {A}\), let \(q^{\mathcal {S}}_\mathcal {A}(k,b)\) be the probability that \(\mathcal {A}\) outputs 0 in the following experiment parameterized by k and the bit b:

  • \(\mathcal {A}\) is invoked \(\mathcal {A}(1^k)\) selects a public key \( PK \) (whose length is appropriate for the security parameter k) and two messages \(\texttt{msg}_0\) and \(\texttt{msg}_1\).

  • \(\mathcal {A}\) acts as the blind signer For \(i \in \{0,1\}\), the challenger computes the values \((\mathtt {blinded\_msg}_i,\texttt{inv}_i)\leftarrow \textsf{Blind}( PK ,\texttt{msg}_i)\) and sends \((\mathtt {blinded\_msg}_b,\) \(\mathtt {blinded\_msg}_{1-b})\) to \(\mathcal {A}\), receiving \((\mathtt {blinded\_sig}_b,\) \(\mathtt {blinded\_sig}_{1-b})\) in response.

  • \(\mathcal {A}\) receives the signatures For \(i \in \{0,1\}\), the challenger computes

    $$\sigma _i = \textsf{Finalize}( PK ,\texttt{inv}_i,\mathtt {blinded\_sig}_i)$$

    If \(\textsf{Verify}( PK ,\texttt{msg}_0,\sigma _0) = \textsf{Verify}( PK ,\texttt{msg}_1,\sigma _1) = 1\), it sends \((\sigma _0,\sigma _1)\) to \(\mathcal {A}\); else it sends \(\bot \) to \(\mathcal {A}\).

  • \(\mathcal {A}\)’s output \(\mathcal {A}\) outputs some value \(\texttt {output}\).

\(\mathcal {A}\)’s advantage \(\textbf{Adv}_\mathcal {A}^{\mathcal {S}}(k)\) in breaking the blindness of \(\mathcal {S}\) is defined as \(\textbf{Adv}_\mathcal {A}^{\mathcal {S}}(k) \mathrel {\mathop :}= |q^{\mathcal {S}}_\mathcal {A}(k,0)-q^{\mathcal {S}}_\mathcal {A}(k,1)|\). \(\mathcal {S}\) satisfies blindness if for any probabilistic polynomial-time \(\mathcal {A}\), \(\textbf{Adv}_\mathcal {A}^{\mathcal {S}}(k)\) is negligible.
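
A sketch of one run of this game, parameterized by the bit b; splitting the adversary into three callables, one per phase, is our modelling choice and not part of the definition.

```python
def blindness_experiment(b, blind, finalize, verify,
                         adversary_setup, adversary_sign, adversary_output):
    """One run of the blindness game of Definition 4 for a fixed bit b."""
    pk, msgs = adversary_setup()                     # adversary picks PK and (msg_0, msg_1)
    blinded, inv = zip(*(blind(pk, msgs[i]) for i in (0, 1)))
    # the challenger queries the adversarial signer in the order determined by b
    sig_b, sig_not_b = adversary_sign(blinded[b], blinded[1 - b])
    blinded_sig = [None, None]
    blinded_sig[b], blinded_sig[1 - b] = sig_b, sig_not_b
    sigma = [finalize(pk, inv[i], blinded_sig[i]) for i in (0, 1)]
    if verify(pk, msgs[0], sigma[0]) and verify(pk, msgs[1], sigma[1]):
        return adversary_output((sigma[0], sigma[1]))
    return adversary_output(None)                    # stands in for sending bottom
```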

The history of this definition. The first formalization of the blindness property of a digital signature scheme was given by Juels, Luby and Ostrovsky [25]; in this initial formulation, the public key for the scheme was generated honestly. Abdalla, Namprempre and Neven [1] improved the definition by considering a signer who is already adversarial at key generation time; they also gave a more careful treatment of the compositional issues. The definition given above corresponds to the Abdalla et al. version of the blindness definition as it applies to the case of a two-move signing protocol. It is considered standard in the literature.

Again, one might wonder why the number of messages in the security game is limited to just two, \(\texttt{msg}_0\) and \(\texttt{msg}_1\); and why the user just interacts with the signer \(\mathcal {A}\) once. It is relatively straightforward to show that extending the definition to allow more than two messages or to give the signer more chances to interact with the challenger will not strengthen the definition: a reduction playing middleman between the multi-message or multi-interaction adversary and the two-message single interaction challenger will inherit a non-negligible fraction of the adversary’s advantage.

2.3 A New Definition: Blind Tokens

In certain applications, the messages being signed are chosen at random from some message space \(\mathcal {M}\). If all goes well during the signing protocol, the user gets a unique authenticated token, i.e. a signature on this random message. This token should be blind, i.e. unlinkable to the specific interaction with the signer in which it was obtained. If for some reason the signing protocol fails to return a valid signature on this message, the message may be discarded.

Let us formalize the blindness requirement of such applications by introducing a new cryptographic primitive: a blind token scheme. A blind token scheme has the same input-output specification as a blind signature scheme, and must also be strongly one-more unforgeable; however, the notion of blindness it needs to satisfy is somewhat weaker. Unlike in the blind signature blindness experiment, here the two messages \(\texttt{msg}_0\) and \(\texttt{msg}_1\) are picked from the same distribution \(\mathcal {M}\). The adversary has some influence on how they are picked: \(\mathcal {M}\) takes as input the adversary’s public key \( PK \) as well as some auxiliary input \( aux \). In the full version of this paper [26], we also give a version of this definition that corresponds to one-more unforgeability rather than strong one-more unforgeability.

Definition 5

(Strongly unforgeable blind token scheme). Let \(\mathcal {S} = (\textsf{KeyGen},\) \(\textsf{Blind}, \textsf{BSig}, \textsf{Finalize},\) \(\textsf{Verify})\) be a set of polynomial-time algorithms that satisfy the input-output specification for a two-move blind signature scheme (Definition 1) and the strong one-more unforgeability property (Definition 3). Let \(\mathcal {M}\) be a message sampling algorithm that, on input the security parameter \(1^k\), a public key \( PK \), and auxiliary input \( aux \), outputs a string \(\texttt{msg}\).

For an interactive algorithm \(\mathcal {A}\) and an efficient message sampling algorithm \(\mathcal {M}\), let \(q^{\mathcal {S,M}}_\mathcal {A}(k,b)\) be the probability that \(\mathcal {A}\) outputs 0 in the following experiment parameterized by the security parameter k and the bit b:

  • \(\mathcal {A}\) is invoked \(\mathcal {A}(1^k)\) selects a public key \( PK \) (whose length is appropriate for the security parameter k), and auxiliary input \( aux \) for the message sampling algorithm.

  • \(\mathcal {A}\) acts as the blind signer For \(i \in \{0,1\}\), let \(\texttt{msg}_i \leftarrow \mathcal {M}(1^k, PK , aux )\) be messages randomly selected by the challenger, who then proceeds to compute the values \((\mathtt {blinded\_msg}_i,\texttt{inv}_i)\leftarrow \textsf{Blind}( PK ,\texttt{msg}_i)\) and send \((\mathtt {blinded\_msg}_b,\) \(\mathtt {blinded\_msg}_{1-b})\) to \(\mathcal {A}\), receiving \((\mathtt {blinded\_sig}_b,\) \(\mathtt {blinded\_sig}_{1-b})\) in response.

  • \(\mathcal {A}\) receives the signatures For \(i \in \{0,1\}\), the challenger computes

    $$\sigma _i = \textsf{Finalize}( PK ,\texttt{inv}_i,\mathtt {blinded\_sig}_i)$$

    If \(\textsf{Verify}( PK ,\texttt{msg}_0,\sigma _0) = \textsf{Verify}( PK ,\texttt{msg}_1,\sigma _1) = 1\), it sends \((\texttt{msg}_0,\sigma _0,\texttt{msg}_1,\sigma _1)\) to \(\mathcal {A}\); else it sends \(\bot \) to \(\mathcal {A}\).

  • \(\mathcal {A}\)’s output \(\mathcal {A}\) outputs some value \(\texttt {output}\).

\(\mathcal {A}\)’s advantage \(\textbf{Adv}_\mathcal {A}^{\mathcal {S,M}}(k)\) is defined as \(\textbf{Adv}_\mathcal {A}^{\mathcal {S,M}}(k) \mathrel {\mathop :}= |q^{\mathcal {S,M}}_\mathcal {A}(k,0)-q^{\mathcal {S,M}}_\mathcal {A}(k,1)|\). \(\mathcal {S}\) is a strongly unforgeable blind token scheme for message space \(\mathcal {M}\) if for any probabilistic polynomial-time \(\mathcal {A}\), \(\textbf{Adv}_\mathcal {A}^{\mathcal {S,M}}(k)\) is negligible.

The motivation for this definition. This definition is new; generally, when analyzing proposed standards, introducing new notions of security is a bad idea. An algorithm adopted for practical use should satisfy a notion of security that is well-understood and established. Unfortunately, as we will see in Sect. 4.4, at least one scheme that is already used in practice does not satisfy the established definition of a blind signature scheme; however, we show that it satisfies Definition 5, and therefore can still be used securely in some limited applications.

In the ecash application as originally envisioned by Chaum, the message \(\texttt{msg}\) is simply a string that is sampled uniformly at random; it should be long enough that it is unlikely that the same string can be sampled twice. Once the user obtains the signature \(\sigma \) on \(\texttt{msg}\), the pair \((\texttt{msg},\sigma )\) can be viewed as an e-coin. \(\texttt{msg}\) is the coin’s serial number, while \(\sigma \) can be thought of as its proof of validity. However, if the user fails to obtain \(\sigma \) for this \(\texttt{msg}\), then \(\texttt{msg}\) has no value and can be discarded. The reason that blind tokens give users of this system the desired privacy is that each user draws the serial numbers for her coins from exactly the same distribution as all the other users.

3 The RSA-BSSA Scheme

Let us review the blind signature scheme from the RSA blind signature scheme with appendix (RSA-BSSA) proposal by Denis, Jacobs and Wood [16, 17].

High-level description of the basic scheme. In the RSA-PSS signature scheme [8, 9, 29, 30], the signature on a message M is the RSA inverse of a special encoding (called the PSS encoding) m of M. At a high level, the basic version of RSA-BSSA is reminiscent of Chaum’s original blind signature scheme: it is the blind version of the RSA-PSS signature scheme. Following RSA-PSS, the key generation algorithm generates an RSA key pair \( PK =(N,e)\), \( SK = d\), where \(ed \equiv 1 \bmod \varphi (N)\). Following Chaum, in order to obtain a blind signature on a message M, the user first generates a PSS encoding m of M, then blinds it using a random \(r \leftarrow \mathbb {Z}_N^*\) obtaining \(z = mr^e \bmod N\), which is (hopefully) an element of \(\mathbb {Z}_N^*\) that is distributed independently of M. Then he gets from the signer the blinded signature \(y=z^d \bmod N\), and unblinds it to obtain and output \(s = yr^{-1}\bmod N\). To verify a signature s on a message M, follow the same algorithm as RSA-PSS verification: check that the PSS decoding of \(m = s^e \bmod N\) is the message M. Let us fill in the missing details.

Hash functions. For the PSS encoding, the scheme will use two cryptographic hash functions \(\textsf{Hash}\) and \(\textsf{MGF}\) the same way that PSS does. Both \(\textsf{Hash}\) and \(\textsf{MGF}\) take as input a string of bytes S and an integer \(\ell \), and output a string of \(\ell \) bytes. In the security analysis, both will be treated as random oracles. Even though their input-output specifications and security requirements match, it may be helpful to have functions with different implementations because, as their names suggest, the function \(\textsf{Hash}\) will potentially take a long string S and output a shorter string; while \(\textsf{MGF}\) (which stands for “mask generation function”) will take as input a short “seed” string and output a longer one.

Other subroutines. Since we are analyzing not just an algorithm but a proposed standard, it is important to note that any software program implementing this standard will have to recognize two distinct types: integers (on which integer operations are performed) and strings of bytes (that lend themselves to string operations, such as concatenation and exclusive-or). \(\textsf {I2OSP}\) is a procedure that converts an integer into an octet string (an octet is just the IETF terminology for the eight-bit byte). On input an integer and the desired length \(\ell \), it outputs the binary representation of the integer using \(\ell \) octets if \(\ell \) is sufficiently large, or fails otherwise. \(\textsf {OS2IP}\) reverses this process: given a string, it interprets it as the binary representation of an integer and outputs that integer.
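
In Python, these two conversions correspond to the built-in big-endian integer/byte-string conversions; the following sketch (function names are ours) mirrors the behavior just described.

```python
def i2osp(x: int, length: int) -> bytes:
    """Integer-to-Octet-String: big-endian representation using exactly `length` octets."""
    if x < 0 or x >= 256 ** length:
        raise ValueError("integer too large for the requested length")
    return x.to_bytes(length, "big")

def os2ip(octets: bytes) -> int:
    """Octet-String-to-Integer: interpret the octets as a big-endian integer."""
    return int.from_bytes(octets, "big")
```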

Parameters. The scheme is parameterized by k, which is the bit length of the RSA modulus (strictly speaking, there are two parameters: \( kLen \) and \( kBits \), representing its length in bytes and in bits, respectively; but for the purposes of the analysis the bit length is sufficient). The value \( emLen = \lceil (k-1)/8 \rceil \) denotes the number of octets needed to represent a PSS encoding; i.e., a PSS encoding will always take up exactly \(k-1\) bits.

As in PSS, the choice of the functions \(\textsf{Hash}\) and \(\textsf{MGF}\) and the parameters \( hLen \) and \( sLen \) are additional design choices (parameters, if you will) that define an instantiation of the scheme. The value \( hLen \) denotes the length in octets of the output of the hash function \(\textsf{Hash}\) that’s used in the scheme. It is important that \( hLen \) be set up in such a way that, in the random oracle model, the probability that two distinct inputs to \(\textsf{Hash}(\cdot , hLen )\) yield the same output (i.e. collide) be minuscule; an adversary whose running time is t can generate at most t such inputs, so by the birthday bound the collision probability is roughly \(t^2/2^{8 hLen }\); thus \(2^{4 hLen }\) needs to be a generous upper bound on t. The value \( sLen \) denotes the length (in octets) of the salt of the PSS encoding.

Our security analysis requires that \( emLen \ge \max (2 hLen , hLen + sLen ) + 2\).

PSS encoding and decoding procedures. Recall that, in RSA-PSS, the signing algorithm is broken down into two steps. The first step does not involve the secret key: it simply encodes the input message in a special way. The second step uses the secret key in order to compute the signature corresponding to the encoding obtained in step one. Analogously, signature verification consists of two steps as well: the first step uses the public key in order to compute what may turn out to be an encoding of the message; the second step verifies that the string obtained in step one is indeed a valid encoding of the message.

When describing RSA-BSSA below, we invoke the encoding and decoding procedures from the IETF standard [30]: \(\textsf{PSSEncode}(\texttt{msg},\ell )\) is the function that, on input a message \(\texttt{msg}\) and an integer \(\ell \), produces a string \(\texttt{EM}\) (encoded message) of \(\lceil \ell /8 \rceil \) octets whose \(\ell \) rightmost bits constitute a PSS encoding of \(\texttt{msg}\). \(\textsf{PSSVerify}(\texttt{msg},\texttt{EM},\ell )\) verifies that \(\texttt{EM}\) is consistent with the output of \(\textsf{PSSEncode}(\texttt{msg},\ell )\). For an RSA modulus of bit length k, the PSS scheme will use \(\ell =k-1\), so \(\texttt{EM}\) will be of length \( emLen = \lceil (k-1)/8 \rceil \).

Specifically (but briefly), \(\textsf{PSSEncode}(\texttt{msg},\ell )\) works as follows: first, hash \(\texttt{msg}\) to obtain \(\texttt{mHash}= \textsf{Hash}(\texttt{msg}, hLen )\), and pick a random string \(\texttt{salt}\) of length \( sLen \) bytes (octets). Compute \(H = \textsf{Hash}(0^{64}\circ \texttt{mHash}\circ \texttt{salt}, hLen )\), then compute the mask \(\texttt{dbMask}= \textsf{MGF}(H, emLen - hLen - 1)\) and use it to mask the salt: \(\texttt{maskedDB}= \texttt{DB}\oplus \texttt{dbMask}\), where \(\texttt{DB}\) is \(\texttt{salt}\) padded with a pre-defined string (to make sure that the resulting string is of the correct length). Then output the encoded message \(\texttt{EM}= \texttt{maskedDB}\circ H \circ 0xBC\).

In turn, \(\textsf{PSSVerify}(\texttt{msg},\texttt{EM},\ell )\) begins by parsing \(\texttt{EM}= \texttt{maskedDB}\circ H \circ 0xBC\). Then it computes \(\texttt{dbMask}\) as above to unmask \(\texttt{salt}\) from \(\texttt{maskedDB}\) (it rejects if the padding was incorrect) and verifies that \(H = \textsf{Hash}(0^{64}\circ \texttt{mHash}\circ \texttt{salt}, hLen )\) for \(\texttt{mHash}= \textsf{Hash}(\texttt{msg}, hLen )\). See the full version of this paper for details about the history of PSS. In Appendix B we provide a more detailed description of the verification algorithm for PSS.
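
The following Python sketch mirrors this structure. It is a simplification rather than the octet-exact encoding from the standard: the hash and salt lengths are fixed illustrative choices, and the bit-level step that clears the leftmost bits of \(\texttt{maskedDB}\) so that \(\texttt{EM}\) fits in exactly \(k-1\) bits is omitted.

```python
import hashlib
import secrets

HLEN = 32        # octets of Hash output (SHA-256 here; an illustrative choice)
SLEN = 32        # octets of salt (illustrative)

def hash_fn(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mgf(seed: bytes, out_len: int) -> bytes:
    """MGF1-style mask generation built from the hash (simplified)."""
    out, counter = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def pss_encode(msg: bytes, em_len: int) -> bytes:
    """Simplified PSSEncode, same structure as above (requires em_len >= HLEN + SLEN + 2)."""
    m_hash = hash_fn(msg)
    salt = secrets.token_bytes(SLEN)
    h = hash_fn(b"\x00" * 8 + m_hash + salt)            # 0^64 = eight zero octets
    db = b"\x00" * (em_len - HLEN - SLEN - 2) + b"\x01" + salt
    masked_db = bytes(a ^ b for a, b in zip(db, mgf(h, em_len - HLEN - 1)))
    return masked_db + h + b"\xbc"

def pss_verify(msg: bytes, em: bytes, em_len: int) -> bool:
    """Check that em is consistent with pss_encode(msg, em_len)."""
    if len(em) != em_len or em[-1] != 0xBC:
        return False
    masked_db, h = em[:em_len - HLEN - 1], em[em_len - HLEN - 1:-1]
    db = bytes(a ^ b for a, b in zip(masked_db, mgf(h, em_len - HLEN - 1)))
    pad, sep, salt = db[:-SLEN - 1], db[-SLEN - 1], db[-SLEN:]
    if pad != b"\x00" * len(pad) or sep != 0x01:
        return False
    return h == hash_fn(b"\x00" * 8 + hash_fn(msg) + salt)
```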

3.1 The Basic Scheme

We begin by describing the basic scheme in which the user obtains from the signer an RSA-PSS signature on the message \(\texttt{msg}\). As usual, the scheme consists of a key generation algorithm, a protocol for obtaining signatures, and a signature verification algorithm.

  • Key generation The key generation algorithm is the standard RSA key generation: The public key is \( PK =(N,e)\), where N is an RSA modulus of length k (k is given as input to the key generation algorithm, in unary, i.e. \(1^k\)), and e is relatively prime to \(\varphi (N)\). The secret key is \( SK =(N,d)\) such that \(ed \equiv 1 \bmod \varphi (N)\). The exact specification is as described in the PKCS#1 standard.

  • Blind signing protocol The protocol consists of three algorithms: \(\textsf{Blind}\), \(\textsf{BSig}\), and \(\textsf{Finalize}\). On input a message \(\texttt{msg}\) that the client wishes to get a signature for, the client runs \(\textsf{Blind}( PK ,\texttt{msg})\) and obtains \((\mathtt {blinded\_msg},\texttt{inv})\). The server runs the algorithm \(\textsf{BSig}( SK ,\mathtt {blinded\_msg})\) and outputs a blinded signature \(\mathtt {blinded\_sig}\). The user runs \(\textsf{Finalize}( PK ,\texttt{msg},\mathtt {blinded\_sig},\texttt{inv})\) to derive the signature \(\sigma \). The three algorithms are as follows:

    • \(\textsf{Blind}( PK ,\texttt{msg})\) Compute a PSS encoding of \(\texttt{msg}\): \(\texttt{EM}= \textsf{PSSEncode}(\texttt{msg},k-1)\), and let \(m = \textsf {OS2IP}(\texttt{EM})\) be the corresponding integer. Next, sample \(r \leftarrow \mathbb {Z}^*_N\), compute \(z = mr^e \bmod N\), and make sure that \(z\in \mathbb {Z}^*_N\). Compute \(r_{ inv }=r^{-1} \bmod N\), and output \((z,r_{ inv })\) as octet strings, i.e., output \(\mathtt {blinded\_msg}= \textsf {I2OSP}(z, kLen )\), \(\texttt{inv}= \textsf {I2OSP}(r_{ inv }, kLen )\).

    • \(\textsf{BSig}( SK ,\mathtt {blinded\_msg})\) First, check that the string \(\mathtt {blinded\_msg}\) is of bit length k, and reject if it is not. Next, convert it into an integer \(m = \textsf {OS2IP}(\mathtt {blinded\_msg})\). Output the binary representation of \(s = m^d \bmod N\), i.e. \(\mathtt {blinded\_sig}= \textsf {I2OSP}(s, kLen )\).

    • \(\textsf{Finalize}( PK ,\texttt{msg},\mathtt {blinded\_sig},\texttt{inv})\) Convert \(\mathtt {blinded\_sig}\) and \(\texttt{inv}\) into integers using the OS2IP procedure: \(z = \textsf {OS2IP}(\mathtt {blinded\_sig})\), \(r_{ inv } = \textsf{OS2IP}(\texttt{inv})\); compute \(s = zr_{ inv } \bmod N\). The signature is the binary representation of s, i.e. \(\sigma = \textsf {I2OSP}(s, kLen )\). Finally, if \(\textsf{PSSVerify}( PK ,\texttt{msg},\sigma )\) accepts, then output \(\sigma \); else fail.

  • Verification The verification algorithm calls \(\textsf{PSSVerify}( PK ,\texttt{msg},\sigma )\). (Described in more detail in Appendix B.)
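
A compact sketch of the three signing-protocol algorithms above, assuming the \(\textsf {I2OSP}\)/\(\textsf {OS2IP}\) and PSS sketches from earlier in this section are in scope and that k is a multiple of 8; octet-level details and error handling follow the spirit, not the letter, of the proposal.

```python
import secrets
from math import gcd

# Assumes i2osp/os2ip and pss_encode/pss_verify from the sketches above are in
# scope; k (the bit length of N) is assumed to be a multiple of 8.

def blind(pk, msg: bytes):
    """Blind: PSS-encode msg, then blind the encoding with a random r in Z_N^*."""
    n, e = pk
    k = n.bit_length()
    em_len, k_len = (k + 6) // 8, (k + 7) // 8          # ceil((k-1)/8), ceil(k/8)
    m = os2ip(pss_encode(msg, em_len))
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    z = (m * pow(r, e, n)) % n
    if gcd(z, n) != 1:
        raise ValueError("blinded message is not in Z_N^*")
    return i2osp(z, k_len), i2osp(pow(r, -1, n), k_len)  # (blinded_msg, inv)

def bsig(sk, blinded_msg: bytes) -> bytes:
    """BSig: raw RSA inversion of the blinded message with the secret exponent."""
    n, d = sk
    k_len = (n.bit_length() + 7) // 8
    if len(blinded_msg) != k_len:
        raise ValueError("blinded message has the wrong length")
    return i2osp(pow(os2ip(blinded_msg), d, n), k_len)   # blinded_sig

def finalize(pk, msg: bytes, blinded_sig: bytes, inv: bytes) -> bytes:
    """Finalize: unblind, then check the result against the PSS verifier."""
    n, e = pk
    k = n.bit_length()
    em_len, k_len = (k + 6) // 8, (k + 7) // 8
    s = (os2ip(blinded_sig) * os2ip(inv)) % n
    if not pss_verify(msg, i2osp(pow(s, e, n), em_len), em_len):
        raise ValueError("derived signature does not verify")
    return i2osp(s, k_len)                               # sigma
```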

The following theorem follows easily by inspection:

Theorem 1

The RSA-BSSA scheme is correct.

As we will show in Sect. 5, it also satisfies one-more unforgeability under the one-more-RSA assumption [6]. However, as we will explain in more detail in Sect. 4, it is not clear whether or not this construction satisfies blindness.

3.2 RSA-BSSA, Version A

The basic construction described above (Sect. 3.1) results in a perfectly blind signing protocol whenever \( PK =(N,e)\) where the public exponent e is relatively prime to \(\varphi (N)\): in that case, for any \(m \in \mathbb {Z}_N^*\), selecting \(r \leftarrow \mathbb {Z}_N^*\) uniformly at random and outputting \(z = mr^e\) ensures that z is a uniformly random element of \(\mathbb {Z}_N^*\). This is because, of course, if there exists d such that \(ed \equiv 1 \bmod \varphi (N)\), then for each \(z \in \mathbb {Z}_N^*\), there exists a unique \(r = (z/m)^d\) such that \(z=mr^e\).

Thus, in order to ensure blindness, it is sufficient to ensure that e is relatively prime to \(\varphi (N)\). Consider the following variant of RSA-BSSA, in which a public key contains a proof that e is relatively prime to \(\varphi (N)\) as described by Goldberg, Reyzin, Sagga and Baldimtsi [21].

It will require the additional parameter \(\kappa \), which is a statistical security parameter. Further, it will require a function \(R_k\) that, on input three integers, outputs a random integer \(0 \le a < 2^{k-1}\); such \(R_k\) can be constructed, for example, from \(\textsf{MGF}\). Let \(e'\) be a prime that is small enough that checking that \((N,e) \in L_{e'}\) can be done efficiently, where \(L_{e'} = \{(N,e)~\mid ~N, e>0 \text{ and no prime less than } e' \text{ divides } e\}\). In practice, e is often prime and small enough that setting \(e'=e\) works. We let \(e'\) be a system-wide parameter, so each procedure below receives it as input. Finally, let \(\ell = \lceil \kappa /\log _2(e')\rceil \).

  • Key generation On input the desired modulus length k and a statistical security parameter \(\kappa \), run the RSA key generation algorithm as in the basic protocol (Sect. 3.1) to obtain (Ne) and d. Next, compute a proof \(\pi \) that e is relatively prime to \(\varphi (N)\), as follows: for \(1 \le i \le \ell \), let \(a_i = R_k(N,e,i)\), compute \(b_i = {a_i}^d \bmod N\), and let \(\pi = b_1,\ldots ,b_\ell \). The public key is \( PK =(N,e,\pi )\), the secret key is \( SK =(N,d)\).

  • Blind signing protocol Before running the signing protocol, the user verifies that the public key \( PK = (N,e,\pi )\) is well-formed: let \(\pi = b_1,\ldots ,b_\ell \); for \(1 \le i \le \ell \), check that \(b_i^e = R_k(N,e,i) \bmod N\). If one of the checks fails, fail. Else, run the \(\textsf{Blind}\), \(\textsf{BSig}\), and \(\textsf{Finalize}\) algorithms as described in Sect. 3.1.

  • Verification As in Sect. 3.1, return \(\textsf{PSSVerify}( PK ,\texttt{msg},\sigma )\).
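
A sketch of the key augmentation and its verification; the function \(R_k\) is instantiated here with an ad-hoc hash-based expander of our choosing (purely illustrative), and the membership check \((N,e) \in L_{e'}\) is omitted.

```python
import hashlib
from math import ceil, log2

def r_k(n: int, e: int, i: int, k: int) -> int:
    """Hash-based stand-in for R_k: derive an integer 0 <= a < 2^(k-1) from (N, e, i)."""
    out, counter = b"", 0
    num_bytes = (k - 1 + 7) // 8
    while len(out) < num_bytes:
        data = b"|".join(str(v).encode() for v in (n, e, i, counter))
        out += hashlib.sha256(data).digest()
        counter += 1
    return int.from_bytes(out[:num_bytes], "big") % (1 << (k - 1))

def prove_e_coprime(n: int, e: int, d: int, k: int, kappa: int, e_prime: int):
    """Signer: pi = (b_1, ..., b_ell) with b_i = R_k(N, e, i)^d mod N."""
    ell = ceil(kappa / log2(e_prime))
    return [pow(r_k(n, e, i, k), d, n) for i in range(1, ell + 1)]

def verify_e_coprime(n: int, e: int, pi, k: int, kappa: int, e_prime: int) -> bool:
    """User: check that b_i^e = R_k(N, e, i) mod N for every i."""
    ell = ceil(kappa / log2(e_prime))
    if len(pi) != ell:
        return False
    return all(pow(b, e, n) == r_k(n, e, i, k) for i, b in enumerate(pi, start=1))
```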

3.3 RSA-BSSA, Version B

As we will see in Sect. 4.2, another way to ensure blindness is to modify the construction in such a way that the value \(\texttt{mHash}\) incorporated into the PSS encoding of the message to be signed reveals nothing about this message. This calls for a simple modification of the basic protocol: instead of invoking the signing protocol directly on the message \(\texttt{msg}\), the user invokes it on the message \(\texttt{msg}' = \texttt{msg}\circ \texttt{rand}\), where \(\texttt{rand}\) is a random value of \(\kappa \) bits for a security parameter \(\kappa \). More precisely:

  • Key generation Run the RSA key generation algorithm as in the basic protocol (Sect. 3.1) to obtain \( PK = (N,e)\) and \( SK = d\).

  • Blind signing protocol The user generates a random string \(\texttt{rand}\) of \(\kappa \) bits, and runs the signing protocol in Sect. 3.1 on input \(\texttt{msg}' = \texttt{msg}\circ \texttt{rand}\), and obtains from it the signature \(\sigma '\) on the message \(\texttt{msg}'\). Output the signature \(\sigma = (\sigma ',\texttt{rand})\).

  • Verification Following Sect. 3.1, on input \(\texttt{msg}\) and \(\sigma = (\sigma ',\texttt{rand})\), the verification algorithm makes sure that \(\texttt{rand}\) consists of \(\kappa \) bits, rejects if it does not, and then returns the output of \(\textsf{PSSVerify}( PK ,\texttt{msg}',\sigma ')\), where \(\texttt{msg}' = \texttt{msg}\circ \texttt{rand}\).

4 Blindness of RSA-BSSA

In the blindness experiment, the adversary picks the modulus N; thus we cannot assume that it is a proper RSA modulus. Therefore, in order to understand how much information such an adversary can learn in the blindness experiment, we must consider the structure of the group \(\mathbb {Z}_N^*\) for arbitrary N.

Lemma 1

Let \(N>1\) be any odd integer, and let \(N=\prod _{i=1}^\ell p_i^{\alpha _i}\) be its prime factorization. Then \(\mathbb {Z}_N^*\) is of size \(\varphi (N) = \prod _{i=1}^{\ell }\varphi (p_i^{\alpha _i}) = \prod _{i=1}^{\ell }p_i^{\alpha _i-1}(p_i-1)\) and is isomorphic to \(\mathbb {Z}_{\varphi (p_1^{\alpha _1})}\times \mathbb {Z}_{\varphi (p_2^{\alpha _2})} \times \ldots \times \mathbb {Z}_{\varphi (p_{\ell }^{\alpha _\ell })}\).

For the proof, we refer to Sect. 7.5 of Shoup [34]. The lemma implies that every element \(x\in \mathbb {Z}_N^*\) can be viewed as a vector \((x_1,\ldots ,x_\ell ) \in \mathbb {Z}_{p_1^{\alpha _1-1}(p_1-1)}\times \mathbb {Z}_{p_2^{\alpha _2-1}(p_2-1)} \times \ldots \times \mathbb {Z}_{p_\ell ^{\alpha _\ell -1}(p_\ell -1)}\), and vice versa.

Let \(\varPsi _N\) denote this isomorphism; when N is clear from the context, we will write it as \(\varPsi \). Moreover, there is a (not necessarily efficient) algorithm that computes \(\varPsi _N\), as follows: on input \(x\in \mathbb {Z}_N^*\), compute \(x_i = x \bmod p^{\alpha _i}_i\), and then find \(\chi _i \in \mathbb {Z}_{p_i^{\alpha _i-1}(p_i-1)}\) such that \(x_i = g_i^{\chi _i} \bmod p_i^{\alpha _i}\), where \(g_i\) is a generator of \(\mathbb {Z}^*_{p_i^{\alpha _i}}\) (it exists by Theorem 7.28 in Shoup [34]); the output is \(\varPsi _N(x) = (\chi _1,\ldots ,\chi _\ell )\).

We also refer the reader to Shoup [34] for the Chinese Remainder Theorem; below, by \( CRT (x_1,\ldots ,x_\ell )\) we denote the element \(x\in \mathbb {Z}_N\) such that \(x = x_i \bmod p_i^{\alpha _i}\), where \(N=\prod _{i=1}^\ell p_i^{\alpha _i}\) is the prime factorization of N.
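
For intuition, the following toy Python computation realizes \(\varPsi \) and \(\varPsi ^{-1}\) for \(N = 45 = 3^2 \cdot 5\) by brute-force discrete logarithms and exhaustive CRT search; this is feasible only because the modulus is tiny, and all names are ours.

```python
from math import gcd

# Toy illustration of the isomorphism Psi for N = 45 = 3^2 * 5.
P_PARTS = [(3, 2), (5, 1)]    # prime factorization of N
GENS = [2, 2]                 # 2 generates both Z*_9 and Z*_5
N = 45

def dlog(g: int, x: int, modulus: int) -> int:
    """Brute-force discrete logarithm; fine only for toy moduli."""
    acc, k = 1, 0
    while acc != x:
        acc = (acc * g) % modulus
        k += 1
    return k

def psi(x: int):
    """Psi: map x in Z*_N to its vector of exponents."""
    assert gcd(x, N) == 1
    return tuple(dlog(g, x % p ** a, p ** a) for (p, a), g in zip(P_PARTS, GENS))

def psi_inv(chis):
    """Psi^{-1}: recombine exponents via the Chinese Remainder Theorem (search version)."""
    targets = [pow(g, c, p ** a) for (p, a), g, c in zip(P_PARTS, GENS, chis)]
    for x in range(1, N):
        if all(x % (p ** a) == t for (p, a), t in zip(P_PARTS, targets)):
            return x

# every x in Z*_45 round-trips through Psi and back
assert all(psi_inv(psi(x)) == x for x in range(1, N) if gcd(x, N) == 1)
```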

Definition 6

(Roots and residues). Let N and e be any positive integers, and let \(m\in \mathbb {Z}_N^*\). Let the set \( Roots _{N,e}(m) = \{s\in \mathbb {Z}_N^*\mid s^e = m \}\). Let the set \( Residues _{N,e} = \{m\in \mathbb {Z}_N^*\mid Roots _{N,e}(m)\ne \emptyset \}\).

Lemmas 2, 3 and 4, and Corollaries 1 and 2 are well-known; for completeness, their proofs are included in the full version of this paper [26].

Lemma 2

Let \(N>1\) be any odd integer. If an integer e is relatively prime to \(\varphi (N)\), then the distribution \(D_0(N,e) = \{r \leftarrow \mathbb {Z}_N^*~:~r\}\) is identical to the distribution \(D_1(N,e) = \{r \leftarrow \mathbb {Z}_N^*~:~r^e\}\).

Lemma 3

Let \(p>2\) be a prime number, and let \(e\ge 2\) and \(\alpha \ge 1 \) be integers. Let \(g=\gcd (e,p^{\alpha -1}(p-1))\). Then for any \(m\in Residues _{p^\alpha ,e}\), \(| Roots _{p^\alpha ,e}(m)| =g\). I.e. either \(m\notin Residues _{p^\alpha ,e}\), or it has exactly g \(e^{ th }\) roots.

Corollary 1

Let \(p>2\) be a prime number, and let \(e>1\) and \(\alpha \ge 1 \) be integers. Let \(g=\gcd (e,p^{\alpha -1}(p-1))\), and let \(q = p^{\alpha -1}(p-1)/g\). Let \(m \in Residues _{p^\alpha ,e}\), and let \(s\in Roots _{p^\alpha ,e}(m)\). Then \( Roots _{p^\alpha ,e}(m) = \{s_k~\mid ~0\le k \le g-1, s_k = \varPsi ^{-1}(\sigma +kq)\}\), where \(\sigma = \varPsi (s)\).

Lemma 4

Let \(N>1\) be an odd integer, and let \(\prod _{i=1}^{\ell } p_i^{\alpha _i}\) be its prime factorization. Let \(e>1\) be an integer. Let \(g_i = \gcd (e,p_i^{\alpha _i-1}(p_i-1))\). Then for any \(m\in Residues _{N,e}\), \(| Roots _{N,e}(m)| =\prod _{i=1}^\ell g_i\).

Corollary 2

Let \(N>1\) be an odd integer, and let \(\prod _{i=1}^{\ell } p_i^{\alpha _i}\) be its prime factorization. Let \(e>1\) be an integer. Let \(g_i = \gcd (e,p_i^{\alpha _i-1}(p_i-1))\), and let \(q_i = p_i^{\alpha _i-1}(p_i-1)/g_i\). Let \(m\in Residues _{N,e}\), let s be its \(e^{ th }\) root, and let \(\varPsi (s) = (\sigma _1,\ldots ,\sigma _\ell )\). Then \( Roots _{N,e}(m) = \{ CRT (s_{1,k_1},\ldots ,s_{\ell ,k_\ell })~\mid ~ \forall 1\le i \le \ell , 0\le k_i \le g_i-1, s_{i,k_i} = \varPsi _{p_i^{\alpha _i}}^{-1}(\sigma _i+k_iq_i) \}\).
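
The counting statements above can be checked numerically for a toy modulus; the following sketch verifies that every \(e^{ th }\) residue modulo \(N = 63\) has exactly \(\prod _{i=1}^\ell g_i\) \(e^{ th }\) roots for \(e = 3\), as Lemma 4 predicts.

```python
from math import gcd

# Toy check of Lemma 4: N = 9 * 7 = 63, e = 3, prime factorization 3^2 * 7.
N, e = 63, 3
factors = [(3, 2), (7, 1)]
expected = 1
for p, a in factors:
    expected *= gcd(e, p ** (a - 1) * (p - 1))        # g_i = gcd(e, phi(p_i^alpha_i))

units = [x for x in range(1, N) if gcd(x, N) == 1]     # Z_N^*
residues = {pow(x, e, N) for x in units}               # Residues_{N,e}
for m in residues:
    roots = [s for s in units if pow(s, e, N) == m]    # Roots_{N,e}(m)
    assert len(roots) == expected                      # every residue has prod g_i roots
```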

Lemma 5

Let \(N>1\) be an odd integer, and \(e>1\) be an integer. Then r selected as follows is a uniformly random element of \(\mathbb {Z}_N^*\): first, select y uniformly at random from \( Residues _{N,e}\). Then, select r uniformly at random from \( Roots _{N,e}(y)\).

Proof

By Lemma 4, every element of \( Residues _{N,e}\) has the same number of roots, and so selecting a random element of \( Residues _{N,e}\) and then picking one of its roots at random is equivalent to picking a random element of \(\mathbb {Z}_N^*\).    \(\square \)

Lemma 6

Let \(N>1\) be an odd integer, and \(e>1\) be an integer. Let \(m \in Residues _{N,e}\). Let z be selected uniformly at random from \( Residues _{N,e}\); let \(y = z/m\). Then y is a uniformly random element of \( Residues _{N,e}\).

Proof

Let \(y\in Residues _{N,e}\). y is selected whenever the experiment chooses \(z = my\); this happens with probability \(1/| Residues _{N,e}|\).    \(\square \)

In our analysis below, it will be important that even if the adversary picks an \(e^{ th }\) root u of the value \(z = mr^e\) (recall that z is what the signature recipient sends to the signer in order to get the message signed), it still cannot alter the distribution of the resulting signature. We will see that the signature \(s = u/r\) is a member of \( Roots _{N,e}(m)\) that is independent of u as long as r had been picked uniformly at random. In other words, as long as r is picked uniformly at random, s is random as well, no matter what the adversary does. This is captured in the following lemma:

Lemma 7

Let \(N>1\) be an odd integer, and \(e>1\) be an integer. Then for all \(m,z \in Residues _{N,e}\), \(u \in Roots _{N,e}(z)\), the following outputs a uniformly random element of \( Roots _{N,e}(m)\): pick \(r \leftarrow Roots _{N,e}(z/m)\), output u/r.

Proof

Consider \(N=p^\alpha \) for some prime p; the general case follows via the Chinese Remainder Theorem. Let \(s_0\) be the smallest (in absolute value) element of \( Roots _{p^\alpha ,e}(m)\), and let \(r_0\) be the smallest element of \( Roots _{p^\alpha ,e}(z/m)\). Let \(\varPsi (s_0) = \sigma \), \(\varPsi (r_0) = \rho \), \(\varPsi (u) = \upsilon \), \(g=\gcd (e,p^{\alpha -1}(p-1))\), and \(q = p^{\alpha -1}(p-1)/g\).

By Corollary 1, \( Roots _{p^\alpha ,e}(m) = \{s_k~\mid ~0\le k \le g-1, s_k = \varPsi ^{-1}(\sigma +kq)\}\). Since \(u/r_0\) is an \(e^{ th }\) root of m, \(u/r_0 = \varPsi ^{-1}(\sigma +nq)\) for some \(0\le n < g\). Also by Corollary 1, \( Roots _{p^\alpha ,e}(z/m) = \{r_k~\mid ~0\le k \le g-1, r_k = \varPsi ^{-1}(\rho +kq)\}\). Selecting \(r_k\) uniformly at random corresponds to picking \(k \leftarrow \{0,\ldots , g-1\}\), and results in outputting \(u/r_k = \varPsi ^{-1}(\upsilon -(\rho +kq)) = \varPsi ^{-1}((\upsilon -\rho ) - kq) = \varPsi ^{-1}((\sigma +nq) - kq) = \varPsi ^{-1}(\sigma + (n-k)q)\). Since k is random, \(n-k \bmod g\) is also a random element of \(\{0,\ldots , g-1\}\), and therefore the output \(u/r_k = s_{(n-k)\bmod g}\) is a uniformly random element of \( Roots _{p^\alpha ,e}(m)\).    \(\square \)

4.1 Blindness of the Signing Protocol

Let us consider \(\mathcal {A}\)’s interaction with the blindness challenger, and then analyze what information \(\mathcal {A}\) learns as a result of this interaction. For simplicity, below we omit the integer-to-string conversions and, when clear from context that integers in question are elements of \(\mathbb {Z}_N^*\), we omit “\(\bmod N\).”

  • \(\mathcal {A}\) is invoked \(\mathcal {A}(1^k)\) selects a public key \( PK = (N,e)\) and two messages \(\texttt{msg}_0\) and \(\texttt{msg}_1\).

  • \(\mathcal {A}\) acts as the blind signer For \(j \in \{0,1\}\), the challenger computes \(\texttt{EM}_j = \textsf{PSSEncode}(\texttt{msg}_j,k-1)\); let \(m_j = \textsf {OS2IP}(\texttt{EM}_j)\) be the corresponding integer. Next, sample \(r_j \leftarrow \mathbb {Z}_N^*\), compute \(z_j = m_jr_j^e\). Compute \(\texttt{inv}_j=r_j^{-1}\). The challenger sends to \(\mathcal {A}\) the values \(z_b\) and \(z_{1-b}\).

  • \(\mathcal {A}\) receives the signatures Upon receipt of \(u_b\) and \(u_{1-b}\) from the signer, the challenger computes \(s_0 = u_0/r_0\) and \(s_1 = u_1/r_1\). If both signatures verify, i.e. \(s_0^e=m_0\) and \(s_1^e=m_1\), it sends \((s_0,s_1)\) to \(\mathcal {A}\); else it sends \(\bot \) to \(\mathcal {A}\).

  • \(\mathcal {A}\)’s output \(\mathcal {A}\) outputs some value \(\texttt {output}\).

Claim 1

If e is relatively prime to \(\varphi (N)\), then \(z_0\) and \(z_1\) (sent to \(\mathcal {A}\) while it is acting as the blind signer) are both random elements of \(\mathbb {Z}_N^*\) and are distributed independently of b, \(m_0\) and \(m_1\).

Proof

Follows immediately from Lemma 2.    \(\square \)

Claim 2

If e is relatively prime to \(\varphi (N)\), then \(\mathcal {A}\)’s view after receiving the signatures is independent of the bit b.

Proof

\(\mathcal {A}\) already knows, based on the values \(u_b\) and \(u_{1-b}\) it sent to the challenger in the previous step, whether it will receive the signatures or \(\bot \). If it receives the signatures, then for either value of \(b \in \{0,1\}\) there is a unique pair of blinding values \((r_0, r_1)\) consistent with \(\mathcal {A}\)’s view, and the two pairs were equally likely to have been chosen; see Lemma 2. If \(\mathcal {A}\) does not receive the signatures, then \(\mathcal {A}\) learns nothing.    \(\square \)

If e is not relatively prime to \(\varphi (N)\), then there are two cases, based on whether the signatures output by \(\textsf{Finalize}\) pass verification. The easy case is when the signatures output by \(\textsf{Finalize}\) do not both pass verification; then, the challenger sends \(\bot \) to the adversary and thus no additional information is revealed in this step. Let us show that:

Claim 3

If both signatures verify, \(s_0^e=m_0\) and \(s_1^e=m_1\), then \(\mathcal {A}\)’s view in the blindness experiment is independent of b.

Proof

Let us condition on the event that the signatures pass verification. In this case, the values \(m_0\), \(m_1\) computed by the challenger, as well as the value \(z_b\) and \(z_{1-b}\) the challenger sent to the signer must all be in the set \( Residues _{N,e}\). Let us consider a series of experiments.

Our first experiment is the case of running the blindness challenger with \(b=0\): The challenger begins by sampling \(m_0\) and \(m_1\) as PSS encodings of \(\texttt{msg}_0\) and \(\texttt{msg}_1\). Then, it samples \(r_0\leftarrow \mathbb {Z}_N^*\), \(r_1\leftarrow \mathbb {Z}_N^*\), computes \(z_0 = m_0r_0^e\) and \(z_1 = m_1r_1^e\), and sends \((z,z')=(z_0,z_1)\) to the adversary. The adversary responds with \((u_0,u_1)\), and the challenger computes the signatures \(s_0 = u_0/r_0\) and \(s_1 = u_1/r_1\).

By Lemma 5, instead of choosing \(r_0\) and \(r_1\) uniformly at random from \(\mathbb {Z}_N^*\) and then setting \(z_0 = r_0^em_0\) and \(z_1=r_1^em_1\), one could equivalently choose \(y_0\), \(y_1\) uniformly at random from \( Residues _{N,e}\), and then let \(z_0=y_0m_0\), \(z_1 = y_1m_1\), \(r_0 \leftarrow Roots _{N,e}(y_0)\), \(r_1 \leftarrow Roots _{N,e}(y_1)\); let us call the resulting experiment \(A_0\). By Lemma 6, this is equivalent to choosing \(z_0\) and \(z_1\) uniformly at random from \( Residues _{N,e}\), and letting \(r_0 \leftarrow Roots _{N,e}(z_0/m_0)\), \(r_1 \leftarrow Roots _{N,e}(z_1/m_1)\); let us call the resulting experiment \(B_0\). By Lemma 7, this is equivalent to picking \(z_0\) and \(z_1\) uniformly at random from \( Residues _{N,e}\) and sending the adversary the pair \((z,z')=(z_0,z_1)\), and upon receipt of \(u_0\) and \(u_1\) such that \(u_0^e = z_0\) and \(u_1^e = z_1\), outputting \(s_0 \leftarrow Roots _{N,e}(m_0)\), \(s_1 \leftarrow Roots _{N,e}(m_1)\); let us call the resulting experiment \(C_0\).

Let us obtain a new experiment, \(C_1\), by modifying \(C_0\): let \((z,z') = (z_1,z_0)\), while everything else stays the same. \(C_1\) gives the adversary an identical view to \(C_0\). Let \(B_1\) be the same as \(B_0\) except for \((z,z') = (z_1,z_0)\); by Lemma 7, the adversary’s view here is identical to \(C_1\). Let \(A_1\) be identical to \(A_0\) except \((z,z') = (z_1,z_0)\); by Lemma 6, it is identical to \(B_1\). Finally, by Lemma 5, \(A_1\) gives the adversary the same view as the challenger when \(b=1\).    \(\square \)

Rephrasing Claims 1, 2 and 3, we get the following two lemmas:

Lemma 8

Let \(E^\mathcal {A}_{ relprime }\) be the event that \(\mathcal {A}\) playing the blindness game with the challenger for the basic version of RSA-BSSA sets \( PK = (N,e)\) such that e is relatively prime to \(\varphi (N)\). Conditioned on \(E^\mathcal {A}_{ relprime }\), \(\mathcal {A}\) receives the same view in the blindness experiment for \(b=0\) as for \(b=1\).

Lemma 9

Let \(E^\mathcal {A}_{ goodsigs }\) be the event that in the blindness game with adversary \(\mathcal {A}\), the challenger for the basic version of RSA-BSSA obtains two signatures that pass verification. Conditioned on \(E^\mathcal {A}_{ goodsigs }\), \(\mathcal {A}\) receives the same view in the blindness experiment for \(b=0\) as for \(b=1\).

When blindness might not hold. Based on the above analysis, the only situation in which \(\mathcal {A}\)’s view may depend on b is when e is not relatively prime to \(\varphi (N)\) and the challenger fails to output two valid signatures. In this situation, \(z_b\) may leak enough information about \(m_b\) that it might be possible to infer whether \(m_b = \textsf{PSSEncode}(\texttt{msg}_0)\) or \(m_b = \textsf{PSSEncode}(\texttt{msg}_1)\), revealing b.

For example, for a prime p such that \(e~|~p-1\), x and y are in the same \(e^{th}\) residue class modulo p if there exists \(r \in \mathbb {Z}_p^*\) such that \(x = yr^e \bmod p\). There are e distinct \(e^{th}\) residue classes modulo p when \(e~|~p-1\); they correspond to the e values of \(\varPsi _p(x) \bmod e\). Thus, determining the \(e^{th}\) residue class modulo p of an unknown x provides \(\log e\) bits of information about x.

Suppose \(N = \prod _{i=1}^\ell p_i\) such that \(e~|~p_i-1\) for \(1 \le i \le \ell \), where each \(p_i\) is a distinct prime number. Then \(z_b = m_br^e \bmod N\) falls in the same \(e^{th}\) residue class modulo each \(p_i\) as \(m_b\). Thus, \(z_b\) reveals \(\ell \log e\) bits of information about \(m_b\). If each \(p_i\) is only slightly larger than e, then \(z_b\) reveals (in the information-theoretic sense) more than half the bits of \(m_b\). It is unclear how these information-theoretic bits correspond to physical bits; therefore, we must consider the worst case, in which they reveal a significant number of bits of the encoded message \(\texttt{EM}\). Especially devastating would be the case when the revealed bits correspond to H and the bits of \(\texttt{maskedDB}\) just to the left of H; by XORing those bits with \(\textsf{MGF}(H, lenDB )\), \(\mathcal {A}\) can recover \(\texttt{salt}\) and check whether \(H = \textsf{Hash}(M')\), where \(M'\) encodes \(\texttt{mHash}= \textsf{Hash}(\texttt{msg}_0, hLen )\) with \(\texttt{salt}\) (which corresponds to \(b=0\)) or \(\texttt{mHash}= \textsf{Hash}(\texttt{msg}_1, hLen )\) with \(\texttt{salt}\) (which corresponds to \(b=1\)).

As we will see below, variants A and B of RSA-BSSA prevent this situation in two distinct ways. Variant A makes it extremely unlikely that e is not relatively prime to \(\varphi (N)\). Variant B ensures that recovering \(\texttt{mHash}\) does not help in checking whether it corresponds to \(\texttt{msg}_0\) or \(\texttt{msg}_1\): any value \(\texttt{mHash}\) is equally likely to correspond to either, depending on the choice of the randomizer \(\texttt{rand}\).

4.2 Blindness of Variants A and B

Theorem 2

RSA-BSSA, Version A, satisfies blindness (Definition 4).

Proof

This follows by the soundness of the proof due to Goldberg et al. [21].   \(\square \)

Theorem 3

RSA-BSSA, Version B, satisfies blindness in the random-oracle model (Definition 4).

Proof

For \(j \in \{0,1\}\), let \(m_j\) be the integer that corresponds to \(\textsf{PSSEncode}(\texttt{msg}_j\circ \texttt{rand}_j)\) for a random string \(\texttt{rand}_j\) of \(\kappa \) bits, and \(z_j = m_jr_j^e\). Let \((z,z') = (z_b,z_{1-b})\) be the values that the challenger sends to the adversary \(\mathcal {A}\) in the blindness experiment with the bit b. In order to see that \((z,z')\) are distributed independently of the bit b it is sufficient to show that \(\texttt{mHash}_b = \textsf{Hash}(\texttt{msg}_b\circ \texttt{rand}_b, hLen )\) is distributed independently of b for a randomly chosen \(\texttt{rand}_b\), since \(\textsf{PSSEncode}\) just feeds its input string to \(\textsf{Hash}\).

Let us model \(\textsf{Hash}\) as a random oracle. Consider a modified blindness experiment in which the challenger also controls the random oracle \(\textsf{Hash}\):

  • \(\mathcal {A}\) is invoked: \(\mathcal {A}(1^k)\) selects a public key \( PK = (N,e)\) and messages \(\texttt{msg}_0\) and \(\texttt{msg}_1\).

  • \(\mathcal {A}\) acts as the blind signer: For \(j \in \{0,1\}\), compute \(\texttt{EM}_j = \textsf{PSSEncode}(\texttt{msg}_j, k-1)\) differently from the blindness challenger, as follows: instead of picking \(\texttt{rand}_j\) first and then setting \(\texttt{mHash}_j\), leave \(\texttt{rand}_j\) undefined for now and let \(\texttt{mHash}_j\) be a random string of length \(8 hLen \). Next, follow the protocol and let \(m_j = \textsf {OS2IP}(\texttt{EM}_j)\) be the corresponding integer. Next, sample \(r_j \leftarrow \mathbb {Z}_N^*\), compute \(z_j = m_jr_j^e \bmod N\) and make sure \(z_j \in \mathbb {Z}_N^*\). Compute \(\texttt{inv}_j=r_j^{-1} \bmod N\). The challenger sends to \(\mathcal {A}\) the values \(z_b\) and \(z_{1-b}\).

  • \(\mathcal {A}\) receives the signatures: Upon receipt of \(u_b\) and \(u_{1-b}\) from the signer, the challenger checks that \(u_j^e = z_j \bmod N\) for each \(j \in \{0,1\}\); if either check fails, it sends \(\bot \) to \(\mathcal {A}\). If both checks pass, then it chooses random \(\kappa \)-bit strings \(\texttt{rand}_0\) and \(\texttt{rand}_1\) and sets the random oracle so that \(\texttt{mHash}_j = \textsf{Hash}(\texttt{msg}_j\circ \texttt{rand}_j, hLen )\), as in the programmable-oracle sketch after this list; if setting the random oracle this way fails (i.e., the value \(\textsf{Hash}(\texttt{msg}_j\circ \texttt{rand}_j, hLen )\) is already defined), then the experiment fails. Else, for each \(j \in \{0,1\}\), compute \(s_j = u_j/r_j\) and send \((s_0,s_1)\) to \(\mathcal {A}\).

  • \(\mathcal {A}\) queries \(\textsf{Hash}\): Since in this modified blindness experiment the challenger controls the random oracle, we must also describe how it handles the adversary’s queries to \(\textsf{Hash}\). As usual, when \(\mathcal {A}\) queries a value \((v,\ell )\) such that \(\textsf{Hash}(v,\ell )\) has not yet been defined, respond with a random string of length \(8\ell \); when \(\mathcal {A}\) queries a value that has already been defined, return that value.

  • \(\mathcal {A}\)’s output: At the end of its execution, \(\mathcal {A}\) produces some output. At that point, if \(\texttt{rand}_0\) and \(\texttt{rand}_1\) are still undefined, choose random \(\kappa \)-bit strings \(\texttt{rand}_0\) and \(\texttt{rand}_1\) and set the random oracle so that \(\texttt{mHash}_j = \textsf{Hash}(\texttt{msg}_j\circ \texttt{rand}_j, hLen )\); if setting the random oracle this way fails (i.e., the value \(\textsf{Hash}(\texttt{msg}_j\circ \texttt{rand}_j, hLen )\) is already defined), then the experiment fails.
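The challenger in this modified experiment needs a hash oracle that it answers lazily and can later program at points of its choice, failing if a point has already been fixed. The following is a minimal Python sketch of such a programmable oracle; the class and method names are ours and purely illustrative.

    import os

    class ProgrammableOracle:
        """Lazily sampled random oracle Hash(v, length) -> `length` random bytes,
        which the challenger can also program at points of its choice."""

        def __init__(self):
            self.table = {}                        # (v, length) -> output bytes

        def query(self, v: bytes, length: int) -> bytes:
            # Lazy sampling: fix the answer the first time (v, length) is asked.
            if (v, length) not in self.table:
                self.table[(v, length)] = os.urandom(length)
            return self.table[(v, length)]

        def program(self, v: bytes, length: int, output: bytes) -> None:
            # Set Hash(v, length) = output after the fact; the modified
            # experiment fails exactly when this point is already defined.
            if (v, length) in self.table:
                raise RuntimeError("experiment fails: point already defined")
            self.table[(v, length)] = output

In the experiment above, the challenger draws \(\texttt{mHash}_j\) as \( hLen \) random bytes up front and only later calls program on \((\texttt{msg}_j\circ \texttt{rand}_j, hLen )\); Claim 6 bounds the probability that this call fails.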

Our theorem will follow by putting together the following three claims:

Claim 4

Conditioned on the event that the modified blindness experiment does not fail, the view \(\mathcal {A}\) receives in the modified blindness experiment above is independent of the bit b.

Claim 5

Conditioned on the event that the modified blindness experiment does not fail, the view \(\mathcal {A}\) receives in the modified blindness experiment above is identical to the view it receives in the actual blindness experiment.

Claim 6

Let \(\mathcal {A}\)’s running time be t. Then the modified blindness experiment fails with probability \(O(t 2^{-\kappa })\).

To see that the theorem follows from the claims, consider a sequence of experiments: (1) blindness game with \(b=0\); (2) modified blindness game with \(b=0\); (3) modified blindness game with \(b=1\); (4) blindness game with \(b=1\). (1) and (2) are indistinguishable by combining Claims 5 and 6; similarly (3) and (4). (2) and (3) are indistinguishable by combining Claims 4 and 6.

We conclude our proof of the theorem by proving these claims.

Proof of Claim 4. This claim follows by construction. Note that in the step when \(\mathcal {A}\) acts as the blind signer, the challenger does not even need to have b already defined: it can set \(\texttt{mHash}_b\) and \(\texttt{mHash}_{1-b}\) without knowing b and compute \(m_b\) and \(m_{1-b}\) from them; similarly, it can sample \(r_0\) and \(r_1\) and compute \(z_0\) and \(z_1\). By the time it needs to know b, in the step where \(\mathcal {A}\) receives the signatures, the challenger is already assured that \(m_0\) and \(m_1\) are both in \( Residues _{N,e}\), and so by Lemma 9, \(\mathcal {A}\)’s view is independent of b.

Proof of Claim 5. In the random-oracle model, the only difference between the modified experiment above and the real blindness experiment is the point in time at which the values \(\texttt{rand}_0\) and \(\texttt{rand}_1\) are defined: whether they are already defined when \(\mathcal {A}\) acts as the blind signer, or whether this does not happen until the step where \(\mathcal {A}\) receives the signatures or (in the event it does not) the output step. If we condition on the event that the modified experiment does not fail, then we know that \(\mathcal {A}\) has never over the course of its execution queried \(\textsf{Hash}\) on the values \((\texttt{msg}_0\circ \texttt{rand}_0, hLen )\) and \((\texttt{msg}_1\circ \texttt{rand}_1, hLen )\). In that case, whether \(\texttt{rand}_0\) and \(\texttt{rand}_1\) were already defined or not is independent of \(\mathcal {A}\)’s view, and therefore the modified blindness experiment is identical to the original one.

Proof of Claim 6. The modified experiment fails if the adversary ever queries \(\textsf{Hash}\) on input \((\texttt{msg}_j\circ \texttt{rand}_j, hLen )\) for some \(j \in \{0,1\}\). In t steps, \(\mathcal {A}\) may query at most t such strings. Since \(\texttt{rand}_j\) is a random \(\kappa \)-bit string, the probability that it is among the at most t strings \(\mathcal {A}\) has queried is at most \(t 2^{-\kappa }\); a union bound over \(j \in \{0,1\}\) gives \(O(t 2^{-\kappa })\).   \(\square \)

4.3 The Basic Version Is a Blind Token Scheme

Theorem 4

The basic version of RSA-BSSA is a strongly unforgeable blind token scheme (Definition 5) under the one-more-RSA assumption.

Proof

First, note that the basic version of RSA-BSSA satisfies the input-output specification for a two-move blind signature scheme (Definition 1) and the strong one-more unforgeability property (Definition 3). The first follows by inspection; the second, by Theorem 6. Thus, it is sufficient to show that for any \(\mathcal {A}\), the view in the experiment described in Definition 5 is independent of the bit b.

Note that in the blind token security game, unless the challenger obtains two valid signatures, the adversary’s view is independent of b based on how the game unfolds. Thus, an adversary \(\mathcal {A}\) guessing b correctly in that game more often than half the time must be one for whom the challenger obtains two valid signatures. Then consider the following reduction \(\mathcal {B}\) that plays the (usual) blindness game with a blind signature challenger for RSA-BSSA and uses \(\mathcal {A}\) to contradict Lemma 9: it obtains from \(\mathcal {A}\) the values (N, e) and the auxiliary data needed to sample the messages \(\texttt{msg}_0\) and \(\texttt{msg}_1\) and proceeds to sample them. Then it sends (N, e) and \(\texttt{msg}_0\) and \(\texttt{msg}_1\) to its challenger, and from then on, it passes messages back and forth between its challenger and \(\mathcal {A}\), and outputs whatever \(\mathcal {A}\) outputs. If \(\mathcal {A}\) is successful, then \(\mathcal {B}\) is successful. But \(\mathcal {A}\) can only be successful (as we observed above) when the challenger outputs two valid signatures, and by Lemma 9, under these circumstances \(\mathcal {B}\) cannot be successful, which is a contradiction.   \(\square \)

4.4 Blindness of Chaum-RSA-FDH

Consider Bellare et al.’s [6] version of Chaum’s blind signature; we will call it Chaum-RSA-FDH from now on. Chaum-RSA-FDH works as follows: Following RSA-FDH, the key generation algorithm generates an RSA key pair \( PK =(N,e)\), \( SK = d\), where \(ed \equiv 1 \bmod \varphi (N)\). Following Chaum, in order to obtain a blind signature on a message M, the user first blinds it using a random \(r \leftarrow \mathbb {Z}_N^*\), obtaining \(z = \textsf{Hash}(M)r^e \bmod N\). The user then sends z to the signer, gets back the blinded signature \(y=z^d \bmod N\), and unblinds it to obtain and output \(s = yr^{-1}\bmod N\). The resulting value passes RSA-FDH verification: \(s^e = y^er^{-e} = zr^{-e} = \textsf{Hash}(M) r^er^{-e} = \textsf{Hash}(M)\).
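For concreteness, here is a compact Python sketch of the four Chaum-RSA-FDH operations. It is only illustrative: the fdh helper below is a toy stand-in for a full-domain hash onto \(\mathbb {Z}_N\) (it is not the encoding used in [6]), and RSA key generation is assumed to happen elsewhere.

    import hashlib

    def fdh(msg: bytes, N: int) -> int:
        # Toy full-domain hash onto Z_N: counter-prefixed SHA-256 blocks, reduced mod N.
        k = (N.bit_length() + 7) // 8
        out = b"".join(hashlib.sha256(bytes([i]) + msg).digest()
                       for i in range(1 + k // 32))
        return int.from_bytes(out[:k], "big") % N

    def blind(msg: bytes, r: int, N: int, e: int) -> int:
        return (fdh(msg, N) * pow(r, e, N)) % N        # z = Hash(M) * r^e mod N

    def sign_blinded(z: int, d: int, N: int) -> int:
        return pow(z, d, N)                            # y = z^d mod N

    def unblind(y: int, r: int, N: int) -> int:
        return (y * pow(r, -1, N)) % N                 # s = y / r mod N

    def verify(msg: bytes, s: int, N: int, e: int) -> bool:
        return pow(s, e, N) == fdh(msg, N)             # RSA-FDH verification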

Let (N, e) be such that e is not relatively prime to \(\varphi (N)\). Let \(U = \{u\mid u^e \equiv 1 \bmod N\}\); by Lemma 4, when e divides \(p-1\) for some prime factor p of N, \(|U| \ge e\). Let \(\equiv _e\) be the following equivalence relation: \(a \equiv _e b\) if there exist \(\alpha \), \(\beta \) and \(u \in U\) such that \(a = \alpha ^e u \bmod N\) and \(b = \beta ^e u \bmod N\). It is easy to see that \(\equiv _e\) partitions \(\mathbb {Z}_N^*\) into |U| equivalence classes. There is an efficient algorithm that, on input the factorization of N, a and b, determines whether \(a \equiv _e b\). Moreover, for any \(a,r \in \mathbb {Z}_N^*\), \(a \equiv _e ar^e\).

In order to break blindness of Chaum-RSA-FDH, the adversary picks (N, e) such that it knows the factorization of N and such that \(e\mid \varphi (N)\). Next, it picks two messages \(M_0\) and \(M_1\) to send to the challenger such that \(\textsf{Hash}(M_0)\), \(\textsf{Hash}(M_1)\) and 1 are pairwise inequivalent under \(\equiv _e\). The challenger computes \(z_0 = \textsf{Hash}(M_0)r_0^e\) and \(z_1 = \textsf{Hash}(M_1)r_1^e\), and sends them to the adversary in an order determined by the challenge bit: \((z_b,z_{1-b})\). In order to determine the bit b, the adversary checks whether the first value it received, \(z_b\), satisfies \(z_b \equiv _e \textsf{Hash}(M_0)\); if so, it returns 0, else it returns 1.
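The distinguishing step is easy to implement once the factorization of N is known. The sketch below assumes the adversary chose N as a product of distinct primes p with \(e \mid p-1\), so the standard \(e^{th}\)-power residue test \(x^{(p-1)/e} \bmod p\) applies; the function names and the representation of \(\textsf{Hash}(M_0)\) as an integer h0 are ours.

    def is_eth_residue_mod_p(x: int, p: int, e: int) -> bool:
        # For a prime p with e | p-1: x is an e-th residue mod p iff x^((p-1)/e) = 1 mod p.
        return pow(x, (p - 1) // e, p) == 1

    def equiv_e(a: int, b: int, primes, e: int) -> bool:
        # a and b are equivalent under the relation above iff a/b is an e-th
        # residue mod N, i.e. an e-th residue modulo every prime factor of N
        # (N is the product of the given distinct primes, each with e | p-1).
        return all(is_eth_residue_mod_p(a * pow(b, -1, p) % p, p, e)
                   for p in primes)

    def guess_bit(z_first: int, h0: int, primes, e: int) -> int:
        # h0 is Hash(M_0) as an integer. Blinding by r^e never changes the
        # residue class, so the first value received is equivalent to h0 iff b = 0.
        return 0 if equiv_e(z_first, h0, primes, e) else 1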

Bibliographic note. Before the community settled on what is now considered to be the right definition of blindness [1], the definition due to Juels, Luby and Ostrovsky [25] was the standard one. That definition’s security experiment for blindness did not envision that the adversarial signer might generate the signing key maliciously rather than by following the key generation algorithm. Bellare, Namprempre, Pointcheval and Semanko [6] showed that the Chaum-RSA-FDH scheme was a secure blind signature under the old definition [25]. As we saw, their result does not hold under the more modern definition of blindness, which was introduced several years after their paper appeared. Fortunately, Chaum-RSA-FDH is a strongly one-more-unforgeable blind token scheme, i.e., it satisfies Definition 5.

Theorem 5

The Chaum-RSA-FDH scheme described in Sect. 4.4 is a strongly one-more-unforgeable blind token scheme for any efficiently samplable message space \(\mathcal {M}\).

Proof

(Sketch) It is easy to see that the scheme satisfies the input-output structure and the correctness requirements. As for strong one-more unforgeability: Bellare, Namprempre, Pointcheval and Semanko [6] showed that it was one-more-unforgeable under the one-more-RSA assumption. Strong one-more unforgeability follows because RSA-FDH is deterministic, i.e., there is a unique signature corresponding to each message. Thus we just need to show that for any \(\mathcal {A}\), \(\mathcal {A}\)’s advantage in the blind token experiment described in Definition 5 is negligible; in fact we will see that it is 0.

Let \(\mathcal {A}\) be an adversary playing the blind token game; let us consider the view \(\mathcal {A}\) receives given a fixed \(b \in \{0,1\}\). When it is first invoked (step 1), it produces \( PK = (N,e)\) and some string \( aux \). Next (step 2), \(\texttt{msg}_0\) and \(\texttt{msg}_1\) are selected by the challenger by running \(\mathcal {M}(1^k, PK , aux )\); let \(x_0 = \textsf{Hash}(\texttt{msg}_0)\) and \(x_1 = \textsf{Hash}(\texttt{msg}_1)\). Let \(r_0\) and \(r_1\) be sampled at random from \(\mathbb {Z}_N^*\), let \(z_0 = x_0r_0^e\) and \(z_1 = x_1r_1^e\) be the blinded messages the challenger sends to \(\mathcal {A}\) (step 3), and let \(s_0\) and \(s_1\) be the values \(\mathcal {A}\) sends in return; the order in which they are sent depends on b (step 4). Next (step 5), if \(s_0^e = z_0\) and \(s_1^e = z_1\), the challenger computes \(\sigma _0 = s_0/r_0\) and \(\sigma _1 = s_1/r_1\) and sends to the adversary the values \((\texttt{msg}_0,\sigma _0)\) and \((\texttt{msg}_1,\sigma _1)\); else it sends it \(\bot \).

Consider an alternative pair of experiments for \(b \in \{0,1\}\); here the challenger is computationally unbounded. The challenger begins by selecting \(\texttt{msg}_0\) and \(\texttt{msg}_1\) from \(\mathcal {M}(1^k, PK , aux )\). We have two cases: Case A, in which there exists a pair \((\sigma _0,\sigma _1)\) such that \(\sigma _0^e = \textsf{Hash}(\texttt{msg}_0)\) and \(\sigma _1^e = \textsf{Hash}(\texttt{msg}_1)\); and Case B, in which no such pair exists. Since this challenger is unbounded, it identifies which case it is in, and acts as follows:

In Case A, in Step 3, the challenger picks z and \(z^*\) uniformly at random from \(\mathbb {Z}_N^*\) and sends \((z,z^*)\) to \(\mathcal {A}\). It receives \((s,s^*)\). In step 5, if \(s^e \ne z\) or \((s^*)^e \ne z^*\), then it sends \(\bot \) to \(\mathcal {A}\). Else, it samples valid signatures \(\sigma _0\) and \(\sigma _1\) for \(\texttt{msg}_0\) and \(\texttt{msg}_1\), respectively, and sends to the adversary \(\mathcal {A}\) the values \((\texttt{msg}_0,\sigma _0)\) and \((\texttt{msg}_1,\sigma _1)\).

In Case B, the challenger follows the protocol.

It is easy to see that, in the alternative experiment, the adversary’s view is independent of b. To see that the alternative experiment gives \(\mathcal {A}\) a view that’s identical to the blind token game in Case A, note that the challenger choosing \(r_0 = s/\sigma _0\) and \(r_1 = s^*/\sigma _1\) in step 3 corresponds to having \(b = 0\) in the blind token game, while choosing \(r_0 = s^*/\sigma _0\) and \(r_1 = s/\sigma _1\) corresponds to \(b=1\). Since \(r_0\) and \(r_1\) are chosen uniformly at random, the two options are equally likely. In Case B, since one or both signatures do not exist, the adversary’s view is independent of b as well, since the pair of messages \((\texttt{msg}_0,\texttt{msg}_1)\) is just as likely as \((\texttt{msg}_1,\texttt{msg}_0)\).    \(\square \)

5 Unforgeability of RSA-BSSA

Recall that an algorithm \(\mathcal {A}\) is said to break the security of a cryptographic scheme in the random-oracle model [7] if its success probability is non-negligible when a specific component of the scheme, typically a hash function, is replaced by a random oracle. Security in the random-oracle model means that no polynomial-time algorithm can break the scheme in the random-oracle model.

A proof of security in the random-oracle model does not, in fact, imply a proof of security in the plain model (i.e. where no component of the scheme is modeled as a random oracle) [12]. However, it is considered evidence of security that’s good enough in practice.

In a random-oracle-based reduction, the reduction is typically privy to all the hash function queries the adversary issues. Another privilege that such a reduction has (in the standard, so-called “programmable” random-oracle model — these different flavors are explored by Fischlin et al. [18]) is that it can answer such a query with any value it desires. Since the adversary expects the answers to its queries to be truly random, as long as the reduction’s responses are distributed at random (or are indistinguishable from random), the adversary’s success probability will be as high when interacting with the reduction as when attacking the scheme.

We will prove strong one-more unforgeability of RSA-BSSA in the random-oracle model under the one-more-RSA assumption introduced by Bellare, Namprempre, Pointcheval and Semanko [6]. They also showed that the one-more-RSA assumption (stated formally in Appendix A) holds if and only if the following problem, called the alternative chosen-target RSA inversion (RSA-ACTI) problem, is hard:

Definition 7

(RSA-ACTI [6]). Let \(\mathcal {A}\) be an oracle Turing machine. For the security parameter k, let the experiment \(\textbf{Exp}^{ rsa-acti }_{\mathcal {A}}(k)\) be defined as follows:

  • RSA key pair generation: The challenger generates an RSA public key (N, e) and secret key d corresponding to the security parameter k. Let us define the following oracles:

    1. The RSA inversion oracle \(\mathcal {O}_I(\cdot ,N,d)\) that, on input \(y \in \mathbb {Z}_N^*\), returns x such that \(x^e = y \bmod N\); i.e., it returns \(y^d \bmod N\), where \(ed \equiv 1 \bmod \varphi (N)\).

    2. An oracle \(\mathcal {O}_R(\cdot ,N)\) that, when queried, issues a random RSA inversion challenge point, i.e., a random element of \(\mathbb {Z}_N^*\). By \(y_i\) we denote the outcome of the \(i^{th}\) such query.

  • \(\mathcal {A}\) is invoked: The challenger invokes \(\mathcal {A}^{\mathcal {O}_I(\cdot ,N,d),\mathcal {O}_R(\cdot ,N)}(N,e)\) and responds to its oracle queries. Eventually, \(\mathcal {A}\) terminates.

  • \(\mathcal {A}\)’s success criterion: Let \(\ell \) be the number of queries \(\mathcal {A}\) issued to \(\mathcal {O}_I(\cdot ,N,d)\). Let \((y_1,\ldots ,y_n)\) be the values \(\mathcal {A}\) received from \(\mathcal {O}_R(\cdot ,N)\). Let \((z_1,\ldots ,z_n)\) be \(\mathcal {A}\)’s output. For \(1\le i \le n\), \(z_i\) is correct if \(z_i^e = y_i \bmod N\). \(\mathcal {A}\) is successful if \(|\{i~:~z_i \text{ is correct}\}| \ge \ell +1\).

By \(\textbf{Adv}^{ rsa-acti }_{\mathcal {A}}(k)\) we denote the probability that \(\mathcal {A}\) is successful in \(\textbf{Exp}^{ rsa-acti }_{\mathcal {A}}(k)\). The RSA-ACTI problem is hard if for any probabilistic polynomial-time \(\mathcal {A}\), \(\textbf{Adv}^{ rsa-acti }_{\mathcal {A}}(k)\) is negligible.
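The experiment is easy to transcribe; the following Python sketch fixes one possible interface, in which the adversary is a callback receiving the public key and the two oracles and returning its list of answers. The interface and helper names are ours, not part of the definition.

    import secrets
    from math import gcd

    def sample_zn_star(N: int) -> int:
        # Sample a uniform element of Z_N^* by rejection.
        while True:
            y = secrets.randbelow(N - 1) + 1
            if gcd(y, N) == 1:
                return y

    def rsa_acti_experiment(adversary, N: int, e: int, d: int) -> bool:
        inversions = 0                       # number of queries to O_I
        challenges = []                      # challenge points y_1, ..., y_n

        def O_I(y: int) -> int:              # RSA inversion oracle
            nonlocal inversions
            inversions += 1
            return pow(y, d, N)

        def O_R() -> int:                    # random challenge oracle
            y = sample_zn_star(N)
            challenges.append(y)
            return y

        zs = adversary(N, e, O_I, O_R)       # adversary outputs (z_1, ..., z_n)
        correct = sum(1 for y, z in zip(challenges, zs)
                      if z is not None and pow(z, e, N) == y)
        return correct >= inversions + 1     # success: one more correct than inversions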

Theorem 6

Let \(\mathcal {A}\) be an algorithm that breaks strong one-more unforgeability of the basic RSA-BSSA scheme (Definition 3) where both \(\textsf{Hash}(\cdot ,\ell )\) and \(\textsf{MGF}(\cdot ,\ell )\) are random oracles for every integer \(\ell \). Let \(t_\mathcal {A}(k)\) be an upper bound on its running time; let \(p_\mathcal {A}(k)\) be its success probability.

Then there exists an algorithm \(\mathcal {B}\) that solves the RSA-ACTI problem (Definition 7) in time \(t_\mathcal {B}(k) = O( poly (k) + t_\mathcal {A}(k))\) with probability \(p_\mathcal {B}(k) = p_\mathcal {A}(k)-\varTheta (t_\mathcal {A}^2(k)2^{-8 hLen })\).

This theorem, i.e. the unforgeability of the basic RSA-BSSA scheme, implies unforgeability of variants A and B. For variant A, the additional proof that’s part of the public key can be simulated in the random-oracle model as shown by Goldberg et al. [21]. For variant B, unforgeability follows from that of the basic scheme, since a signature in variant B on message \(\texttt{msg}\) with randomness \(\texttt{rand}\) is also a signature in the basic scheme on message \(\texttt{msg}\circ \texttt{rand}\).

Corollary 3

Let \(\mathcal {A}\) be an algorithm that breaks strong one-more unforgeability of RSA-BSSA variants A or B (Definition 3) where both \(\textsf{Hash}(\cdot ,\ell )\) and \(\textsf{MGF}(\cdot ,\ell )\) are random oracles for every integer \(\ell \). Let \(t_\mathcal {A}(k)\) be an upper bound on its running time; let \(p_\mathcal {A}(k)\) be its success probability.

Then there exists an algorithm \(\mathcal {B}\) that solves the RSA-ACTI problem (Definition 7) in time \(t_\mathcal {B}(k) = O( poly (k) + t_\mathcal {A}(k))\) with probability \(p_\mathcal {B}(k) = p_\mathcal {A}(k)-\varTheta (t_\mathcal {A}^2(k)2^{-8 hLen })\).

Proof

(of Theorem 6) Let \(q_\textsf{Hash}^\mathcal {A}( PK ;R)\), \(q_{\textsf{MGF}}^\mathcal {A}( PK ;R)\) and \(q^\mathcal {A}_\textsf{BSig}( PK ;R)\) be the number of queries \(\mathcal {A}\) makes to \(\textsf{Hash}\), \(\textsf{MGF}\) and \(\textsf{BSig}\) respectively when interacting with its challenger on input a specific public key \( PK \); R denotes the randomness of the experiment (i.e., both the random tape of \(\mathcal {A}\) and that of the challenger). When \( PK \) and R are clear from context, we will write \(q_\textsf{Hash}^\mathcal {A}\), \(q_{\textsf{MGF}}^\mathcal {A}\) and \(q^\mathcal {A}_\textsf{BSig}\).

Without loss of generality, let us assume that \(\mathcal {A}\)’s output is either empty (i.e., \(\mathcal {A}\) fails to win the game) or consists solely of \(q^\mathcal {A}_\textsf{BSig}+1\) message-signature pairs that pass verification. Let us call such an \(\mathcal {A}\) a “high-achieving adversary” in the sequel. The reason we can assume that \(\mathcal {A}\) is high-achieving is that, if it’s not, we could modify \(\mathcal {A}\) into an algorithm \(\mathcal {A}'\) that verifies \(\mathcal {A}\)’s output and, if \(\mathcal {A}\) succeeded, outputs the first \(q^\mathcal {A}_\textsf{BSig}+1\) pairs that pass verification. By definition of one-more unforgeability, \(\mathcal {A}'\) succeeds with the same probability as \(\mathcal {A}\), and has a comparable running time.

We will construct the reduction \(\mathcal {B}\) that will use a high-achieving \(\mathcal {A}\) as a subroutine.

Input to the reduction. The reduction plays the role of the attacker in experiment \(\textbf{Exp}^{ rsa-acti }_{\mathcal {B}}(k)\). Thus, it takes as input the RSA public key (N, e) that had been generated by RSA’s key generation on input the security parameter k.

The oracle the reduction may use. As described in Definition 7, \(\mathcal {B}\) has access to two oracles:

  1. The RSA inversion oracle \(\mathcal {O}_I(\cdot ,N,d)\) that, on input \(y \in \mathbb {Z}_N^*\), returns x such that \(x^e = y \bmod N\); i.e., it returns \(y^d \bmod N\), where \(ed \equiv 1 \bmod \varphi (N)\).

  2. An oracle \(\mathcal {O}_R(\cdot ,N)\) that, when queried, issues a random RSA inversion challenge point, i.e., a random element of \(\mathbb {Z}_N^*\). By \(y_i\) we denote the outcome of the \(i^{th}\) such query.

How the reduction interacts with \(\mathcal {A}\). The adversary \(\mathcal {A}\) is attacking the strong one-more unforgeability property of the RSA-BSSA scheme as described in Definition 3. Thus, \(\mathcal {A}\) will need to receive \( PK \) as input; the reduction sets \( PK = (N,e)\), where (N, e) is its own input. Since the reduction is in the random-oracle model, \(\mathcal {A}\) will expect oracle access to \(\textsf{Hash}\) and \(\textsf{MGF}\), which \(\mathcal {B}\) will respond to as described below. \(\mathcal {A}\) will also engage with the signer in the blind signing protocol; in the case of RSA-BSSA, this will involve oracle access to \(\textsf{BSig}( SK ,\cdot )\); below, we describe how \(\mathcal {B}\) will handle this as well. Finally, \(\mathcal {A}\) terminates and produces some output; below, we describe how \(\mathcal {B}\) uses \(\mathcal {A}\)’s output to compute a solution to the RSA-ACTI problem.

How the reduction will handle \(\mathcal {A}\)’s queries to \(\textsf{Hash}(\cdot ,\cdot )\). A relevant query to \(\textsf{Hash}(\cdot ,\cdot )\) is a pair \((v,\ell )\) such that \(\ell = hLen \) and the first eight bytes of v are all 0.

Let \((v,\ell )\) be a query that is not relevant. A value v that is derived as part of signature verification must begin with 64 zero bits (i.e., eight zero bytes); if v is not of that form, we know that we will never need to compute \(\textsf{Hash}(v, hLen )\) as part of verifying a signature. (The detailed description of the signature verification algorithm in Appendix B clarifies this point; see Step 6.) We also know that for any length \(\ell \ne hLen \), \(\textsf{Hash}(v,\ell )\) is not computed as part of signature verification. Thus, there is no need to prepare a response to this query in any special way. In response to \((v,\ell )\), the reduction returns a randomly sampled string h of \(\ell \) bytes and stores \(((v,\ell ),h)\) for future reference.

In contrast, the response to the \(i^{ th }\) relevant query is set up in such a way that, should that query be part of the successful verification of one of the signatures returned by the adversary, it should allow the reduction to invert RSA at a challenge point \(y_i\).

More precisely: Let the \(i^{ th }\) relevant query to \(\textsf{Hash}\) be the pair \((v_i, hLen )\). Parse \(v_i\) as follows: \(v_i= 0^{64} \circ \texttt{mHash}_i \circ \texttt{salt}_i\). Implicitly, \(\texttt{mHash}_i\) is computed from some unknown \(M_i\), and \(\texttt{salt}_i\) consists of the last \( lenSalt \) bits of \(v_i\).

Our goal is to ensure that, if the adversary ever returns \((M,\sigma )\) that passes the verification algorithm such that \(\texttt{mHash}= \texttt{mHash}_i\), and \(\texttt{salt}= \texttt{salt}_i\), then \(\sigma ^e = y_i r_i^e \bmod N\) for some challenge point \(y_i\) and a value \(r_i\) known to the reduction. Then the reduction can invert RSA at \(y_i\) by outputting \(\sigma /r_i\).

First, the reduction obtains the challenge \(y_i\) by querying \(\mathcal {O}_R(\cdot ,N)\). Next, it samples from \(\mathbb {Z}_N^*\) until it finds a value \(r_i\) such that, for \(w_i = y_i r_i^e \bmod N\), \(w_i < 2^{k-1}\) (i.e. \(k-1\) bits are sufficient to encode it) and the binary representation of \(w_i\) ends in the byte 0xBC. In expectation, it will take between 256 and 512 attempts to find such an \(r_i\), depending on how close N is to \(2^k\): at least half the time, \(w_i < 2^{k-1}\), and conditioned on that, \(w_i\) ends in 0xBC in one of every 256 tries.
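This sampling step is plain rejection sampling; a Python sketch, assuming k is the bit length of N and with a function name of our choosing, is:

    import secrets
    from math import gcd

    def sample_r_for_challenge(y_i: int, N: int, e: int, k: int):
        """Find r_i in Z_N^* such that w_i = y_i * r_i^e mod N fits in k-1 bits
        and its lowest byte is 0xBC, so that I2OSP(w_i, emLen) ends in 0xBC."""
        while True:
            r = secrets.randbelow(N - 1) + 1
            if gcd(r, N) != 1:
                continue                     # stay inside Z_N^*
            w = (y_i * pow(r, e, N)) % N
            if w < (1 << (k - 1)) and (w & 0xFF) == 0xBC:
                return r, w                  # expected roughly 256-512 iterations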

Next, the reduction executes the following steps, which determine how to fix \(\textsf{MGF}\) at one point and what value to return in response to this query (a code sketch of these steps follows the list). The goal is to ensure that, if \((M,\sigma )\) is a message-signature pair accepted by the verification algorithm, and \(v_i = 0^{64}\circ \texttt{mHash}\circ \texttt{salt}\) is the input to \(\textsf{Hash}\) computed as part of verification (i.e. it is the value \(M'\) computed in Step 6 and queried in Step 7 of the detailed verification algorithm in Appendix B), then \(\sigma ^e \bmod N = w_i\).

  1. Set \(\texttt{EM}_i = \textsf {I2OSP}(w_i, emLen )\) (recall that \( emLen = \lceil (k-1)/8 \rceil \)).

  2. Parse \(\texttt{EM}_i = \texttt{maskedDB}_i \circ H_i \circ 0xBC\) (as described in Step 2 of the verification procedure in Appendix B). Since the reduction used the sampling procedure above to obtain \(w_i\), \(\texttt{EM}_i\) indeed ends in the byte 0xBC.

  3. In order to ensure that \(\texttt{DB}_i\), which will be computed in Step 4 of the detailed verification procedure, contains the same salt value as \(v_i\), carry out the following steps:

    • Let \(\texttt{mHash}_i\) and \(\texttt{salt}_i\) be the strings of \( hLen \) and \( sLen \) bytes, respectively, such that \(v_i = 0^{64}\circ \texttt{mHash}_i \circ \texttt{salt}_i\).

    • Let \(\texttt{DB}_i = 0^a \circ 0x01 \circ \texttt{salt}_i\), where \(a = 8( lenDB -1- sLen )\).

    • Let \(\texttt{dbMask}'_i = \texttt{DB}_i \oplus \texttt{maskedDB}_i\). Note that \(\texttt{dbMask}'_i\) must start with \(0^p\): \(\texttt{DB}_i\) starts with 0s, and since \(w_i < 2^{k-1}\), the fact that \(\texttt{maskedDB}_i\) is the beginning of the string output by \(\textsf {I2OSP}\) ensures that it begins with p 0s. Let \(\texttt{dbMask}_i\) be the result of replacing the first p bits of \(\texttt{dbMask}'_i\) with random bits.

    • If \(\textsf{MGF}(H_i, lenDB )\) is already defined, fail. Else, set it to the value \(\texttt{dbMask}_i\).

  4. Set \(\textsf{Hash}(v_i, hLen ) = H_i\) and save \((i,v_i,w_i,r_i,H_i)\) for future reference.

Return \(H_i\).
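The four steps above can be transcribed directly. The Python sketch below assumes \( hLen \), \( sLen \) and \( lenDB \) are measured in bytes and that the \(\textsf{Hash}\) and \(\textsf{MGF}\) oracles are kept in dictionaries (as in the programmable-oracle sketch of Sect. 4.2); the function, parameter and table names are ours.

    import secrets

    def program_relevant_query(v_i: bytes, w_i: int, r_i: int, i: int,
                               k: int, hLen: int, sLen: int,
                               hash_table: dict, mgf_table: dict,
                               saved: list) -> bytes:
        emLen = (k - 1 + 7) // 8          # emLen = ceil((k-1)/8)
        lenDB = emLen - hLen - 1
        p = 8 * emLen - (k - 1)           # leading bits of EM_i that must be 0

        EM = w_i.to_bytes(emLen, "big")   # Step 1: I2OSP(w_i, emLen); ends in 0xBC
        maskedDB, H = EM[:lenDB], EM[lenDB:lenDB + hLen]   # Step 2: parse EM_i

        salt = v_i[-sLen:]                # v_i = 0^8 || mHash_i || salt_i
        DB = bytes(lenDB - sLen - 1) + b"\x01" + salt      # Step 3: DB_i

        db_mask = bytes(x ^ y for x, y in zip(DB, maskedDB))  # dbMask'_i
        # The top p bits of dbMask'_i are 0; replace them with random bits.
        top = secrets.randbits(p) << (8 * lenDB - p) if p else 0
        db_mask = (int.from_bytes(db_mask, "big") | top).to_bytes(lenDB, "big")

        if (H, lenDB) in mgf_table:       # last bullet of Step 3
            raise RuntimeError("reduction fails: MGF(H_i, lenDB) already defined")
        mgf_table[(H, lenDB)] = db_mask
        hash_table[(v_i, hLen)] = H       # Step 4
        saved.append((i, v_i, w_i, r_i, H))
        return H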

How the reduction will handle \(\mathcal {A}\)’s queries to \(\textsf{MGF}(\cdot ,\cdot )\). Let \((u,\ell )\) be a query to \(\textsf{MGF}\). Case 1: \(\textsf{MGF}(u,\ell )\) is already fixed as a result of a previous query to \(\textsf{Hash}\) or \(\textsf{MGF}\); then return the value \(\textsf{MGF}(u,\ell )\). Case 2: \(\textsf{MGF}(u,\ell )\) is not yet fixed; then return a random string of \(\ell \) octets.

How the reduction will handle \(\mathcal {A}\)’s queries to \(\textsf{BSig}( SK ,\cdot )\). Upon receiving \(\mathtt {blinded\_msg}\) from the adversary \(\mathcal {A}\), the reduction computes \(m = \textsf{OS2IP}(\mathtt {blinded\_msg})\). If \(m \ge N\), it fails. Otherwise, it sends m to its RSA inversion oracle \(\mathcal {O}_I(\cdot ,N,d)\). In turn, \(\mathcal {O}_I(m,N,d)\) returns s such that \(s^e = m \bmod N\). \(\mathcal {B}\) computes \(\mathtt {blinded\_sig}= \textsf {I2OSP}(s, kLen )\) and returns it to \(\mathcal {A}\).

How the reduction will process \(\mathcal {A}\)’s output. At the end of its execution, the adversary \(\mathcal {A}\) outputs a set of message-signature pairs \(\{(M_j,\sigma _j)\}\). First, \(\mathcal {B}\) verifies these message-signature pairs, as follows: it runs the verification algorithm as described in Appendix B. When the verification algorithm queries \(\textsf{Hash}\) and \(\textsf{MGF}\) for a value previously queried by \(\mathcal {A}\), \(\textsf{Hash}\) and \(\textsf{MGF}\) return the same string as was returned to \(\mathcal {A}\). When the verification algorithm queries \(\textsf{Hash}\) and \(\textsf{MGF}\) for a value not previously queried by \(\mathcal {A}\), \(\textsf{Hash}\) and \(\textsf{MGF}\) return random strings.

Next, \(\mathcal {B}\) fails if some message-signature pair \((M_j,\sigma _j)\) is accepted by the verification algorithm, and yet \(\mathcal {A}\) had not made the queries to \(\textsf{Hash}\) and \(\textsf{MGF}\) that the verification algorithm just made when verifying \((\sigma _j,M_j)\).

Else, \(\mathcal {B}\) proceeds as follows. Recall that n is the number of queries that \(\mathcal {B}\) has made to its challenge oracle, i.e. the number of challenge points \(y_i\) that \(\mathcal {B}\) has received. For \(1 \le i \le n\), initialize \(z_i = \bot \). Next, for each j such that \((M_j,\sigma _j)\) is accepted by the verification algorithm, find i such that \(\sigma _j^e \equiv w_i \bmod N\). (If no such i exists, fail.) Then \(\sigma _j^e = y_i r_i^e \bmod N\), so \(y_i = (\sigma _j/r_i)^e\) and so \(\mathcal {B}\) has inverted RSA at \(y_i\); set \(z_i = \sigma _j/r_i\).

\(\mathcal {B}\) outputs \(z_1,\ldots ,z_n\).

Analysis of the reduction. To conclude our proof of security, we need to prove the following three claims. First, we show that an adversary that wins the strong one-more-forgery game against RSA-BSSA must (other than with very small probability) query \(\textsf{MGF}\) and \(\textsf{Hash}\) for all the values that will be queried over the course of the verification of its signatures. This will allow us to assume that we are dealing with the adversary that always makes these queries prior to outputting its signatures and makes sure that verification accepts; we will call such an adversary a “make-sure” adversary. More formally:

Claim 7

Let \(\mathcal {A}\) be a high-achieving adversary that wins the strong one-more-forgery game against RSA-BSSA in the random-oracle model with probability \(p_\mathcal {A}(k)\). Let E be the event that \(\mathcal {A}\) wins the game and for each of the message-signature pairs it produced, it issued the queries to both \(\textsf{MGF}(\cdot , lenDB )\) and \(\textsf{Hash}(\cdot , hLen )\) needed for verifying the message-signature pair some time during the course of its execution. Then \(\Pr [E] \ge p_\mathcal {A}(k) - \varTheta (t_\mathcal {A}^2(k)2^{-8 hLen })\).

Next, we show that for a “make-sure” \(\mathcal {A}\), our reduction succeeds. This is done in two steps: (1) showing that the view that the reduction provides for \(\mathcal {A}\) is identical to its view in the one-more-forgery game in the random-oracle model; (2) showing that whenever a “make-sure” \(\mathcal {A}\) is successful, the reduction succeeds in the RSA-ACTI game. More formally:

Claim 8

Let \(\mathcal {A}\) be any adversary. In the random-oracle model, the view that \(\mathcal {A}\) receives when interacting with the strong one-more unforgeability game challenger for RSA-BSSA is identical to the one \(\mathcal {A}\) obtains in an interaction with the reduction \(\mathcal {B}\) whenever \(\mathcal {B}\) does not fail while answering \(\mathcal {A}\)’s queries. Moreover, if \(t_\mathcal {A}(k)\) is an upper bound on \(\mathcal {A}\)’s running time, the probability that \(\mathcal {B}\) fails while answering a query from \(\mathcal {A}\) is \(O(t_\mathcal {A}^2(k)2^{-8 hLen })\).

Claim 9

Let \(\mathcal {A}\) be an adversary that wins the strong one-more unforgeability game against RSA-BSSA in the random-oracle model with probability \(p_\mathcal {A}(k)\). Then \(\mathcal {B}\) wins the RSA-ACTI game with probability \(p_\mathcal {A}(k)-\varTheta (t_\mathcal {A}^2(k)2^{-8 hLen })\).

The proofs of Claims 7, 8 and 9 are in the full version of this paper [26].   \(\square \)