1 Introduction

Threshold signatures, which originated in the late 1980s [17, 18], are seeing renewed attention, driven in particular by an interest in using them to secure digital wallets in the cryptocurrency ecosystem [22]. Parallel IETF [32] and NIST [35] standardization efforts are evidence of the speed at which the area is moving into practice.

Whether a signing key secures a user’s digital wallet or is used by a CA to create certificates, forgery of a digital signature is costly. The rising tide of system breaches and phishing attacks makes exposure of a signing key too plausible to ignore. The idea of a threshold signature scheme is to distribute the secret signing key across multiple parties who then interact to produce a signature, the intent being to retain security even in the face of compromise of up to a threshold number of these parties. Over the years, threshold versions of many schemes have been presented, including RSA [16, 26, 37], DSA/ECDSA [9, 13, 21,22,23, 25, 34], Schnorr signatures [24, 30, 39] and BLS signatures [8].

Today, we see interest converging on schemes that are non-interactive. The representative examples are BLS [8, 12] and FROST [30]. FROST is a partially non-interactive threshold signature scheme, consisting of a message-independent pre-processing round and one round of signing. Threshold BLS is fully non-interactive, i.e., consists of a single round, but it does require pairings.

We advance the area of non-interactive threshold signature schemes via the following contributions.

1. Framework and Stronger Security. We contend that schemes like FROST and BLS are better than advertised, meeting definitions of security that are stronger than ones that have been previously defined, or that these schemes have been shown to meet in existing literature. Furthermore, these definitions capture natural strengths of the schemes that may be valuable for applications.

The classical development paradigm in theoretical cryptography is to ask what security we would like, define it, and then seek schemes that meet it. Yet if we look back, there has been another path alongside: canonical, reference schemes guided a choice of definitions that modeled them, and, once made, these definitions went on to be influential targets for future schemes. (The formal definition of trapdoor permutations [27], for example, was crafted to model RSA). We are inspired by the latter path. BLS [11] yields a threshold scheme [8] so natural and simple that it is hard not to see it as canonical, and, within the space of Schnorr threshold schemes, FROST [30] has a similarly appealing minimality. Examining them, we see strengths not captured by current definitions or results. We step back to create corresponding abstractions, including a unified syntax and a hierarchy of definitions of security for non-interactive threshold signature schemes. We then return to ask where in this hierarchy we can fit the starting schemes, giving proofs that place BLS and FROST as high in it as possible. The proofs this requires, and that we provide, turn out to be challenging and technically interesting.

Although inspired by specific schemes, our definitional development, once begun, unfolds in a logical way, and yields definitions that go beyond even what BLS and FROST achieve. These make intriguing new targets. We show how to achieve them, with minimal modifications to the existing schemes.

2. \(\textsf{FROST}2\) and its Security with DKG. We introduce \(\textsf{FROST}2\), a variant of the original FROST scheme (we hereafter refer to the original as \(\textsf{FROST}1\)) that reduces the number of exponentiations required for signing and verification from linear in the number of signers to constant. We analyze the security of \(\textsf{FROST}2\) in our above security framework, and highlight subtle differences between it and \(\textsf{FROST}1\).

The above-discussed results are all in a setting with (ideal) trusted key generation. In practice however it is desirable that key generation itself be done via a threshold, distributed key generation protocol (DKG). Accordingly, we prove the security of \(\textsf{FROST}2\) with a DKG, namely an efficient variant of Pedersen’s DKG (\(\textsf{PedPoP}\)) introduced in conjunction with \(\textsf{FROST}1\) [30]. Unlike prior proofs that modeled key generation using Pedersen’s DKG [24], our security proof allows concurrent executions of the signing protocol once key generation has completed, and generalizes which honest parties are assumed to participate. We demonstrate that \(\textsf{FROST}2\) instantiated with \(\textsf{PedPoP}\) is secure in the random oracle model (ROM) [5] assuming extractable proofs of possession and the one-more discrete logarithm (OMDL) assumption of [2]. The assumption of extractable proofs of possession is required only for the simulation of \(\textsf{PedPoP}\). Indeed, our proofs for \(\textsf{FROST}1\) and \(\textsf{FROST}2\) without ideal key generation only rely on the OMDL assumption, along with random oracles.

Our proofs here fill a gap towards demonstrating security with respect to well-understood assumptions. We have a complete implementation of our security proof in Python, in which we see that our reduction correctly outputs a valid OMDL solution and that our simulated outputs pass verification.

We consider schemes where the signing operations involve a leader and a set of \(\textsf{ns}\) nodes, which we refer to as servers, with server i holding a secret share \(\textit{sk}_i\) of the secret signing key \(\textit{sk}\). Signing is done via an interactive protocol that begins with a leader request to some set of at least \(t\) servers and culminates with the leader holding the signature, where \(t\le \textsf{ns}\), the threshold, is a protocol parameter.

In a fully non-interactive threshold signature scheme, this protocol is a simple, one-round one. The leader sends a leader request \( lr \), which specifies a message M and possibly other things, to any server i and obtains in response a partial signature, \( psig _i\), that i computes as a function of \(\textit{sk}_i\) and M. The leader can request partial signatures asynchronously, at any time, and independently for each server, and there is no server-to-server communication. Once it has enough partial signatures, the leader aggregates them into a signature \( sig \) of M under the verification key \(\textit{vk}\) corresponding to \(\textit{sk}\). The canonical example is the threshold BLS scheme [8, 12], where \(\textit{sk},\textit{sk}_1,\ldots ,\textit{sk}_{\textsf{ns}}\in {{\mathbb Z}}_p\) for a public prime p, and \( psig _i \leftarrow \textsf{h}(M)^{\textit{sk}_i}\) where \(\textsf{h}{\,:\,}\{0,1\}^*\rightarrow \mathbb {G}\) is a public hash function with range a group \(\mathbb {G}\) of order p. Aggregation produces \( sig \) as a weighted product of the partial signatures.
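To make the aggregation step concrete, the following Python sketch emulates threshold BLS signing over a toy group (the parameters are far too small to be secure, the hash-to-group is an ad-hoc stand-in, and pairing-based verification is omitted; only the Shamir-sharing and Lagrange-interpolation-in-the-exponent structure is faithful). It also checks that two different signer sets of size t reconstruct the same signature:

```python
import hashlib, random

q, p, g = 2039, 1019, 4   # toy group: order-p subgroup of Z_q^* (NOT secure sizes)

def h(msg: bytes) -> int:
    """Toy hash-to-group: hash, then square mod q to land in the order-p subgroup."""
    return pow(int.from_bytes(hashlib.sha256(msg).digest(), "big") % q, 2, q)

def shamir_share(sk, t, ns):
    """Shares sk_i = f(i) of a random degree-(t-1) polynomial with f(0) = sk."""
    coeffs = [sk] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [f(i) for i in range(1, ns + 1)]

def lagrange(i, S):
    """Lagrange coefficient at 0 for server i with respect to signer set S."""
    lam = 1
    for j in S:
        if j != i:
            lam = lam * j * pow(j - i, -1, p) % p
    return lam

t, ns = 3, 5
sk = random.randrange(1, p)
shares = shamir_share(sk, t, ns)

M = b"transfer 10 coins to Alice"
hM = h(M)
S = [1, 3, 5]                                      # any t servers suffice
psigs = {i: pow(hM, shares[i - 1], q) for i in S}  # psig_i = h(M)^{sk_i}

# Aggregation: a weighted product, i.e., Lagrange interpolation in the exponent
sig = 1
for i in S:
    sig = sig * pow(psigs[i], lagrange(i, S), q) % q
assert sig == pow(hM, sk, q)   # the full signature h(M)^{sk}

# A different signer set of size t reconstructs the very same signature
S2 = [2, 4, 5]
sig2 = 1
for i in S2:
    sig2 = sig2 * pow(pow(hM, shares[i - 1], q), lagrange(i, S2), q) % q
assert sig2 == sig
```

In a real deployment the group would be a pairing-friendly elliptic-curve group and verification would use the pairing, but the aggregation algebra is exactly as above.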

A partially non-interactive threshold signature scheme adds to the above a message-independent pre-processing round in which, pinged by the leader at any point, a server i returns a pre-processing token \( pp _i\). The leader’s request for partial signatures will now depend on tokens it has received. The canonical example is FROST [30]. This understanding of a non-interactive scheme encompasses what FROST calls flexibility: obtaining \( psig _i\) from any \(\ge t\) servers allows reconstruction of the signature.

For a regular (non-threshold) signature scheme, the first and most basic notion of security is unforgeability (UF) [27]. The adversary (given access to a signing oracle) outputs a forgery consisting of a message M and a valid signature for it. To win, the forgery must be non-trivial, meaning not obtained legitimately. This is naturally captured, in this context, as meaning that M was not a signing query.

Turning to define unforgeability for a non-interactive threshold signature scheme, we assume the adversary has corrupted the leader and up to \(t-1\) servers, where \(1\le t \le \textsf{ns}\) is the threshold. Furthermore, it has access to the honest servers. Again, it outputs a forgery consisting of a message M and valid signature for it, and, to win, the forgery must be non-trivial, meaning not obtained legitimately. Deciding what “non-trivial” means, however, is now a good deal more delicate, and interesting, than it was for regular signatures.

In this regard, we suggest that many prior works have set a low bar, being more generous than necessary in declaring a forgery trivial, leading to definitions that are weaker than one can desire, and weaker even than what their own schemes seem to meet. The definitions we formulate rectify this by considering five non-triviality conditions of increasing stringency, yielding a corresponding hierarchy \(\text {TS-UF-}0\leftarrow \text {TS-UF-}1\leftarrow \text {TS-UF-}2\leftarrow \text {TS-UF-}3\leftarrow \text {TS-UF-}4\) of notions of unforgeability of increasing strength. (Here an arrow \(\textrm{B}\leftarrow \textrm{A}\) means \(\textrm{A}\) implies \(\textrm{B}\): any scheme that is \(\textrm{A}\)-secure is also \(\textrm{B}\)-secure). \(\text {TS-UF-}0\), the lowest in the hierarchy, is the notion currently in the literature.

Returning to regular (non-threshold) signature schemes, strong unforgeability (SUF) has the same template as UF, but makes the non-triviality condition more strict, asking that there has been no signing query M that returned \( sig \). We ask if SUF has any analogue in the threshold setting. For non-interactive schemes, we suggest it does and give a hierarchy of three definitions of strong unforgeability \(\mathrm {TS\hbox {-}SUF}\hbox {-}2\leftarrow \mathrm {TS\hbox {-}SUF}\hbox {-}3\leftarrow \mathrm {TS\hbox {-}SUF}\hbox {-}4\). The numbering reflects that \(\text {TS-UF-}i\leftarrow \mathrm {TS\hbox {-}SUF}\hbox {-}i\) for \(i=2,3,4\).
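The UF/SUF distinction amounts to two different non-triviality predicates over the signing-oracle transcript. The following schematic sketch (with hypothetical bookkeeping of our own; the messages and signature strings are placeholders) makes it concrete:

```python
# Schematic bookkeeping for UF vs SUF non-triviality (regular signatures).
# 'queries' records the (message, signature) pairs returned by the signing oracle.
queries = [("M1", "sigA"), ("M1", "sigB")]

def trivial_uf(M, sig):
    return any(m == M for m, _ in queries)   # trivial: M was ever a signing query

def trivial_suf(M, sig):
    return (M, sig) in queries               # trivial: this exact pair was returned

# A *fresh* signature on a previously queried message is trivial under UF,
# but counts as a winning forgery under SUF.
assert trivial_uf("M1", "sigC") and not trivial_suf("M1", "sigC")
```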

Boldyreva’s analysis of threshold BLS [8] adopts the formalism of Gennaro, Jarecki, Krawczyk, and Rabin [23, 25, 26]. The non-triviality condition here is that no server was asked to issue a partial signature on the forgery message M. This is \(\text {TS-UF-}0\) in our hierarchy. But allowing asynchronous requests is a feature of this scheme and model. A corrupted leader could ask one honest server i for a partial signature. No other server would even be aware of this request, but the adversary would now have \( psig _i\). Under \(\text {TS-UF-}0\), the forgery is now trivial, and the adversary does not win. Yet (assuming a threshold \(t\ge 2\)), there is no reason possession of just \( psig _i\) should allow creation of a signature, and indeed for threshold BLS there is no attack that seems able to create such a signature, indicating the scheme is achieving more than \(\text {TS-UF-}0\). This leads to the next level of our hierarchy, \(\text {TS-UF-}1\), where the non-triviality condition is that a partial signature of M was requested from at most \(t-1-c\) honest servers, where c is the number of corrupted servers. Does threshold BLS achieve this \(\text {TS-UF-}1\) definition? As we will see, proving this presents challenges, but we will succeed in showing that the answer is yes, under a variant of the computational Diffie-Hellman (CDH) assumption. (The proof is deferred to [7] for lack of space). Yet \(\text {TS-UF-}1\) has not been considered in the literature, and only \(\text {TS-UF-}0\) has been proved for many other non-interactive schemes [10, 29, 37, 40]. The only exceptions are the work of Libert, Joye, and Yung [33] and recent concurrent work by Groth [28], which reaches a similar conclusion for BLS. (We discuss the relation below). We note that Shoup [37] implicitly tackles a similar technical challenge by dealing with differing corruption and reconstruction thresholds, but the resulting security notion is not \(\text {TS-UF-}1\).

The distinction between \(\text {TS-UF-}1\) and \(\text {TS-UF-}0\) is not just academic. Implicit in applications of threshold signing in wallets is the fact that servers also perform well-formedness checks of what is being signed (typically, as part of a transaction). \(\text {TS-UF-}1\) guarantees that every issued signature has been inspected by sufficiently many servers, but \(\text {TS-UF-}0\) does not.

Yet the hierarchy needs to go higher, and this becomes apparent when looking at partially non-interactive schemes like \(\textsf{FROST}1\) [30], and its optimized version, \(\textsf{FROST}2\), which we introduce. Here, the discussion becomes more subtle, and interesting.

In more detail, a \(\textsf{FROST}1\) pre-processing token takes the form of a pair \( pp _i = (g^{r_i}, g^{s_i})\) of group elements for one-time use. (A server will ensure that the pre-processing token in its name in the leader request is one it has previously sent, and will never use it again). An honest request \( lr \) includes, along with the message M to be signed, a sufficiently large server set \( lr .\textsf{SS}\subseteq [1..\textsf{ns}]\), and, for each i in this set, a pre-processing token \( pp _i\) that i previously sent. Each server \(i \in lr .\textsf{SS}\) will then generate a signature share \( psig _i = (R, z_i)\), where R is a value which can be computed (publicly) from the tokens included in \( lr \), whereas \(z_i\) depends on the discrete logarithms of the server’s token and its own key share \(\textit{sk}_i\). The \(z_i\)’s can then be aggregated into a value z such that \((R, z)\) is a valid Schnorr signature for M.
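As an illustrative aid, the following Python sketch walks through this signing flow in a toy group (the parameters are far too small to be secure, the hash functions are ad-hoc stand-ins, and the servers are simulated locally); the binding-scalar construction follows the FROST blueprint, and the sketch checks that the aggregated \((R, z)\) verifies as a plain Schnorr signature:

```python
import hashlib, random

q, p, g = 2039, 1019, 4   # toy group: order-p subgroup of Z_q^*, generated by g

def H(tag, *vals):        # toy hash to Z_p, domain-separated by tag
    data = "|".join([tag] + [str(v) for v in vals]).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def shamir_share(sk, t, ns):
    coeffs = [sk] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [f(i) for i in range(1, ns + 1)]

def lagrange(i, S):       # Lagrange coefficient at 0 for server i, signer set S
    lam = 1
    for j in S:
        if j != i:
            lam = lam * j * pow(j - i, -1, p) % p
    return lam

t, ns = 2, 3
sk = random.randrange(1, p)
vk = pow(g, sk, q)
shares = shamir_share(sk, t, ns)

# Pre-processing round: server i sends a one-time token pp_i = (g^{d_i}, g^{e_i})
S = [1, 3]
nonces, tokens = {}, {}
for i in S:
    d, e = random.randrange(1, p), random.randrange(1, p)
    nonces[i] = (d, e)
    tokens[i] = (pow(g, d, q), pow(g, e, q))

# Signing round: a binding scalar rho_i ties each token to the whole request,
# so R is publicly computable from lr = (M, S, tokens) alone
M = "pay Bob 5"
rho = {i: H("rho", i, M, sorted(tokens.items())) for i in S}
R = 1
for i in S:
    D, E = tokens[i]
    R = R * D * pow(E, rho[i], q) % q
c = H("chal", R, vk, M)

z = 0                     # z_i = d_i + e_i*rho_i + lambda_i*sk_i*c, then summed
for i in S:
    d, e = nonces[i]
    z = (z + d + e * rho[i] + lagrange(i, S) * shares[i - 1] * c) % p

assert pow(g, z, q) == R * pow(vk, c, q) % q   # (R, z) verifies as Schnorr
```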

In terms of our framework, we show that \(\textsf{FROST}1\) achieves \(\mathrm {TS\hbox {-}SUF}\hbox {-}3\) security. This considers a signature trivial even if some of the honest servers in \( lr .\textsf{SS}\) do not respond to a (malicious) leader request, as long as the tokens associated with these servers are not honestly generated. In particular, the honest servers may not respond because they recognize these tokens as invalid, or because the malicious leader did not submit the request to them. We show that, while \(\textsf{FROST}2\) fails to achieve \(\mathrm {TS\hbox {-}SUF}\hbox {-}3\), it achieves the next step down in our hierarchy, \(\mathrm {TS\hbox {-}SUF}\hbox {-}2\). This is still stronger than the notions lower in the hierarchy. Our proofs for \(\textsf{FROST}1\) and \(\textsf{FROST}2\) signing operations rely on the OMDL assumption and the ROM.

A stronger security goal (\(\text {TS-UF-}4\) in our hierarchy) is to expect that the only way to obtain a signature for a message M is to follow the above blueprint, i.e., to issue the same honest leader request \( lr \) to all servers in \( lr .\textsf{SS}\). In fact, we may even ask for more in terms of strong unforgeability: the value R is uniquely defined by \( lr \), and, along with the message M, it defines a unique signature (although this signature is not efficiently computable given the verification key alone). An ideal goal, which corresponds to our strongest security goal, is to ensure that the only way to generate the signature associated with \( lr \) is to obtain a signature share for \( lr \) from every honest server whose tokens are included in \( lr \). This is a notion we refer to as \(\mathrm {TS\hbox {-}SUF}\hbox {-}4\).

We will however show that neither \(\textsf{FROST}1\) nor \(\textsf{FROST}2\) meet \(\mathrm {TS\hbox {-}SUF}\hbox {-}4\). To overcome this, we will show a general transformation which can boost the security of a \(\mathrm {TS\hbox {-}SUF}\hbox {-}3\)-secure scheme like \(\textsf{FROST}1\) to achieve \(\mathrm {TS\hbox {-}SUF}\hbox {-}4\). Our framework allows schemes more general than the FROST ones, and also leaves open the question of better and more efficient designs achieving the stronger notions. Moreover, we provide simple reference schemes for all of our notions, which, while inefficient, guide us in understanding the subtle differences among notions and baseline requirements. In particular, these schemes will enable us to separate the proposed notions.

In summary, our unforgeability notions declare a signature for a message M trivial in the following cases:

  • \(\underline{\text {TS-UF-}0}\): A partial signature for the message M was generated by at least one honest server.

  • \(\underline{\text {TS-UF-}1}\): A partial signature for the message M was generated by at least \(t-c\) honest servers, where c is the number of corrupted servers.

  • \(\underline{\text {TS-UF-}2}\): There exists a leader request \( lr \) for the message M which was answered by at least \(t-c\) honest servers.

  • \(\underline{\text {TS-UF-}3}\): There exists a leader request \( lr \) for the message M such that every honest server \(i \in lr .\textsf{SS}\) either answered \( lr \) or the token \( pp _i\) associated with i in \( lr \) is maliciously generated.

  • \(\underline{\text {TS-UF-}4}\): There exists a leader request \( lr \) for the message M such that every honest server \(i \in lr .\textsf{SS}\) answered \( lr \).

Analogous notions of strong unforgeability are obtained by further associating a request \( lr \) to a (unique) signature, in addition to a message M.
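The bookkeeping behind these conditions can be rendered schematically in Python. The record structure below (which requests were answered by which honest servers) is our own illustrative simplification, and the \(\text {TS-UF-}3\) predicate is omitted since it additionally requires echo-scheme token bookkeeping. The final assertions exhibit a scenario that is trivial under \(\text {TS-UF-}1\) but a valid win under \(\text {TS-UF-}2\):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LeaderRequest:      # hypothetical record of one leader request lr
    msg: str
    SS: tuple             # the server set lr.SS

ns, t = 4, 3
CS = {4}                              # corrupted servers (leader also corrupt)
honest = set(range(1, ns + 1)) - CS
c = len(CS)

answered = {}                         # lr -> set of honest servers answering lr

def tf0(M):   # trivial: >= 1 honest server signed M
    return any(lr.msg == M and srv for lr, srv in answered.items())

def tf1(M):   # trivial: >= t - c honest servers signed M (requests may differ)
    signers = set()
    for lr, srv in answered.items():
        if lr.msg == M:
            signers |= srv
    return len(signers) >= t - c

def tf2(M):   # trivial: a single request for M answered by >= t - c honest servers
    return any(lr.msg == M and len(srv) >= t - c for lr, srv in answered.items())

def tf4(M):   # trivial: a request for M answered by every honest server in lr.SS
    return any(lr.msg == M and all(i in srv for i in lr.SS if i in honest)
               for lr, srv in answered.items())

# Two *different* requests for the same message, each answered by one
# honest server, so t - c = 2 honest servers signed M in total:
answered[LeaderRequest("M", (1, 2, 3))] = {1}
answered[LeaderRequest("M", (2, 3, 4))] = {2}

assert tf0("M") and tf1("M")            # trivial under TS-UF-0 and TS-UF-1 ...
assert not tf2("M") and not tf4("M")    # ... but a win under TS-UF-2 and TS-UF-4
```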

We stress that it is not clear which scenarios demand which notions in our hierarchy. This is especially true because we are still lacking formal analyses of full-fledged systems using threshold signatures, but it is not hard to envision a potential mismatch between natural expectations from such schemes and what they actually achieve. In both FROST variants, for example, it is natural to expect that a signature can only be generated by a sufficient number of honest servers answering the same request, a property which we show is actually achieved. Further, one may also expect that all honest servers that generated these honest tokens need to be involved in the generation of a valid signature, but this stronger property is actually not achieved by either of the FROST variants.

Our syntax above assumes key generation is carried out by a trusted algorithm, which allows us to focus on the signing protocol. However, security in practice is enhanced when the key generation itself is a distributed threshold protocol, so that the key is never in the clear in any one location, even ephemerally. In this setting, we prove the security of \(\textsf{FROST}2\) with the distributed key generation protocol (DKG) originally proposed in [30], which we refer to as \(\textsf{PedPoP}\). Our proof for the combination of \(\textsf{FROST}2\) and \(\textsf{PedPoP}\) relies on the ROM, the OMDL assumption, and a new knowledge-type assumption. However, we stress that the latter assumption is only necessary to handle \(\textsf{PedPoP}\), as indeed we give stronger proofs of security without this assumption in a setting with ideal key generation.

Our framework does not handle adaptive corruptions; instead, we demand that the adversary declare its corruption set initially. We could extend our definitions to adaptive corruptions rather easily, but our concrete bounds would be impacted. In particular, we would resort to a generic reduction that guesses the corrupted set beforehand, with a multiplicative loss of \(2^{\textsf{ns}}\), which is acceptable for the small numbers \(\textsf{ns}\) of parties that we consider common in practice.

Our framework cannot cover recent protocols, like that of Canetti et al. [13], which combine a multi-round message-independent pre-processing phase with a final, message-dependent round. (Conversely, their UC security analysis does not yield definitions comparable to those in our fine-grained framework).

Many prior works also consider robustness, i.e., the guarantee that a signature is always produced. Here, we follow the same viewpoint as in FROST, and do not focus on robustness explicitly. This allows us to avoid imposing a small \(t\) (relative to \(\textsf{ns}\)) just for the sake of ensuring robustness. However, our schemes all implicitly give verification keys \(\textit{vk}_i\) for each server, and it is not hard to verify individual partial signatures \( psig _i\). Any \(t\) valid partial signatures will always aggregate into a valid signature.

A recent preprint by Groth [28] presents a general definition for fully non-interactive schemes in a setting with a (non-interactive) DKG. His definition implies \(\text {TS-UF-}1\), and he also provides a proof sketch that BLS (with his newly proposed non-interactive DKG) is secure under a variant of the OMCDH assumption, which is closely related to our variant of the CDH assumption and which we also show to be hard in the generic group model (GGM). Groth’s framework is not suitable for partially non-interactive schemes like FROST, which are the main focus of our work.

This paper is the result of a (hard) merge imposed by the Crypto 2022 PC on two submissions. CKM [14] introduces \(\textsf{FROST}2\). BTZ [7] introduces the framework and definitions for non-interactive schemes with trusted key generation and proofs for BLS, \(\textsf{FROST}1\) and \(\textsf{FROST}2\) in this framework. CKM [14] provides a proof of security for \(\textsf{FROST}2\) that includes distributed key generation. Most security proofs have been deferred to the respective full versions. We see each group of authors as responsible for the contribution relevant to their part of the work.

2 Preliminaries

If \(b\ge a\ge 1\) are positive integers, then \({{\mathbb Z}}_a\) denotes the set \(\{0, \ldots , a-1\}\) and [a..b] denotes the set \(\{a,\ldots ,b\}\). If \(\boldsymbol{x}\) is a vector then \(|\boldsymbol{x}|\) is its length (the number of its coordinates), \(\boldsymbol{x}[i]\) is its i-th coordinate and \([\boldsymbol{x}] = \{\,\boldsymbol{x}[i] \, :\,1\le i\le |\boldsymbol{x}|\,\}\) is the set of all its coordinates. A string is identified with a vector over \(\{0,1\}\), so that if x is a string then x[i] is its i-th bit and |x| is its length. By \(\varepsilon \) we denote the empty vector or string. The size of a set S is denoted |S|. For sets D, R let \(\textrm{FNS}(D,R)\) denote the set of all functions \(f{\,:\,}D\rightarrow R\).

Let S be a finite set. We let \(x \xleftarrow{\$}S\) denote sampling an element uniformly at random from S and assigning it to x. We let \(y \leftarrow A^{\textrm{O}_1, \ldots }(x_1,\ldots ; r)\) denote executing algorithm A on inputs \(x_1,\ldots \) and coins r with access to oracles \(\textrm{O}_1, \ldots \) and letting y be the result. We let \(y \xleftarrow{\$}A^{\textrm{O}_1, \ldots }(x_1,\ldots )\) be the result of picking r at random and letting \(y \leftarrow A^{\textrm{O}_1, \ldots }(x_1,\ldots ;r)\). Algorithms are randomized unless otherwise indicated. Running time is worst case.

We use the code-based game playing framework of [6]. (See Fig. 2 for an example). Games have procedures, also called oracles. Among the oracles are \(\textsc {Init}\) (Initialize) and \(\textsc {Fin}\) (Finalize). In executing an adversary \(\mathcal{A}\) with a game \(\textrm{Gm}\), the adversary may query the oracles at will, with the restriction that its first query must be to \(\textsc {Init}\) (if present), its last to \(\textsc {Fin}\), and that it queries each of these oracles at most once. The value returned by the \(\textsc {Fin}\) procedure is taken as the game output. By \(\textrm{Gm}(\mathcal{A}) \Rightarrow y\) we denote the event that the execution of game \(\textrm{Gm}\) with adversary \(\mathcal{A}\) results in output y. We write \(\Pr [\textrm{Gm}(\mathcal{A})]\) as shorthand for \(\Pr [\textrm{Gm}(\mathcal{A})\Rightarrow \textsf{true}]\), the probability that the game returns \(\textsf{true}\).

In writing game or adversary pseudocode, it is assumed that Boolean variables are initialized to \(\textsf{false}\), integer variables are initialized to 0 and set-valued variables are initialized to the empty set \(\emptyset \).
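For readers unfamiliar with this framework, the following toy Python rendering (our own illustrative analogue, not the formalism of [6] itself) shows the Init-first/Fin-last discipline and an empirical estimate of \(\Pr [\textrm{Gm}(\mathcal{A})]\) for a bit-guessing game:

```python
import random

class GuessGame:
    """Toy code-based game: oracles are methods; Init must come first,
    Fin last, each queried at most once. Flags default to false, per the
    framework's initialization conventions."""
    def __init__(self):
        self.init_done = self.fin_done = False

    def Init(self):
        assert not self.init_done          # Init may be queried at most once
        self.init_done = True
        self.b = random.randrange(2)       # secret bit sampled by the game
        # Init would return any public setup; here there is none

    def Fin(self, guess):
        assert self.init_done and not self.fin_done
        self.fin_done = True
        return guess == self.b             # the game output Gm(A) => true/false

def adversary(gm):
    gm.Init()
    return gm.Fin(0)    # a blind adversary: always guess 0

# Estimate Pr[Gm(A)] empirically; for this game it is exactly 1/2 for any A
trials = 4000
est = sum(adversary(GuessGame()) for _ in range(trials)) / trials
assert 0.4 < est < 0.6
```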

Let \(\mathbb {G}\) be a group of order p. We will use multiplicative notation for the group operation, and we let \(1_{\mathbb {G}}\) denote the identity element of \(\mathbb {G}\). We let \(\mathbb {G}^*=\mathbb {G}\setminus \{1_{\mathbb {G}}\}\) denote the set of non-identity elements, which is the set of generators of \(\mathbb {G}\) if the latter has prime order. If \(g \in \mathbb {G}^*\) is a generator and \(X \in \mathbb {G}\), the discrete logarithm base g of X is denoted \(\textsf{DL}_{\mathbb {G}, g}(X)\), and it is in the set \({{\mathbb Z}}_{|\mathbb {G}|}\).

3 A Framework for Non-interactive Threshold Signatures

We present our hierarchy of definitions of security for non-interactive threshold schemes, formalizing both unforgeability (UF) and strong unforgeability (SUF) in several ways. We provide relations between all notions considered.

3.1 Syntax and Correctness

Parties as implemented in protocols would maintain state. When activated with some inputs (which include messages from other parties), they would apply some algorithm \(\textsf{Alg}\) to these and their current state to get outputs (including outgoing messages) and an updated state. To model this, we do not change our definition of algorithms, but make the state an explicit input and output that will, in definitions, be maintained by the overlying game. Thus, we would write something like \((\textsf{out},\textsf{st}) \xleftarrow{\$}\textsf{Alg}(\textsf{in},\textsf{st})\).

A non-interactive threshold signature scheme \(\textsf{TS}\) specifies a number \(\textsf{ns}\ge 1\) of servers, a reconstruction threshold \(t\), a set \(\textsf{HF}\) of functions from which the random oracle is drawn, a key generation algorithm \(\textsf{Kg}\), a server pre-processing algorithm \(\textsf{SPP}\), a leader pre-processing algorithm \(\textsf{LPP}\), a leader signing-request algorithm \(\textsf{LR}\), a server partial-signature algorithm \(\textsf{PS}\), a leader partial-signature aggregation algorithm \(\textsf{Agg}\) and a verification algorithm \(\textsf{Vf}\). If disambiguation is needed, we write \(\textsf{TS}.\textsf{ns}, \textsf{TS}.t, \textsf{TS}.\textsf{HF}, \textsf{TS}.\textsf{Kg},\textsf{TS}.\textsf{SPP}, \textsf{TS}.\textsf{LPP}, \textsf{TS}.\textsf{LR}, \textsf{TS}.\textsf{PS}, \textsf{TS}.\textsf{Agg}, \textsf{TS}.\textsf{Vf}\), respectively. We now explain the operation and use of these components, the understanding of which may be aided by already looking at the correctness game \(\textbf{G}^{\text {ts-cor}}_{\textsf{TS}}\) of Fig. 1.

Parties involved are a leader (numbered 0, implicit in some prior works, but made explicit here) and servers numbered \(1,\ldots ,\textsf{ns}\), for a total of \(\textsf{ns}+1\) parties. Algorithms have oracle access to a function \(\textsf{h}\) that is drawn at random from \(\textsf{HF}\) in games (line 1 Fig. 1) and plays the role of the random oracle. Specifying \(\textsf{HF}\) as part of the scheme allows the domain and range of the random oracle to be scheme dependent.

Fig. 1. Game used to define correctness of threshold signature scheme \(\textsf{TS}\) with threshold t.

The key generation algorithm \(\textsf{Kg}\), run once at the beginning (line 1 of Fig. 1), creates a public signature-verification key \(\textit{vk}\), associated public auxiliary information \(\textit{aux}\) and an individual secret signing key \(\textit{sk}_i\) for each server \(i\in [1..\textsf{ns}]\). (Usually, \(\textit{sk}_1,\ldots ,\textit{sk}_{\textsf{ns}}\) will be shares of a global secret key \(\textit{sk}\), but the definitions do not need to make \(\textit{sk}\) explicit. The leader does not hold any secrets associated to \(\textit{vk}\)). While key generation may in practice be performed by a distributed key generation protocol, our syntax assumes it is done by a trusted algorithm to allow a modular treatment. Keys are held by parties in their state, encoded into dedicated fields of the latter as shown at line 3 of Fig. 1. For specific schemes, we will typically use \(\textit{aux}\) to model additional information that can be leaked by the key generation step without violating security (e.g., the values \(g^{\textit{sk}_i}\) in most cases).

The signing protocol can be seen as having two rounds, which we think of as a pre-processing and an online stage. In a pre-processing round, any server i can run \(( pp ,\textsf{st}_i) \xleftarrow{\$}\textsf{SPP}[\textsf{h}](\textsf{st}_i)\) to get a pre-processing token \( pp \), which it sends to the leader. (Here \(\textsf{st}_i\) is the state of i.) Via \(\textsf{st}_0 \leftarrow \textsf{LPP}[\textsf{h}]( pp ,\textsf{st}_0)\), the leader updates its state \(\textsf{st}_0\) to incorporate token \( pp \). (In Fig. 1, this is reflected in lines 5–7).

In a signing round the leader begins with a message M and a choice of a signer set \( SS \subseteq [1..\textsf{ns}]\) of size at least t. Via \(( lr ,\textsf{st}_0) \xleftarrow{\$}\textsf{LR}[\textsf{h}](M, SS ,\textsf{st}_0)\) it generates a leader request \( lr \) that, through \(\textsf{st}_0\), implicitly depends on a choice of pre-processing tokens. (Lines 8, 9 of Fig. 1). The leader request is sent to each \(i\in SS \), who, via \(( psig _i,\textsf{st}_i) \xleftarrow{\$}\textsf{PS}[\textsf{h}]( lr ,\textsf{st}_i)\), computes a partial signature \( psig _i\) and returns it to the leader. Via \( sig \leftarrow \textsf{Agg}[\textsf{h}](\{ psig _i\}_{i\in SS },\textsf{st}_0)\), the leader aggregates the partial signatures into a signature \( sig \) of M, the desired output of the protocol. (Lines 12–14 of Fig. 1).

The verification algorithm, like in a standard signature scheme, takes \(\textit{vk}\), a message M and a candidate signature, and returns a boolean validity decision.
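To make the syntax and state-threading concrete, here is a minimal Python instantiation of the interfaces \(\textsf{Kg}, \textsf{SPP}, \textsf{LPP}, \textsf{LR}, \textsf{PS}, \textsf{Agg}, \textsf{Vf}\). The underlying "scheme" is a deliberately insecure placeholder (a single partial signature leaks the key share), chosen only so the flow is runnable end to end; the toy group parameters and data layouts are illustrative assumptions:

```python
import hashlib, random

q, p, g = 2039, 1019, 4   # toy order-p subgroup of Z_q^* (NOT secure sizes)
ns, t = 3, 2

def Hm(M):                # toy hash of the message into Z_p
    return int.from_bytes(hashlib.sha256(M.encode()).digest(), "big") % p

def Kg():
    """Trusted key generation: Shamir-share sk, publish vk = g^sk."""
    coeffs = [random.randrange(1, p)] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    vk = pow(g, coeffs[0], q)
    st = {i: {"sk": f(i), "tokens": set()} for i in range(1, ns + 1)}
    st[0] = {"pp": {}}    # leader state: latest token received per server
    return vk, st

def SPP(i, st):           # server pre-processing: fresh one-time token
    token = random.randrange(p)
    st[i]["tokens"].add(token)
    return token

def LPP(i, pp, st):       # leader incorporates server i's token into its state
    st[0]["pp"][i] = pp

def LR(M, SS, st):        # leader request echoes the stored tokens
    return {"msg": M, "SS": tuple(SS), "PP": {i: st[0]["pp"][i] for i in SS}}

def PS(i, lr, st):        # server checks and consumes its token, then signs
    if lr["PP"][i] not in st[i]["tokens"]:
        return None
    st[i]["tokens"].discard(lr["PP"][i])       # one-time use
    return st[i]["sk"] * Hm(lr["msg"]) % p     # INSECURE placeholder psig_i

def Agg(lr, psigs):       # Lagrange-combine the partial signatures
    S, sig = lr["SS"], 0
    for i in S:
        lam = 1
        for j in S:
            if j != i:
                lam = lam * j * pow(j - i, -1, p) % p
        sig = (sig + lam * psigs[i]) % p
    return sig            # equals sk * Hm(M) mod p

def Vf(vk, M, sig):       # verification from vk, M and the candidate signature
    return pow(g, sig, q) == pow(vk, Hm(M), q)

vk, st = Kg()
for i in (1, 2):          # pre-processing round for servers 1 and 2
    LPP(i, SPP(i, st), st)
lr = LR("hello", (1, 2), st)
psigs = {i: PS(i, lr, st) for i in lr["SS"]}
assert Vf(vk, lr["msg"], Agg(lr, psigs))
```

Note how the game-style flow threads each party's state explicitly: the leader's state accumulates tokens, and a server's state marks its token consumed, exactly the bookkeeping the correctness game of Fig. 1 performs.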

We define a sub-class of non-interactive threshold schemes that we call echo schemes. Recall that a leader request \( lr \) is mandated to specify a message \( lr .\textsf{msg}\) and a set \( lr .\textsf{SS}\subseteq [1..\textsf{ns}]\) of servers from whom partial signatures are being requested. In an echo scheme, \( lr \) additionally specifies a function \( lr .\textsf{PP}{\,:\,} lr .\textsf{SS}\rightarrow \{0,1\}^*\). If the leader is honest, \( lr .\textsf{PP}(i)\) is a token \( pp \) that i had previously sent to the leader. That is, the leader is echoing tokens back to the servers, whence the name. In considering security, of course, \( lr .\textsf{PP}(i)\) is picked by the adversary and may not be a prior token. As we will discuss in Sect. 4.1, FROST is a typical example of an echo scheme.

The game of Fig. 1 defines correctness, and serves also to detail the above. Recall that \(\textsf{TS}\) specifies a threshold \(t\in [1..\textsf{ns}]\). The adversary will make the leader’s pre-processing requests via oracle \(\textsc {PPO}\), and will likewise make the leader’s signing requests via a signing oracle. If any condition listed under Require: fails, the adversary is understood as losing, the game automatically returning \(\textsf{false}\). We let \(\textbf{Adv}^{\mathrm {ts\hbox {-}corr}}_{\textsf{TS}}(\mathcal{A}) = \Pr [\textbf{G}^{\text {ts-cor}}_{\textsf{TS}}(\mathcal{A})]\) be the advantage of an adversary \(\mathcal{A}\). The default requirement is perfect correctness, which means that \(\textbf{Adv}^{\mathrm {ts\hbox {-}corr}}_{\textsf{TS}}(\mathcal{A}) = 0\) for all \(\mathcal{A}\), regardless of computing time and number of oracle queries, but this can be relaxed, as may be necessary for lattice-based protocols.

The correctness definition should be interpreted as follows: a request \( lr \) is associated with a set \( SS \) and a message M, and if the leader issues such a request successfully (i.e., \( lr \ne \bot \)), then all servers in \( SS \) accept \( lr \), producing partial signatures which aggregate into a valid signature for M. We note that this definition assumes that requests are submitted to all servers in the same order. One can give a stronger (but more complex) definition which ensures correctness even when servers process requests in different orders; however, the two definitions are equivalent for all schemes we discuss below, and we hence omit the more cumbersome game.

Fig. 2. Games used to define TS-UF-\(i\) and TS-SUF-\(i\) unforgeability of threshold signature scheme \(\textsf{TS}\). Line 20 is included only in game \(\textbf{G}^{\text {ts-uf-}{i}}_{\textsf{TS}}\) and line 21 only in game \(\textbf{G}^{\text {ts-suf-}{i}}_{\textsf{TS}}\). These lines refer to the trivial-forgery predicates \(\texttt{tf}_{i}( lr )\) and trivial-strong-forgery predicates \(\texttt{tsf}_{i}( lr ,\textit{vk}, sig )\) from Fig. 3. In particular, the set \(\textrm{S}_3( lr )\) and, thus, TS-UF-\(3\) and TS-SUF-\(3\) unforgeability are defined only if \(\textsf{TS}\) is an echo scheme.

3.2 Unforgeability and Strong Unforgeability

Unforgeability as usual asks that the adversary be unable to produce a valid signature \( sig \) on some message M of its choice except in a trivial way. The question is what “trivial” means. For regular signatures, it means that the adversary did not obtain a signature of M from the signing oracle [27]. For threshold signatures, it is more subtle. We will give several definitions.

Figure 2 simultaneously describes several games \(\textbf{G}^{\text {ts-uf-}{i}}_{\textsf{TS}}\), for \(i=0,1,2,3,4\), where the game for \(i=3\) is only defined if \(\textsf{TS}\) is an echo scheme. (We will get to the second set of games later). They are almost the same, differing only at line 20. The corresponding advantage of an adversary \(\mathcal{A}\) is \(\textbf{Adv}^{\mathrm {ts\hbox {-}uf\hbox {-}{i}}}_{\textsf{TS}}(\mathcal{A}) = \Pr [\textbf{G}^{\text {ts-uf-}{i}}_{\textsf{TS}}(\mathcal{A})]\). The adversary calls \(\textsc {Init}\) with a choice of a set of servers to corrupt. It is also viewed as having corrupted the leader. Playing the leader role, it can request pre-processing tokens via oracle \(\textsc {PPO}\). It can provide a server with a leader-request \( lr \) of its choice to obtain a partial signature \( psig \), via oracle \(\textsc {PSignO}\). At the end, it outputs to \(\textsc {Fin}\) its forgery message M and signature \( sig \). If the signature is not valid, line 18 ensures that the adversary does not win. To win, the signature must additionally be non-trivial, and it is in how this is defined that the games differ. Associated to i is a trivial-forgery predicate \(\texttt{tf}_i\) that is invoked at line 20. The choices for these predicates are shown in the table in Fig. 3, and the notion corresponding to game \(\textbf{G}^{\text {ts-uf-}{i}}_{\textsf{TS}}\) is denoted \(\text {TS-UF-}i\). When \(i=0\) we have the usual notion from the literature, used in particular in [8, 23, 25]. As i increases, we become more stringent (less generous) in declaring a forgery trivial, and the notion gets stronger.

Concretely, \(\text {TS-UF-}0\) considers a signature for a message M trivial if a request \( lr \) with \( lr .\textsf{msg}= M\) was answered by some honest server with a partial signature. Moving on, \(\text {TS-UF-}1\) strengthens this by declaring a signature trivial only if at least \(t - | CS |\) servers have responded to some request for message M, where these requests may have been different. In turn, \(\text {TS-UF-}2\) strengthens this further by requiring that a single prior request \( lr \) for M was answered by \(t - | CS |\) servers.

The notion \(\text {TS-UF-}3\) only deals with echo schemes. Recall that for these schemes, a request \( lr \) contains a map \( lr .\textsf{PP}{\,:\,} lr .\textsf{SS}\rightarrow \{0,1\}^*\), where \( lr .\textsf{PP}(i)\) is meant to be a token issued by server i. Here, we consider a signature for message M trivial if there exists a request \( lr \) for M which is answered by all honest servers i for which \( lr .\textsf{PP}(i)\) is a valid token previously output by i, and this set consists of at least \(t - | CS |\) servers. Finally, our strongest notion, \(\text {TS-UF-}4\), simply considers a signature trivial if there exists a request \( lr \) for M which is answered by all honest servers \(i \in lr .\textsf{SS}\).
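To make the hierarchy concrete, the five predicates can be paraphrased in code. The bookkeeping structures are assumptions standing in for those of Fig. 3: for each leader request \( lr \) on the forgery message, S2[lr] collects the honest servers that answered \( lr \), S3[lr] the honest servers whose token in \( lr .\textsf{PP}\) is genuine, and S4[lr] the honest servers in \( lr .\textsf{SS}\). This is an illustrative reading of the prose above, not the games' exact pseudocode.

```python
# Illustrative paraphrase of the trivial-forgery predicates tf_0..tf_4.
# S2, S3, S4 map each leader request lr (for the forgery message M) to
# sets of honest servers; CS is the corrupted set.
def tf(i, t, CS, S2, S3, S4):
    quorum = t - len(CS)
    S1 = set().union(*S2.values()) if S2 else set()
    if i == 0:  # some honest server answered some request for M
        return any(S2.values())
    if i == 1:  # enough honest servers answered, possibly across requests
        return len(S1) >= quorum
    if i == 2:  # a single request answered by enough honest servers
        return any(len(s) >= quorum for s in S2.values())
    if i == 3:  # echo schemes: all honest token-holders answered, enough of them
        return any(S3[lr] <= S2[lr] and len(S3[lr]) >= quorum for lr in S2)
    if i == 4:  # all honest servers in lr.SS answered
        return any(S4[lr] <= S2[lr] for lr in S2)

# tf_1 can hold while tf_2 fails: enough answers, but spread over requests.
S2 = {"lr1": {1}, "lr2": {2, 3}}
S3 = {"lr1": {1}, "lr2": {2, 3}}
S4 = {"lr1": {1, 4}, "lr2": {2, 3, 4}}
assert tf(1, 3, set(), S2, S3, S4) and not tf(2, 3, set(), S2, S3, S4)
```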

It is natural to expect \(\text {TS-UF-}3\) and \(\text {TS-UF-}4\) to be similar, but as we will see below, they are actually not equivalent. (We will, however, give a transformation that boosts a \(\text {TS-UF-}3\)-secure scheme into a \(\text {TS-UF-}4\)-secure one).

Fig. 3.

Top: Trivial-forgery conditions \(\texttt{tf}_{i}( lr )\) (\(i=0,1,2,3,4\)) and trivial-strong-forgery conditions \(\texttt{tsf}_{i}( lr ,\textit{vk}, sig )\) (\(i=2,3,4\)) used to define TS-UF-\(i\) and TS-SUF-\(i\) security in games \(\textbf{G}^{\text {ts-uf-}{i}}_{\textsf{TS}}\) and \(\textbf{G}^{\text {ts-suf-}{i}}_{\textsf{TS}}\), respectively. Bottom: Relations between notions of security.

For standard signatures, strong unforgeability asks, in addition to unforgeability, that the adversary be unable to produce a new signature on any message, where new means different from any obtained legitimately for that message. We ask, does this have any counterpart in threshold signatures? In fact, FROST seems to have such a property. We now provide formalisms to capture such properties.

It turns out that giving a general definition of strong unforgeability is rather complex, and we will restrict ourselves to a natural sub-class of schemes (which includes FROST). Concretely, we ask that there is an algorithm \(\textsf{SVf}\), called a strong verification algorithm, that takes a public key \(\textit{vk}\), a leader request \( lr \), and a signature \( sig \) as inputs and outputs \(\textsf{true}\) or \(\textsf{false}\). We require that for any \(\textit{vk}, lr \) there exists at most one signature \( sig \) such that \(\textsf{SVf}(\textit{vk}, lr , sig ) = \textsf{true}\). Also, \(\textsf{TS}\) is asked to satisfy a strong correctness property which is defined using the same game as \(\textbf{G}^{\text {ts-cor}}_{\textsf{TS}}\) except the condition \(\textsf{Vf}[\textsf{h}](\textit{vk},M, sig )=\textsf{false}\) in line 15 is replaced with \(\textsf{SVf}[\textsf{h}](\textit{vk}, lr , sig )=\textsf{false}\).

For a scheme \(\textsf{TS}\) with a strong verification algorithm, we consider the \(\textbf{G}^{\text {ts-suf-}{i}}_{\textsf{TS}}\) (\(i=2,3,4\)) games in Fig. 2, where \(\textbf{G}^{\text {ts-suf-}{3}}_{\textsf{TS}}\) is only defined if \(\textsf{TS}\) additionally is an echo scheme. The differences (across the different values of i) are only in the trivial-strong forgery predicates \(\texttt{tsf}_i\) used at line 21, and the choices are again shown in the table in Fig. 3. The corresponding advantage of an adversary \(\mathcal{A}\) is \(\textbf{Adv}^{\mathrm {ts\hbox {-}suf\hbox {-}{i}}}_{\textsf{TS}}(\mathcal{A}) = \Pr [\textbf{G}^{\text {ts-suf-}{i}}_{\textsf{TS}}(\mathcal{A})]\). The ensuing notion is called \(\mathrm {TS\hbox {-}SUF}\hbox {-}i\).

3.3 Relations and Transformations

Figure 3 shows relations between the notions of unforgeability and strong unforgeability that we have defined. A (non-dotted) arrow \(\textrm{A}\rightarrow \textrm{B}\) is an implication, saying that \(\textrm{A}\) implies \(\textrm{B}\): any scheme that is \(\textrm{A}\)-secure is also \(\textrm{B}\)-secure. Now view the nodes as forming a graph whose edges are the non-dotted arrows. The thin arrow from \(\text {TS-UF-}0\) to \(\text {TS-UF-}1\) indicates that the implication holds only under a quantitatively loose reduction. (We prove this in Theorem 1). We claim that in this graph, if there is no path from a notion \(\textrm{B}\) to a notion \(\textrm{A}\), then the two are separate: there exists a scheme that is \(\textrm{B}\)-secure but not \(\textrm{A}\)-secure. The dotted arrows are separations that we explicitly prove; together with the full arrows, they establish this claim. The thick dotted arrows indicate the existence of a generic transformation lifting the security of a scheme to the stronger notion. (We establish this below as part of Theorem 2).

In [7], we give a set of (fully) non-interactive threshold schemes that we call reference schemes. They represent simple, canonical ways to achieve the different notions. They may not be of practical interest, because they have key and signature sizes proportional to \(\textsf{ns}\), but the point is to embody the notions in a representative way. Several things follow from these schemes. First, we use them to establish the separations given by the dotted lines in Fig. 3, thereby showing that any notions between which there is no path, in the graph given by the full arrows, are indeed separate. Second, we obtain a scheme that achieves our strongest notion, \(\mathrm {TS\hbox {-}SUF}\hbox {-}4\), which neither FROST nor BLS achieves. (Although we can also get such a scheme by applying our transformation from Theorem 2 to \(\textsf{FROST}1\)). Finally, reference schemes, as canonical examples, are ways to understand the notions.

\(\underline{\textsc {From}\ \text {TS-UF-}{0}\ \textsc {to}\ \text {TS-UF-}{1},\ \textsc {loosely.}}\) The following theorem shows that TS-UF-\(1\) security is implied by TS-UF-\(0\) security, although with a loss exponential in t, which is acceptable in settings where t is expected to be constant.

Theorem 1

Let \(\textsf{TS}\) be a threshold signature scheme. For any TS-UF-\(1\) adversary \(\mathcal{A}\) there exists a TS-UF-\(0\) adversary \(\mathcal{B}\) such that \(\textbf{Adv}^{\mathrm {ts\hbox {-}uf\hbox {-}{1}}}_{\textsf{TS}}(\mathcal{A}) \le \left( {\begin{array}{c}\textsf{ns}\\ t- 1\end{array}}\right) \cdot \textbf{Adv}^{\mathrm {ts\hbox {-}uf\hbox {-}{0}}}_{\textsf{TS}}(\mathcal{B}){.}\) Moreover, \(\mathcal{B}\) runs in time roughly equal to that of \(\mathcal{A}\), and the number of \(\mathcal{B}\)’s queries to each oracle is at most that of \(\mathcal{A}\).

If the adversary always corrupts \(t-1\) parties, TS-UF-\(0\) and TS-UF-\(1\) are clearly equivalent. Otherwise, consider an adversary that breaks TS-UF-\(1\) security and corrupts a subset \( CS \) of servers of size less than \(t-1\). If it wins the game by outputting \((M^*, sig ^*)\), we know \(|\textrm{S}_1(M^*)| < t - | CS |\). Therefore, we can modify the adversary to initially guess a subset \( ECS \subseteq [1..\textsf{ns}] \setminus CS \) of size \(t - | CS | - 1\) and corrupt all parties in \( ECS \). If \( ECS \) happens to contain \(\textrm{S}_1(M^*)\), the modified adversary wins the TS-UF-\(0\) game. The probability of this is at least \(1/ \left( {\begin{array}{c}\textsf{ns}- |CS|\\ t- |CS| - 1\end{array}}\right) \ge 1/\left( {\begin{array}{c}\textsf{ns}\\ t- 1\end{array}}\right) \). We give a formal proof in [7].
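The loss factor is easy to evaluate concretely; the parameters below are illustrative, not prescribed by the theorem.

```python
from math import comb

# The multiplicative loss C(ns, t-1) in Theorem 1, for illustrative parameters.
ns, t = 20, 3
loss = comb(ns, t - 1)
assert loss == 190  # C(20, 2): modest when t is a small constant

# With |CS| = 1 corruptions, the initial guess of ECS succeeds with
# probability at least 1 / C(ns - |CS|, t - |CS| - 1) >= 1 / C(ns, t - 1).
cs = 1
guess_space = comb(ns - cs, t - cs - 1)
assert guess_space == 19 and guess_space <= loss
```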

Fig. 4.

The threshold signature \(\textsf{ATS}[\textsf{TS},\textsf{DS}]\) constructed from an echo scheme \(\textsf{TS}\) and a digital signature scheme \(\textsf{DS}\) such that \(\textsf{ATS}.\textsf{ns}= \textsf{TS}.\textsf{ns}\) and \(\textsf{ATS}.t= \textsf{TS}.t\). The algorithm \(\textsf{OriginLR}\) transforms a well-formed leader request \( lr \) for \(\textsf{ATS}\) to a well-formed leader request in \(\textsf{TS}\). \(\textsf{st}_0.\textsf{SigMap}\) is a table that stores the signature corresponding to each token generated by honest servers, which is initially set to empty. \(\textrm{PS}\) denotes a set of partial signatures.

\(\underline{\textsc {From}\ \text {TS-(S)UF-}{3}\ \textsc {to}\ \text {TS-(S)UF-}{4}.}\) Figure 4 gives a general transformation from TS-(S)UF-\(3\) security to TS-(S)UF-\(4\) security. Concretely, we give a construction \(\textsf{ATS}\) from any TS-(S)UF-\(3\)-secure echo scheme \(\textsf{TS}\) and a digital signature scheme \(\textsf{DS}\). The signature size and the verification algorithm \(\textsf{Vf}\) of \(\textsf{ATS}\) are exactly those of \(\textsf{TS}\). The main idea is to use \(\textsf{DS}\) signatures to authenticate each token contained in a leader request \( lr \) of \(\textsf{TS}\), so that an honest server answers the request only if all the authentications are valid. The rest of the protocol remains the same.
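The authentication step can be sketched as follows, with HMAC standing in for the digital signature scheme \(\textsf{DS}\) (a real instantiation uses an SUF-CMA signature scheme, so that verification requires only public keys); all names and keys here are illustrative.

```python
import hashlib
import hmac

# Sketch of the token-authentication idea behind ATS[TS, DS]: each server
# tags its pre-processing token, and a server answers a leader request only
# if every token in lr.PP carries a valid tag. HMAC is a stand-in for the
# strongly unforgeable signature scheme DS; all values are toy parameters.
KEYS = {i: bytes([i]) * 32 for i in (1, 2, 3)}  # per-server DS keys (toy)

def issue_token(i, pp):
    """Server i issues token pp together with an authentication tag."""
    return pp, hmac.new(KEYS[i], pp, hashlib.sha256).digest()

def server_accepts(lr_pp):
    """lr.PP maps server index -> (token, tag); accept only if all tags verify."""
    return all(
        hmac.compare_digest(tag, hmac.new(KEYS[i], pp, hashlib.sha256).digest())
        for i, (pp, tag) in lr_pp.items()
    )

lr_pp = {i: issue_token(i, b"token-%d" % i) for i in (1, 2, 3)}
assert server_accepts(lr_pp)
lr_pp[2] = (b"forged", lr_pp[2][1])  # adversary substitutes a token
assert not server_accepts(lr_pp)
```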

In the game \(\textbf{G}^{\text {ts-(s)uf-}{4}}_{\textsf{ATS}}\), we can show that, as long as the adversary does not break the strong unforgeability of \(\textsf{DS}\), for any leader request \( lr \) with \(|\textrm{S}_2( lr )| > 0\) it holds that \(\textrm{S}_3( lr ) = \textrm{S}_4( lr )\), which implies the conditions \(\texttt{tf}_{3}\) and \(\texttt{tf}_{4}\) are equivalent. Therefore, we can reduce TS-(S)UF-\(4\) security of \(\textsf{ATS}\) to TS-(S)UF-\(3\) security of \(\textsf{TS}\) and SUF-CMA security of \(\textsf{DS}\). (The latter notion is formally defined via the game in Fig. 5). This is captured by the following theorem. (The proof is in [7]).

Fig. 5.

The game \(\textbf{G}^{\text {suf-cma}}_{\textsf{DS}}\), where \(\textsf{DS}\) is a digital signature scheme.

Theorem 2

Let \( XX \in \{ SUF , UF \}\). Let \(\textsf{TS}\) be an echo scheme and \(\textsf{DS}\) be a digital signature scheme. For any TS-XX-\(4\) adversary \(\mathcal{A}\) there exists a TS-XX-\(3\) adversary \(\mathcal{B}\) and a SUF-CMA adversary \(\mathcal {C}\) such that

$$\textbf{Adv}^{\mathrm {ts\hbox {-}xx\hbox {-}{4}}}_{\textsf{ATS}[\textsf{TS},\textsf{DS}]}(\mathcal{A}) \le \textbf{Adv}^{\mathrm {ts\hbox {-}xx\hbox {-}{3}}}_{\textsf{TS}}(\mathcal{B}) + \textsf{ns}\cdot \textbf{Adv}^{\mathrm {\text {suf-cma}}}_{\textsf{DS}}(\mathcal {C}){.}$$

Moreover, \(\mathcal{B}\) and \(\mathcal {C}\) run in time roughly equal to that of \(\mathcal{A}\). The number of \(\mathcal{B}\)’s queries to each oracle is at most that of \(\mathcal{A}\). The number of \(\mathcal {C}\)’s queries to its signing oracle is at most the number of \(\textsc {PPO}\) queries made by \(\mathcal{A}\).

4 The Security of FROST

Fig. 6.

The protocol \(\textsf{FROST}1[\mathbb {G}]\) and \(\textsf{FROST}2[\mathbb {G}]\), where \(\mathbb {G}\) is a cyclic group with prime order p and generator g. Further, \(\textsf{ns}\) is the number of parties, and \(t\) is the threshold of the schemes. We require \(t\le \textsf{ns}\le p - 1\). The protocol \(\textsf{FROST}1\) contains all but the dashed box, and the protocol \(\textsf{FROST}2\) contains all but the solid box. The function \(\textsf{h}_i(\cdot )\) is computed as \(\textsf{h}(i,\cdot )\) for \(i=1,2\). \(\textrm{PS}\) denotes a set of partial signatures.

4.1 The \(\textsf{FROST}1\) and \(\textsf{FROST}2\) Schemes

This section revisits the security of FROST, a (partially) non-interactive threshold signature scheme first proposed by Komlo and Goldberg [31].

First, we consider the original scheme, which we refer to as \(\textsf{FROST}1\). We then present \(\textsf{FROST}2\), an optimized version that reduces the number of exponentiations required for signing and verification from \(| lr .\textsf{SS}|\) to one. We give a detailed description of both schemes in Fig. 6. The leader state \(\textsf{st}_0\) contains, for each server i, a set \(\textrm{curPP}_i\) of tokens generated by server i that have not yet been used in a signing request. The state \(\textsf{st}_i\) of server i contains a function \(\textrm{mapPP}\) that maps each token \( pp \) to the randomness used to generate \( pp \), with \(\textsf{st}_i.\textrm{mapPP}( pp ) = \bot \) if \( pp \) has not yet been generated by server i or has already been used in a signing request. The coefficient \(\lambda _i^{ lr .\textsf{SS}}\) in line 39 is the Lagrange coefficient for the set \( lr .\textsf{SS}\), which is defined (for any set \(S \subseteq [1..\textsf{ns}]\)) as

$$\lambda _i^{S} := \prod _{j\in S, i\ne j} \frac{j}{j - i}{.}$$
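As a sanity check, these coefficients can be computed over \(\mathbb {Z}_p\) and used to reconstruct a Shamir-shared secret at zero; the prime and the polynomial below are toy values, not the schemes' parameters.

```python
# Illustrative check that Lagrange coefficients at zero reconstruct a
# Shamir-shared secret; p and the polynomial are toy values.
p = 2**61 - 1  # a Mersenne prime, standing in for the group order

def lagrange(i, S, p):
    """Lagrange coefficient lambda_i^S evaluated at 0, mod p."""
    num, den = 1, 1
    for j in S:
        if j != i:
            num = num * j % p
            den = den * (j - i) % p
    return num * pow(den, -1, p) % p

# Shamir sharing of sk with threshold t = 3: f(X) = sk + a1*X + a2*X^2
sk, a1, a2 = 123456789, 987654321, 555555555
f = lambda x: (sk + a1 * x + a2 * x * x) % p

for S in ([1, 2, 3], [2, 4, 7]):  # any 3 distinct shares recover f(0) = sk
    assert sum(lagrange(i, S, p) * f(i) for i in S) % p == sk
```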

The helper algorithm \(\textsf{CompPar}\) computes the parameters \(R, c, \{d_i\}_{i\in lr .\textsf{SS}}\) used during signing. The difference between \(\textsf{FROST}1\) and \(\textsf{FROST}2\) lies in how \(d_i\) is computed in \(\textsf{CompPar}\): in \(\textsf{FROST}1\), each server i gets a distinct hash value \(d_i\), while in \(\textsf{FROST}2\), the \(d_i\) are the same hash value for all servers.
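The structural difference can be sketched as follows; the hash-to-scalar encoding is a placeholder, not the schemes' exact one.

```python
import hashlib

p = 2**61 - 1  # stand-in for the group order

def h1(*parts):
    """Placeholder hash-to-scalar; the schemes' exact encoding differs."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

vk, lr = "vk", "leader-request"
SS = [1, 3, 4]

# FROST1: a distinct binding value d_i per server i in lr.SS.
d_frost1 = {i: h1(vk, lr, i) for i in SS}
# FROST2: a single binding value d shared by all servers in lr.SS.
d = h1(vk, lr)
d_frost2 = {i: d for i in SS}

assert len(set(d_frost1.values())) == len(SS)  # distinct per server
assert len(set(d_frost2.values())) == 1        # one shared value
```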

It is not hard to verify that both schemes satisfy perfect correctness.

We begin by showing that \(\textsf{FROST}2\) is TS-SUF-\(2\)-secure (under OMDL) but not TS-UF-\(3\)-secure. We then show that \(\textsf{FROST}1\) is TS-SUF-\(3\)-secure but not TS-UF-\(4\)-secure. Theoretically, our results imply the separations between TS-(S)UF-\(2\) and TS-(S)UF-\(3\) and between TS-(S)UF-\(3\) and TS-(S)UF-\(4\). Practically speaking, our results indicate a separation between the security of \(\textsf{FROST}1\) and \(\textsf{FROST}2\). To complete the picture, a TS-SUF-\(4\)-secure variant of \(\textsf{FROST}1\) can be obtained via the general transformation from Theorem 2, although it is an interesting open question whether a more efficient variant exists.

4.2 TS-SUF-2 Security of \(\textsf{FROST}2\)

Fig. 7.

The OMDL game, where \(\mathbb {G}\) is a cyclic group with prime order p and generator g.

We first show that \(\textsf{FROST}2\) is TS-SUF-\(2\)-secure in the ROM under the OMDL assumption, defined via the game in Fig. 7. Formally, we show the following theorem.

Theorem 3

For any TS-SUF-\(2\) adversary \(\mathcal{A}\) making at most \(q_s\) queries to \(\textsc {PPO}\) and at most \(q_h\) queries to \(\textsc {RO}\), there exists an OMDL adversary \(\mathcal{B}\) making at most \(2q_s+ \textsf{ns}\) queries to \(\textsc {Chal}\) such that

$$\textbf{Adv}^{\mathrm {ts\hbox {-}suf\hbox {-}{2}}}_{\textsf{FROST}2[\mathbb {G}]}(\mathcal{A}) \le \sqrt{q\cdot (\textbf{Adv}^{\textrm{omdl}}_{\mathbb {G}}(\mathcal{B}) + 3q^2/p)}\;,$$

where \(q = q_s+ q_h+ 1\). Moreover, \(\mathcal{B}\) runs in time roughly twice that of \(\mathcal{A}\), plus the time to perform at most \((4\textsf{ns}+ 2)\cdot q + 2q_s+ 2\textsf{ns}^2\) exponentiations and group operations.
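For intuition about the square-root loss, one can plug illustrative (assumed, not prescribed) parameters into the bound:

```python
from math import log2, sqrt

# Illustrative evaluation of the Theorem 3 bound; all parameters are
# assumed values, not recommendations.
p = 2**252                # stand-in for a ~252-bit group order
q_s, q_h = 2**20, 2**30   # signing and hashing query budgets
q = q_s + q_h + 1
eps_omdl = 2.0**-128      # assumed OMDL advantage of B

bound = sqrt(q * (eps_omdl + 3 * q**2 / p))
print(f"advantage bound ~ 2^{log2(bound):.1f}")  # roughly 2^-49 here
```

The \(3q^2/p\) term is negligible for these values; the dominant effect is the square root halving the OMDL security level, plus a \(\sqrt{q}\) factor.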

The core of the proof is a reduction from OMDL [2], which needs to use rewinding (via a variant of the Forking Lemma). The main challenge is to ensure that the reduction can simulate properly with a number of queries to \(\textsc {Dlog}\) smaller than the number of DL challenges. Further below, we show that \(\textsf{FROST}2\) is not TS-UF-\(3\)-secure, thus establishing that the above result is optimal with respect to our hierarchy.

Proof

(of Theorem 3). Let \(\mathcal{A}\) be an adversary as described in the theorem. Denote the output message-signature pair of \(\mathcal{A}\) as \((M^*, sig ^* = (R^*, z^*))\). Without loss of generality, we assume \(\mathcal{A}\) always queries \(\textsc {RO}\) on \(\textsf{h}_2(\textit{vk}, M^*, R^*)\) before it returns, and always queries \(\textsc {RO}\) on \(\textsf{h}_1(\textit{vk}, lr )\) prior to the query \(\textsc {PSignO}(i, lr )\) for some i and \( lr \). (This adds up to \(q_s\) additional \(\textsc {RO}\) queries, and we let \(q = q_h+ q_s+ 1\)). Let \( lr ^*\) denote the leader query such that \(\textsf{h}_1(\textit{vk}, lr ^*)\) is the first query prior to the query \(\textsf{h}_2(\textit{vk}, M^* , R^*)\) satisfying \(\textsf{SVf}[\textsf{h}](\textit{vk}, lr ^*, sig ^*) = \textsf{true}\). If no such \( lr ^*\) exists, \( lr ^*\) is set to \(\bot \). Denote the event \(E_1\) as

$$\textsf{Vf}[\textsf{h}](\textit{vk},M^*, sig ^*) \;\wedge \;( lr ^* = \bot \;\vee \;|\textrm{S}_2( lr ^*)| < t - |CS|){.}$$

It is clear that if \(\mathcal{A}\) wins the game \(\textbf{G}^{\text {ts-suf-}{2}}_{\textsf{FROST}2}\), then \(E_1\) must occur, which implies \(\Pr [E_1] \ge \textbf{Adv}^{\mathrm {ts\hbox {-}suf\hbox {-}{2}}}_{\textsf{FROST}2[\mathbb {G}]}(\mathcal{A})\). Therefore, the theorem will follow from the following lemma. (We isolate this statement as its own lemma also because it will be helpful in the proof of Theorem 5 below).    \(\square \)

Lemma 4

There exists an OMDL adversary \(\mathcal{B}\) making at most \(2q_s+ t\) queries to \(\textsc {Chal}\) such that

$$\Pr [E_1] \le \sqrt{q \cdot (\textbf{Adv}^{\textrm{omdl}}_{\mathbb {G}}(\mathcal{B}) + 3q^2/p)}{.}$$

Moreover, \(\mathcal{B}\) runs in time roughly twice that of \(\mathcal{A}\), plus the time to perform at most \((4\textsf{ns}+ 2)\cdot q + 2q_s+ 2\textsf{ns}^2\) exponentiations and group operations.

The proof of Lemma 4 is in [7]. It uses a variant of the general Forking Lemma of [3], also given in [7], that allows us to obtain better bounds in our analysis.

4.3 TS-SUF-\(3\) Security of \(\textsf{FROST}1\)

In this section, we show that \(\textsf{FROST}1\) is TS-SUF-\(3\)-secure in the ROM under the OMDL assumption. Formally, we show the following theorem.

Theorem 5

For any TS-SUF-\(3\) adversary \(\mathcal{A}\) making at most \(q_s\) queries to \(\textsc {PPO}\) and at most \(q_h\) queries to \(\textsc {RO}\), there exists an OMDL adversary \(\mathcal{B}\) making at most \(2q_s+ t\) queries to \(\textsc {Chal}\) such that

$$\textbf{Adv}^{\mathrm {ts\hbox {-}suf\hbox {-}{3}}}_{\textsf{FROST}1[\mathbb {G}]}(\mathcal{A}) \le 4\textsf{ns}\cdot q \cdot \sqrt{\textbf{Adv}^{\textrm{omdl}}_{\mathbb {G}}(\mathcal{B}) + 6q/p} \;,$$

where \(q = q_s+ q_h+ 1\). Moreover, \(\mathcal{B}\) runs in time roughly twice that of \(\mathcal{A}\), plus the time to perform at most \(6\textsf{ns}\cdot q + 4q_s+ 2\textsf{ns}^2\) exponentiations and group operations.

The proof follows a pattern similar to that of Theorem 3, but is more complex. In particular, the looser bound is due to an additional bad event that we must consider, which we upper bound via a different reduction from OMDL. As we explain in detail below, this reduction makes use of a looser variant of the “Local Forking Lemma” [1], which only resamples a single random-oracle output when rewinding. The extra looseness comes from needing to ensure an extra condition when rewinding.

Proof

(of Theorem 5). Let \(\mathcal{A}\) be the adversary described in the theorem. Denote the output message-signature pair of \(\mathcal{A}\) as \((M^*, sig ^* = (R^*, z^*))\). Without loss of generality, we assume \(\mathcal{A}\) always queries \(\textsc {RO}\) on \(\textsf{h}_2(\textit{vk}, M^* , R^*)\) before \(\mathcal{A}\) returns and always queries \(\textsc {RO}\) on \(\textsf{h}_1(\textit{vk}, lr , i)\) prior to the query \(\textsc {PSignO}(i, lr )\) for some i and \( lr \). (This adds up to \(q_s\) additional \(\textsc {RO}\) queries, and we let \(q = q_h+ q_s+ 1\)). Denote \( lr ^*\) as the leader query such that \(\textsf{h}_1(\textit{vk}, lr ^*, i)\) is the first \(\textsc {RO}\) query prior to the \(\textsf{h}_2(\textit{vk}, M^* , R^*)\) query for some i satisfying \(\textsf{SVf}[\textsf{h}](\textit{vk}, lr ^*, sig ^*) = \textsf{true}\). If such \( lr ^*\) does not exist, \( lr ^*\) is set to \(\bot \). Denote the event \(E_1\) as

$$\textsf{Vf}[\textsf{h}](\textit{vk},M^*, sig ^*) \;\wedge \;( lr ^* = \bot \;\vee \;|\textrm{S}_2( lr ^*)| < t - |CS|){.}$$

Denote the event \(E_2\) as

$$\textsf{Vf}[\textsf{h}](\textit{vk},M^*, sig ^*) \;\wedge \; lr ^* \ne \bot \;\wedge \;\textrm{S}_2( lr ^*) \ne \textrm{S}_3( lr ^*){.}$$

If \(\mathcal{A}\) wins the game \(\textbf{G}^{\text {ts-suf-}{3}}_{\textsf{FROST}1}\) and \( lr ^*\ne \bot \), we know either \(|\textrm{S}_2( lr ^*)| < t - |CS|\) or \(\textrm{S}_2( lr ^*) \ne \textrm{S}_3( lr ^*)\). Therefore, if \(\mathcal{A}\) wins the game \(\textbf{G}^{\text {ts-suf-}{3}}_{\textsf{FROST}1}\), then either \(E_1\) or \(E_2\) occurs, which implies

$$\textbf{Adv}^{\mathrm {ts\hbox {-}suf\hbox {-}{3}}}_{\textsf{FROST}1[\mathbb {G}]}(\mathcal{A}) \le \Pr [E_1] + \Pr [E_2] \le 2\max \{\Pr [E_1],\Pr [E_2]\} {.}$$

Thus, we conclude the theorem with the following two lemmas.

Lemma 6

There exists an OMDL adversary \(\mathcal{B}\) making at most \(2q_s+ t\) queries to \(\textsc {Chal}\) such that

$$\Pr [E_1] \le \sqrt{q\cdot (\textbf{Adv}^{\textrm{omdl}}_{\mathbb {G}}(\mathcal{B}) + 3q^2(\textsf{ns}+ 1)^2/p)} \;,$$

Moreover, \(\mathcal{B}\) runs in time roughly twice that of \(\mathcal{A}\), plus the time to perform at most \(6\textsf{ns}\cdot q + 4q_s+ 2\textsf{ns}^2\) exponentiations and group operations.

Lemma 7

There exists an OMDL adversary \(\mathcal{B}\) making at most \(2q_s\) queries to \(\textsc {Chal}\) such that

$$\Pr [E_2] \le \textsf{ns}\cdot q\sqrt{2(\textbf{Adv}^{\textrm{omdl}}_{\mathbb {G}}(\mathcal{B}) + 1/p)} {.}$$

Moreover, \(\mathcal{B}\) runs in time roughly twice that of \(\mathcal{A}\), plus the time to perform at most \(6\textsf{ns}\cdot q + 4q_s+ 2\textsf{ns}^2\) exponentiations and group operations.

The proof of Lemma 6 is almost the same as that of Lemma 4, so we omit the full proof. The only difference is that \(\mathcal {C}\) takes as input \(h_1,\dots ,h_{(\textsf{ns}+ 1)q}\) in order to simulate all \(\textsc {RO}\) queries. For an \(\textsc {RO}\) query \(\textsf{h}_1(\textit{vk}, lr , i)\), \(\mathcal {C}\) first enumerates all \(i'\in [\textsf{ns}]\) and assigns \(h_{(\textrm{ctr}_h-1)(\textsf{ns}+ 1) + i'}\) to \(\textsf{h}_1(\textit{vk}, lr , i')\). Then, \(\mathcal {C}\) computes the nonce R for \( lr \) and assigns \(h_{\textrm{ctr}_h(\textsf{ns}+ 1)}\) to \(\textsf{h}_2(\textit{vk}, lr .\textsf{msg}, R)\) if it has not been assigned a value yet. Similarly, for a new \(\textsc {RO}\) query \(\textsf{h}_2(\textit{vk}, M , R)\), its value is set to \(h_{\textrm{ctr}_h(\textsf{ns}+ 1)}\). The rest follows by a similar analysis.

To prove Lemma 7, we need a variant of the Local Forking Lemma of [1], which is given in [7] along with the proof of Lemma 7 itself.

4.4 Attacks for \(\textsf{FROST}1\) and \(\textsf{FROST}2\)

Fig. 8.

Adversary \(\mathcal{A}\) that wins the game \(\textbf{G}^{\text {ts-uf-}{3}}_{\textsf{FROST}2}\), where M is a fixed message.

Fig. 9.

Adversary \(\mathcal{A}\) that wins the game \(\textbf{G}^{\text {ts-uf-}{4}}_{\textsf{FROST}1}\), where M is a fixed message.

\(\underline{\textsf{FROST}2\ \textsc {is}\ \textsc {not}\ \text {TS-UF-}{3}\ \textsc {secure.}}\) Consider the setting where \(\textsf{ns}= 4\) and \(t= 3\), and the adversary \(\mathcal{A}\) for the game \(\textbf{G}^{\text {ts-uf-}{3}}_{\textsf{FROST}2}\) described in Fig. 8. We now show that \(\textbf{Adv}^{\mathrm {ts\hbox {-}uf\hbox {-}{3}}}_{\textsf{FROST}2}(\mathcal{A}) = 1\). From the execution of \(\textsc {PSignO}\), we know \(g^{z_1} = R_1S_1^d\textit{vk}_1^{\lambda _1^{\{1,2,3\}} \cdot c}\). Therefore,

$$\begin{aligned}g^z&= R_1^\gamma S_1^{d\cdot \gamma } \textit{vk}_1^{\gamma \cdot \lambda _1^{\{1,2,3\}} \cdot c} \textit{vk}_3^{\lambda _3^{\{1,3,4\}} \cdot c} \textit{vk}_4^{\lambda _4^{\{1,3,4\}} \cdot c} \\&= R g^{c\cdot \sum _{i\in \{1,3,4\}} \lambda _i^{\{1,3,4\}}\cdot \textit{sk}_i} = R \cdot \textit{vk}^c \;, \end{aligned}$$

which implies (M, (R, z)) is valid for \(\textit{vk}\). Also, it is clear that \(\textrm{S}_2( lr ) = \{1\}\) and \(\textrm{S}_3( lr ) = \{1,2\}\), which implies the condition \(\texttt{tf}_{3}( lr )\) does not hold. Therefore, \(\mathcal{A}\) wins the game with probability 1.
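As a numerical check (the adversary's code in Fig. 8 is not reproduced here), the scaling factor \(\gamma \) implicit in the verification equation above can be confirmed with exact rational arithmetic:

```python
from fractions import Fraction

# gamma is the ratio of server 1's Lagrange coefficients (at zero) for the
# sets {1,3,4} and {1,2,3}; scaling server 1's honest response by gamma
# converts its contribution from one signer set to the other.
def coeff(i, S):
    c = Fraction(1)
    for j in S:
        if j != i:
            c *= Fraction(j, j - i)
    return c

gamma = coeff(1, [1, 3, 4]) / coeff(1, [1, 2, 3])
assert coeff(1, [1, 2, 3]) * gamma == coeff(1, [1, 3, 4])
assert gamma == Fraction(2, 3)
```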

\(\underline{\textsf{FROST}1\ \textsc {is}\ \textsc {not}\ \text {TS-UF-}{4}\ \textsc {secure.}}\) Consider the setting where \(\textsf{ns}= 20\) and \(t= 3\), and the adversary \(\mathcal{A}\) for the game \(\textbf{G}^{\text {ts-uf-}{4}}_{\textsf{FROST}1}\) described in Fig. 9. We now show that \(\textbf{Adv}^{\mathrm {ts\hbox {-}uf\hbox {-}{4}}}_{\textsf{FROST}1}(\mathcal{A}) = 1\). From the execution of \(\textsc {PSignO}\), we know \(g^{z_1} = R_{1}S_{1}^{d_{11}}\textit{vk}_{11}^{\lambda _{11}^{\{11,15,20\}} \cdot c}\). The key observation is that \(\lambda _{11}^{\{11,15,20\}} = \frac{15\cdot 20}{(15 -11)(20 -11)} = \frac{25}{3} = \frac{5\cdot 10}{(5 - 11)(10 -11)} = \lambda _{11}^{\{5, 10, 11\}} {.}\) Therefore,

$$\begin{aligned} g^z&= R_1 S_1^{d_{11}} g^{r_2 + r_3 + s_2 \cdot d_{15} +s_3 \cdot d_{20}} \textit{vk}_{11}^{\lambda _{11}^{\{11,15,20\}} \cdot c} \textit{vk}_{5}^{\lambda _{5}^{\{5,10,11\}} \cdot c} \textit{vk}_{10}^{\lambda _{10}^{\{5,10,11\}} \cdot c} \\&= R g^{c\cdot \sum _{i\in \{5,10,11\}} \lambda _i^{\{5,10,11\}}\cdot \textit{sk}_i} = R \cdot \textit{vk}^c \;, \end{aligned}$$

which implies (M, (R, z)) is valid for \(\textit{vk}\). Also, it is clear that \(\textrm{S}_2( lr ) = \{11\}\) and \(\textrm{S}_4( lr ) = \{11, 15, 20\}\), which implies the condition \(\texttt{tf}_{4}( lr )\) does not hold. Therefore, \(\mathcal{A}\) wins the game with probability 1.
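The coefficient collision exploited by the attack is easy to confirm with exact rational arithmetic:

```python
from fractions import Fraction

def lam(i, S):
    """Lagrange coefficient of i at zero for the set S, over the rationals."""
    out = Fraction(1)
    for j in S:
        if j != i:
            out *= Fraction(j, j - i)
    return out

# Server 11 has the same Lagrange coefficient in {11,15,20} and {5,10,11},
# which lets the adversary reuse its partial signature across the two sets.
assert lam(11, [11, 15, 20]) == lam(11, [5, 10, 11]) == Fraction(25, 3)
```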

The attack on \(\textsf{FROST}1\) is possible because the honest server 11 replies to a leader request \( lr \) whose tokens \( lr .\textsf{PP}(15)\) and \( lr .\textsf{PP}(20)\) were generated not by the honest servers 15 and 20 but by the adversary. The attack is therefore prevented by the general transformation from TS-SUF-\(3\) to TS-SUF-\(4\) security described in Fig. 4: after the transformation, an honest server replies to a leader request only when all tokens within the request are authenticated by the corresponding servers, so the adversary can no longer generate tokens on behalf of honest servers.

5 \(\textsf{FROST}2\) with Distributed Key Generation

Fig. 10.

\(\textsf{PedPoP}\): The Pedersen distributed key generation protocol with proofs of possession.

In this section, we prove the security of \(\textsf{FROST}2\) together with distributed key generation (DKG). In particular, we prove the security of \(\textsf{FROST}2\) combined with the variant of the Pedersen DKG protocol [24] with proofs of possession that was originally proposed in combination with \(\textsf{FROST}1\) [30]. We call this protocol \(\textsf{PedPoP}\) and describe it in Fig. 10.

Throughout this section, we denote public keys by X, instead of \(\textit{vk}\), and corresponding secret keys by x, instead of \(\textit{sk}\). We also denote the joint public key by \(\tilde{X}\) and the aggregated nonce by \(\tilde{R}\). Hash function \(\textsf{h}_i(\cdot )\) is computed as \(\textsf{h}(i,\cdot )\) for \(i=0,1,2\).

The Pedersen DKG can be viewed as \(\textsf{ns}\) parallel instantiations of Feldman verifiable secret sharing (VSS) [19], which is itself derived from Shamir secret sharing [36] but additionally requires each participant to provide a vector commitment \(\vec {C}\), allowing recipients to check that their received shares are consistent with all other participants’ shares. In addition, \(\textsf{PedPoP}\) requires each participant to provide a Schnorr proof of knowledge of the secret corresponding to the first term of their commitment. This ensures that unforgeability (but not liveness) holds even if more than half of the participants are dishonest.
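The consistency check at the heart of Feldman VSS can be sketched as follows; the toy group (the order-1019 subgroup of \(\mathbb {Z}_{2039}^*\)) and all values are illustrative, not parameters anyone should use.

```python
# Sketch of the Feldman VSS check inside the Pedersen DKG: the dealer
# publishes a commitment C_k = g^{a_k} to each coefficient of f, and
# participant i verifies g^{f(i)} = prod_k C_k^{i^k}. Toy group: the
# order-1019 subgroup of Z_2039^* generated by g = 4 (illustrative only).
q, p, g = 2039, 1019, 4

a = [700, 123, 456]              # f(X) = a0 + a1 X + a2 X^2, so t = 3
C = [pow(g, ak, q) for ak in a]  # public commitment vector

def share(i):
    """Participant i's share f(i) mod p."""
    return sum(ak * pow(i, k, p) for k, ak in enumerate(a)) % p

def verify(i, s):
    """Feldman check: g^s must match the product of commitment powers."""
    lhs = pow(g, s, q)
    rhs = 1
    for k, Ck in enumerate(C):
        rhs = rhs * pow(Ck, pow(i, k, p), q) % q
    return lhs == rhs

assert all(verify(i, share(i)) for i in range(1, 6))
assert not verify(2, (share(2) + 1) % p)  # a tampered share is rejected
```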

We introduce the Schnorr knowledge of exponent assumption, which we show holds under the discrete logarithm (DL) assumption in the algebraic group model (AGM) without any tightness loss. The purpose of the assumption is to ensure that the Pedersen DKG can be run in the honest-minority setting, where we assume the existence of at least a single honest party and up to \(t-1\) corrupt parties. The assumption can be avoided if we assume an honest majority in the DKG; however, we prefer to allow more corruptions at the cost of a stronger assumption.

The assumption allows us to prove the security of multi-party signatures in settings where each participant must provide a proof of possession of their secret key during a key generation and registration phase. By formulating our desired security property directly as an assumption, we avoid the complexity of rewinding adversaries, which is required when proving security of Schnorr signatures in the ROM alone, and which may result in a tightness loss exponential in the number of parties the adversary controls [38]. The assumption implies that if an adversary can forge a Schnorr signature for some public key, then it must know the corresponding secret key. It is a non-falsifiable assumption.

Our proof of this assumption extends a result by Fuchsbauer et al. [20], which showed that the security of Schnorr signatures can be tightly reduced to the DL assumption in the AGM. We improve on their result by considering extraction rather than forgeability, and by allowing extraction even when the adversary chooses its own public key. While new to the setting of multi-party signatures, the assumption is reminiscent of prior knowledge-of-exponent assumptions [4, 15] employed to prove the security of succinct NIZK arguments (SNARKs).

Fig. 11.

Game used to define the Schnorr knowledge of exponent assumption, where \(\mathbb {G}\) is a cyclic group of order p with generator g. By \(\textsf{rl}_{\mathcal{A}}\) we denote the randomness length of \(\mathcal{A}\). \(\tilde{\textsf{h}}\) is initialized to be an empty table.

For the definition, consider the game in Fig. 11, associated to a group \(\mathbb {G}\), an adversary \(\mathcal{A}\), and an algorithm \(\textsf{Ext}\) called an extractor. The adversary \(\mathcal{A}\) is run with coins \(\omega \). It has access to a signing oracle \(\textsc {FSignO}\) that outputs a Schnorr signature, under a freshly sampled key X, on the message X itself. (The name, Full Sign Oracle, reflects that the oracle samples a fresh public key with each invocation). It can call its challenge oracle \(\textsc {Chal}\) with a triple \((X,\bar{R},\bar{z})\). If this triple was not returned by the full signing oracle, yet verifies as a Schnorr signature under public key X, the extractor is asked to find the discrete logarithm \(\alpha \) of X, and the adversary wins (the game sets \(\textsf{win}\) to \(\textsf{true}\)) if the extractor fails. The inputs to the extractor are the coins of the adversary, the description of the group \(\mathbb {G}\), the set \(Q_{\textsc {FSignO}}\), and a list \(\mathsf {Q_{\tilde{\textsf{h}}_0}}\). The latter stores, for every query \((X,X,\bar{R})\) that \(\mathcal{A}\) made to random oracle \(\tilde{\textsf{h}}_0\), the response of the oracle. (The length of the list is thus the number of \(\tilde{\textsf{h}}_0\) queries made by \(\mathcal{A}\)). Note that multiple queries to \(\textsc {Chal}\) are allowed, so that this captures the ability to perform multiple extractions.
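The verification performed by \(\textsc {Chal}\) is the standard Schnorr equation, sketched here over a toy group with an illustrative hash encoding and a fixed nonce (never acceptable in practice); the message signed is the public key itself, as in the game:

```python
import hashlib

# Schnorr sign/verify over a toy group: the order-1019 subgroup of
# Z_2039^* generated by g = 4. The hash encoding, fixed nonce, and all
# values are illustrative; Chal checks exactly the verify equation below.
q, p, g = 2039, 1019, 4

def h0(*parts):
    """Placeholder hash-to-scalar standing in for the random oracle."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def schnorr_sign(x, msg):
    r = 777                      # fixed nonce for the sketch; never do this
    R = pow(g, r, q)
    c = h0(msg, R)
    return R, (r + c * x) % p

def schnorr_verify(X, msg, R, z):
    c = h0(msg, R)
    return pow(g, z, q) == R * pow(X, c, q) % q

x = 321
X = pow(g, x, q)
R, z = schnorr_sign(x, (X, X))   # proof of possession: the message is the key
assert schnorr_verify(X, (X, X), R, z)
```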

Asymptotically, we would say that the \(\textrm{SKE}\) assumption holds with respect to \(\mathbb {G}\) if for all PPT adversaries \(\mathcal{A}\), there exists a PPT extractor \(\textsf{Ext}\) such that the advantage of \(\mathcal{A}\) relative to \(\textsf{Ext}\) in the game of Fig. 11, which would now be a function of the security parameter, is negligible.

The proof of the following can be found in [14]. For convenience, the statement is asymptotic. Note that the random oracle model is implicit through \(\tilde{\textsf{h}}_0\) being a random oracle in the game of Fig. 11.

Theorem 8

(\(\text {DL} \Rightarrow \textrm{SKE}\)). The \(\textrm{SKE}\) assumption with respect to the group \(\mathbb {G}\) is implied by the DL assumption with respect to \(\mathbb {G}\) in the AGM.

\(\underline{\text {TS-UF-}0\ \textsc {Security.}}\) In terms of our framework, our proof of \(\textsf{FROST}2 + \textsf{PedPoP}\) considers a single honest player and so aligns with the notion of \(\text {TS-UF-}0\) security defined in Fig. 2. Since we now consider distributed key generation instead of trusted key generation, the initialization oracle \(\textsc {Init}\) is replaced with a single execution of \(\textsf{PedPoP}.\textsf{KeyGen}\) as defined in Fig. 10. The proofs of possession required by \(\textsf{PedPoP}.\textsf{KeyGen}\) ensure that the simulator in our security reduction is able to extract sufficient information (via the \(\textrm{SKE}\) assumption) to simulate signing as in the \(\text {TS-UF-}0\) definition, in which all secret key material is generated by the simulator directly. The signing oracles \(\textsc {PPO}\) and \(\textsc {PSignO}\) remain identical to Fig. 2. We have not yet investigated whether \(\text {TS-UF-}2\) security holds.

5.1 Security of \(\textsf{FROST}2 + \textsf{PedPoP}\)

We now prove the security of \(\textsf{FROST}2\) with distributed key generation protocol \(\textsf{PedPoP}\) under the \(\textrm{SKE}\) and OMDL assumptions in the ROM.

Theorem 9

(\(\textsf{FROST}2+\textsf{PedPoP}\)). \(\textsf{FROST}2\) with distributed key generation protocol \(\textsf{PedPoP}\) is \(\text {TS-UF-}0\) secure under the \(\textrm{SKE}\) assumption and the OMDL assumption in the ROM.

We make use of an intermediary assumption, the binonce Schnorr computational (\(\textrm{Bischnorr}\)) assumption, which we define (Fig. 12) and prove to hold under the OMDL assumption in the ROM.

Equipped with this assumption, our proof proceeds as follows. Let \(\mathcal{A}\) be a PPT adversary attempting to break the \(\text {TS-UF-}0\) security of \(\textsf{FROST}2\). We construct a PPT adversary \(\mathcal{B}_1\) playing the \(\textrm{SKE}\) game and thence, from the \(\textrm{SKE}\) assumption, obtain an extractor \(\textsf{Ext}\) for it. We construct a PPT adversary \(\mathcal{B}_2\) playing the \(\textrm{Bischnorr}\) game such that whenever \(\mathcal{A}\) outputs a valid forgery, either \(\mathcal{B}_1\) breaks the \(\textrm{SKE}\) assumption or \(\mathcal{B}_2\) breaks the \(\textrm{Bischnorr}\) assumption. Formally, for security parameter \(\kappa \), the advantage of \(\mathcal{A}\) is bounded by the sum of the advantages of \(\mathcal{B}_1\) and \(\mathcal{B}_2\), up to negligible terms.

Fig. 12. Game used to define the binonce Schnorr computational (\(\textrm{Bischnorr}\)) assumption, where \(\mathbb {G}\) is a cyclic group of order p with generator g. \(\hat{\textsf{h}}\) is initialized to be an empty table.

The \(\textrm{Bischnorr}\) assumption equips an adversary with two oracles, \(\textsc {BinonceO}\) and \(\textsc {BisignO}\), and two hash functions, \(\hat{\textsf{h}}_1\) and \(\hat{\textsf{h}}_2\), and asks it to forge a new Schnorr signature with respect to a challenge public key \(\dot{X}\). The \(\textsc {BinonceO}\) oracle takes no input and responds with two random nonces \((R, S)\). The \(\textsc {BisignO}\) oracle takes as input an index k, a message \(M\), and a set of nonces and scalars \(\{(\gamma _i, R_i, S_i)\}_{i \in SS }\). It checks that \((R_k, S_k)\) is a \(\textsc {BinonceO}\) response and that it has not been queried on \((R_k, S_k)\) before; if either check fails, it returns an error. It then computes an aggregated randomized nonce \(\tilde{R}= \prod _{i \in SS } R_i S_i^d\), where \(d= {\hat{\textsf{h}}_1(\dot{X}, M, \{(\gamma _i, R_i, S_i)\}_{i \in SS })}\). \(\textsc {BisignO}\) then returns z such that \((\tilde{R}, z)\) is a valid Schnorr signature with respect to \(\hat{\textsf{h}}_2\). The adversary wins if it can output a verifying \((M^*, R^*, z^*)\) that was not output by \(\textsc {BisignO}\).

The oracle \(\textsc {BisignO}\) can only be queried once for each pair of nonces \((R, S)\) output by \(\textsc {BinonceO}\). The index k denotes which \((\gamma _k, R_k, S_k)\) out of the list \(\{ (\gamma _i, R_i, S_i) \}_{i \in SS }\) is being queried; the remaining scalars and nonces appear only to inform \(\textsc {BisignO}\) what to include as input to \(\hat{\textsf{h}}_1\). The scalar \(\gamma _k\) allows the response \(z_k\) to be given as \(z_k = r_k + d\cdot s_k + c \cdot \gamma _k \cdot \dot{x}\), as opposed to \(r_k + d\cdot s_k + c \cdot \dot{x}\). This is useful for threshold signatures, where \(\gamma _k\) corresponds to the Lagrange coefficient. Note that \(\{\gamma _i\}_{i \in SS }\) (in addition to the nonces) must be included as input to \(\hat{\textsf{h}}_1\), or else there is an attack.
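The shape of a single party's \(\textsc {BisignO}\) response and the equation it satisfies can be checked numerically. In the toy Python sketch below (order-11 subgroup of \(\mathbb {Z}_{23}^*\); the constants standing in for the outputs of \(\hat{\textsf{h}}_1\) and \(\hat{\textsf{h}}_2\), and all other values, are purely illustrative), the response \(z = r + d\cdot s + c \cdot \gamma \cdot \dot{x}\) verifies against the randomized nonce \(R S^d\) and the key raised to \(\gamma\):

```python
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2 (illustrative only).
q, p, g = 23, 11, 2
x_dot = 5                      # discrete log of the challenge key X_dot (illustrative)
X_dot = pow(g, x_dot, q)

r, s = 3, 8                    # secret nonces behind a BinonceO response (R, S)
R, S = pow(g, r, q), pow(g, s, q)
gamma = 4                      # e.g. a Lagrange coefficient in the threshold setting
d, c = 6, 9                    # fixed stand-ins for the outputs of the hash functions

# BisignO's response for this party: z = r + d*s + c*gamma*x_dot (mod p).
z = (r + d * s + c * gamma * x_dot) % p

# The response verifies against the randomized nonce R*S^d and X_dot^gamma:
assert pow(g, z, q) == (R * pow(S, d, q) * pow(X_dot, c * gamma, q)) % q
```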

Asymptotically, we would say that the \(\textrm{Bischnorr}\) assumption holds with respect to \(\mathbb {G}\) if for all PPT adversaries \(\mathcal{A}\), we have that the advantage of \(\mathcal{A}\) in the \(\textrm{Bischnorr}\) game, which would now be a function of the security parameter, is negligible.

The proof of the following can be found in [14]. For convenience, the statement is asymptotic.

Lemma 10

(\(\text {OMDL}\Rightarrow \textrm{Bischnorr}\)). Let \(\hat{\textsf{h}}_1, \hat{\textsf{h}}_2\) be random oracles. The \(\textrm{Bischnorr}\) assumption is implied by the OMDL assumption with respect to the group \(\mathbb {G}\) and \(\hat{\textsf{h}}_1, \hat{\textsf{h}}_2\).

Equipped with this assumption, we are now ready to prove Theorem 9.

Proof

(of Theorem 9). Let \(\mathcal{A}\) be a PPT adversary attempting to break the \(\text {TS-UF-}0\) security of \(\textsf{FROST}2\). We construct a PPT adversary \(\mathcal{B}_1\) playing the \(\textrm{SKE}\) game and thence, from the \(\textrm{SKE}\) assumption, obtain an extractor \(\textsf{Ext}\) for it. We construct a PPT adversary \(\mathcal{B}_2\) playing the \(\textrm{Bischnorr}\) game such that whenever \(\mathcal{A}\) outputs a valid forgery, either \(\mathcal{B}_1\) breaks the \(\textrm{SKE}\) assumption or \(\mathcal{B}_2\) breaks the \(\textrm{Bischnorr}\) assumption. Formally, for security parameter \(\kappa \), the advantage of \(\mathcal{A}\) is bounded by the sum of the advantages of \(\mathcal{B}_1\) and \(\mathcal{B}_2\), up to negligible terms.

\(\underline{\mathbf {The\ Reduction}\ \mathcal{B}_1{:}}\) We first define the reduction \(\mathcal{B}_1\) against \(\textrm{SKE}\). Let \( CS = \{ \textsf{id}_j\} \) be the set of corrupt parties, and let \( HS = \{ \textsf{id}_k\}\) be the set of honest parties. Assume that \(| CS |= t-1\) and \(| HS |= \textsf{ns}-(t-1)\). We will show that when \(\textsf{PedPoP}\) outputs public key share \(\tilde{X}_{k} = g^{\bar{x}_k}\) for each honest party \({\textsf{id}_k \in HS }\), \(\mathcal{B}_1\) returns \((\alpha _k, \beta _k)\) such that \(\tilde{X}_{k} = \dot{X}^{\alpha _k} g^{\beta _k}\). \(\mathcal{B}_1\) is responsible for simulating honest parties in \(\textsf{PedPoP}\) (Fig. 10) and queries to \(\textsf{h}_0\), \(\textsf{h}_1\), and \(\textsf{h}_2\). \(\mathcal{B}_1\) receives as input a group \(\mathbb {G}\) and random coins \(\omega \). It can query the random oracle \(\textsc {RO}\) from the \(\textrm{SKE}\) game. It can also query \(\textsc {FSignO}\) to receive signatures under \(\tilde{\textsf{h}}_0\), and \(\textsc {Chal}\) on inputs \((X,\bar{R},\bar{z})\) to challenge the extractor \(\textsf{Ext}\) to output a discrete logarithm \(\alpha \) of X.

Initialization. \(\mathcal{B}_1\) may program \(\textsf{h}_0, \textsf{h}_1\), and \(\textsf{h}_2\), but not \(\tilde{\textsf{h}}_0\) (because it is part of \(\mathcal{B}_1\)’s challenge). Let \(\mathsf {Q_{\textsf{h}_0}}\) be the set of \(\textsf{h}_0\) queries and their responses. \(\mathcal{B}_1\) first queries \(\textsc {FSignO}\) and receives \((\dot{X}, \bar{R}, \bar{z})\). \(\mathcal{B}_1\) computes \(\alpha _k\) for each honest party \(\textsf{id}_k \in HS \) as follows. First, \(\mathcal{B}_1\) computes the t Lagrange polynomials \(\{L_k'(Z), \{L_{j}'(Z)\}_{\textsf{id}_j \in CS }\}\) relating to the set \(\{\textsf{id}_k\} \cup CS \). Then, \(\mathcal{B}_1\) sets \(\alpha _k \leftarrow L'_k(0)^{-1}\). (It will become clear why \(\alpha _k\) is computed this way).
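The Lagrange computations used throughout the reduction amount to evaluating basis polynomials at 0 over \(\mathbb {Z}_p\) and, for \(\alpha _k\), inverting the result. A minimal Python sketch (the helper name `lagrange_at_zero` and all concrete values are ours, purely for illustration):

```python
def lagrange_at_zero(ids, k, p):
    """L_k(0) for the Lagrange basis polynomial of the point ids[k], over Z_p."""
    num, den = 1, 1
    for j, idj in enumerate(ids):
        if j == k:
            continue
        num = (num * -idj) % p
        den = (den * (ids[k] - idj)) % p
    return (num * pow(den, -1, p)) % p  # pow(x, -1, p) is the modular inverse

p = 11
f = lambda Z: (7 + 3 * Z) % p            # degree-1 polynomial; secret f(0) = 7
ids = [1, 4]                             # t = 2 identities
shares = [f(i) for i in ids]

# Interpolation at 0 recovers the secret from t shares:
secret = sum(lagrange_at_zero(ids, k, p) * shares[k] for k in range(len(ids))) % p
assert secret == 7

# The reduction's coefficient alpha_k is the modular inverse of such a value:
alpha_k = pow(lagrange_at_zero(ids, 0, p), -1, p)
```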

Hash Queries. \(\mathcal{B}_1\) handles \(\mathcal{A}\)’s hash queries throughout the DKG protocol as follows.

\(\underline{\textsf{h}_0{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_0\) on \((X, X, \bar{R})\), \(\mathcal{B}_1\) checks whether \((X, X, \bar{R}, \bar{c}) \in \mathsf {Q_{\textsf{h}_0}}\) and, if so, returns \(\bar{c}\). Else, \(\mathcal{B}_1\) queries \(\bar{c}\leftarrow \tilde{\textsf{h}}_0(X, X, \bar{R})\), appends \((X, X, \bar{R}, \bar{c})\) to \(\mathsf {Q_{\textsf{h}_0}}\), and returns \(\bar{c}\).

\(\underline{\textsf{h}_1{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_1\) on \((X, lr ) = (X, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\), \(\mathcal{B}_1\) queries \(\hat{d} \leftarrow \tilde{\textsf{h}}_1(X, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\) and returns \(\hat{d}\).

\(\underline{\textsf{h}_2{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_2\) on \((X, M, R)\), \(\mathcal{B}_1\) queries \(\hat{c} \leftarrow \tilde{\textsf{h}}_2(X, M, R)\) and returns \(\hat{c}\).

Simulating the DKG. \(\mathcal{B}_1\) runs \(\mathcal{A}\) on input coins \(\omega \) and simulates \(\textsf{PedPoP}\) as follows. \(\mathcal{B}_1\) embeds \(\dot{X}\) as the public key of the honest party that the adversary queries first. Let this first honest party be \(\textsf{id}_\tau \). \(\mathcal{B}_1\) simulates the public view of \(\textsf{id}_\tau \) but follows the \(\textsf{PedPoP}\) protocol for all other honest parties \(\{\textsf{id}_k\}_{k \ne \tau }\) as prescribed. Note that \(\mathcal{A}\) can choose the order in which it interacts with honest parties, so \(\mathcal{B}_1\) must be able to simulate any of them.

Honest Party \(\textsf{id}_\tau \). \(\mathcal{B}_1\) is required to output

$$\begin{aligned} (\bar{R}_\tau , \bar{z}_\tau ), \vec {C}_\tau = (A_{\tau ,0} = X_{\tau ,0}, A_{\tau ,1}, ..., A_{\tau ,t-1}) \end{aligned}$$

that are indistinguishable from valid outputs as well as \(t-1\) shares \(f_\tau (\textsf{id}_j) = \bar{x}_{\tau ,j}\), one to be sent to each corrupt party \(\textsf{id}_j \in CS \). Here, \((\bar{R}_\tau , \bar{z}_\tau )\) is a Schnorr signature proving knowledge of the discrete logarithm of \(X_{\tau ,0}\), and \(\vec {C}_\tau \) is a commitment to the coefficients that represent \(f_\tau \). \(\mathcal{B}_1\) simulates honest party \(\textsf{id}_\tau \) as follows.

1. \(\mathcal{B}_1\) sets the public key \(X_{\tau ,0} \leftarrow \dot{X}\).

2. \(\mathcal{B}_1\) simulates a verifiable Shamir secret sharing of the discrete logarithm of \(\dot{X}\) by performing the following steps.

(a) \(\mathcal{B}_1\) samples \(t-1\) random values \(\bar{x}_{\tau ,j} \in \mathbb {Z}_p\), one for each \(\textsf{id}_j \in CS \).

(b) Let \(f_\tau \) be the polynomial whose constant term is the challenge \(f_\tau (0) = \dot{x}\) and for which \(f_\tau (\textsf{id}_j) = \bar{x}_{\tau ,j}\) for all \(\textsf{id}_j \in CS \). \(\mathcal{B}_1\) computes the t Lagrange polynomials \(\{L_0'(Z), \{L_{j}'(Z)\}_{\textsf{id}_j \in CS }\}\) relating to the set \(\{0\} \cup CS \).

(c) For \(1 \le i \le t-1\), \(\mathcal{B}_1\) computes

      $$\begin{aligned} A_{\tau ,i} = \dot{X}^{L'_{0,i}} \prod _{\textsf{id}_j \in CS } g^{\bar{x}_{\tau ,j} \cdot L'_{j, i}} \end{aligned}$$
      (1)

      where \(L'_{j, i}\) is the \(i^{th}\) coefficient of \(L'_j(Z) = L'_{j,0} + L'_{j,1} Z + \dots + L'_{j,t-1} Z^{t-1}\).

(d) \(\mathcal{B}_1\) outputs \((\bar{R}_\tau , \bar{z}_\tau ), \vec {C}_\tau = (A_{\tau ,0} = X_{\tau ,0}, A_{\tau ,1}, ..., A_{\tau ,t-1})\) for the broadcast round, and then sends shares \(\bar{x}_{\tau , j}\) for each \(\textsf{id}_j \in CS \).

3. \(\mathcal{B}_1\) simulates private shares \(f_\tau (\textsf{id}_k) = \bar{x}_{\tau ,k}\) for honest parties \(\textsf{id}_k \in HS \) by computing \(\alpha '_k, \beta '_k\) such that \(g^{\bar{x}_{\tau ,k}} = \dot{X}^{\alpha '_k} g^{\beta '_k}\). First, \(\mathcal{B}_1\) computes the t Lagrange polynomials \(\{L_k'(Z), \{L_{j}'(Z)\}_{\textsf{id}_j \in CS }\}\) relating to the set \(\{\textsf{id}_k\} \cup CS \). Then, implicitly,

    $$ f_\tau (0) = \dot{x}= \bar{x}_{\tau ,k} \cdot L'_k(0) + \sum _{\textsf{id}_j \in CS } \bar{x}_{\tau ,j} \cdot L_j'(0) $$

    Solving for \(\bar{x}_{\tau ,k}\), \(\mathcal{B}_1\) sets \(\alpha '_k = L_k'(0)^{-1}\) and \(\beta '_k = - \alpha '_k \sum _{\textsf{id}_j \in CS } \bar{x}_{\tau ,j} \cdot L_j'(0)\).
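Equation 1 and the simulation around it can be checked numerically. The toy sketch below (Python, order-11 subgroup of \(\mathbb {Z}_{23}^*\), \(t = 2\) with a single corrupt party; all concrete values are illustrative) builds the simulated commitment \(A_{\tau ,1}\) from \(\dot{X}\) and the corrupt party's sampled share alone, and confirms that the corrupt party's share validation (Step 4 in Fig. 10) passes:

```python
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2 (illustrative only).
q, p, g = 23, 11, 2
x_dot = 5                        # dlog of the challenge key; used here only to set up X_dot
X_dot = pow(g, x_dot, q)
id_j, xbar_j = 3, 9              # corrupt party's identity and its sampled share

# Lagrange polynomials for the point set {0, id_j} (t = 2):
#   L'_0(Z) = (Z - id_j)/(0 - id_j),  L'_j(Z) = Z / id_j
inv = lambda a: pow(a, -1, p)
L0 = [(-id_j * inv(-id_j)) % p, inv(-id_j)]   # coefficients of L'_0
Lj = [0, inv(id_j)]                           # coefficients of L'_j

# Commitments via interpolation in the exponent (Eq. 1), using only X_dot and xbar_j:
A0 = X_dot
A1 = (pow(X_dot, L0[1], q) * pow(g, (xbar_j * Lj[1]) % p, q)) % q

# Share validation performed by the corrupt party: g^{xbar_j} = A0 * A1^{id_j}.
assert pow(g, xbar_j, q) == (A0 * pow(A1, id_j, q)) % q
```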

All Other Honest Parties. For all other honest parties \(\textsf{id}_k \in HS , k \ne \tau \), \(\mathcal{B}_1\) follows the protocol. \(\mathcal{B}_1\) samples \(a_{k,i} \in \mathbb {Z}_p\) and sets \(A_{k,i} \leftarrow g^{a_{k,i}}\) for all \(i \in [0..t-1]\). \(\mathcal{B}_1\) provides a proof of possession \((\bar{R}_k, \bar{z}_k)\) of the public key \(X_{k,0} = A_{k,0}\) and computes the private shares \(\bar{x}_{k,i} = f_k(\textsf{id}_i)\).

Adversarial Contributions. When \(\mathcal{A}\) returns a contribution

$$\begin{aligned} (\bar{R}_j, \bar{z}_j), \vec {C}_j = (A_{j,0} = X_{j,0}, A_{j,1}, ..., A_{j,t-1}) \end{aligned}$$

if \((X_{j,0}, \bar{R}_j, \bar{z}_j)\) verifies (i.e., \(g^{\bar{z}_j} = \bar{R}_j X_{j,0}^{\tilde{\textsf{h}}_0(X_{j,0}, X_{j,0}, \bar{R}_j)}\)) and \(X_{j,0} \ne \dot{X}\), then \(\mathcal{B}_1\) queries \(\textsc {Chal}( X_{j,0}, \bar{R}_j, \bar{z}_j )\) from the \(\textrm{SKE}\) game.

Complaints. If \(\mathcal{A}\) broadcasts a complaint against an honest party \(\textsf{id}_k\), \(\mathcal{B}_1\) reveals the relevant share \(\bar{x}_{k,j}\). If \(\mathcal{A}\) does not send a verifying share \(\bar{x}_{j,k}\) to party \(\textsf{id}_k \in HS \), then \(\mathcal{B}_1\) broadcasts a complaint. If the revealed \(\bar{x}_{j,k}\) fails to satisfy the verification equation, or should \(\mathcal{A}\) not broadcast a share at all, then \(\textsf{id}_j\) is disqualified.

DKG Termination. When \(\textsf{PedPoP}\) terminates, the output is the joint public key \(\tilde{X}= \prod _{i=1}^{\textsf{ns}} X_{i,0}\). \(\mathcal{B}_1\) simulates private shares \(\bar{x}_k\) for honest parties \(\textsf{id}_k \in HS \) by computing \(\alpha _k, \beta _k\) such that \(\tilde{X}_k = g^{\bar{x}_k} = \dot{X}^{\alpha _k} g^{\beta _k}\). Implicitly, \(\bar{x}_k = \bar{x}_{\tau ,k} + \sum _{i=1,i \ne \tau }^{\textsf{ns}} \bar{x}_{i,k}\) and \(\bar{x}_{\tau ,k} = \dot{x} \cdot \alpha '_k + \beta '_k\) from Step 3 above, so \(\alpha _k = \alpha '_k\) and \(\beta _k = \beta '_k + \sum _{i=1,i \ne \tau }^{\textsf{ns}} \bar{x}_{i,k}\). \(\mathcal{B}_1\) returns \( \{a_{k,0}\}_{\textsf{id}_k \in HS , k \ne \tau }, \{(\alpha _k, \beta _k)\}_{\textsf{id}_k \in HS }\).

We now argue that: (1) \(\mathcal{A}\) cannot distinguish between a real run of the DKG protocol and its interaction with \(\mathcal{B}_1\); and (2) \(\textsf{Ext}(\mathbb {G}, \omega , Q_{\textsc {FSignO}}, \mathsf {Q_{\tilde{\textsf{h}}_0}})\) outputs \(a_{j,0}\) such that \(X_{j,0} = g^{a_{j,0}}\) whenever \(\mathcal{B}_1\) queries \(\textsc {Chal}(X_{j,0}, \bar{R}_j, \bar{z}_j )\).

(1) See that \(\mathcal{B}_1\)’s simulation of \(\textsf{PedPoP}\) is perfect: the validation of each player’s share (Step 4 in Fig. 10) succeeds, and, by Eq. 1, interpolation in the exponent correctly evaluates to the challenge \(\dot{X}\).

(2) See that \(\textsf{h}_0( X_{j,0}, X_{j,0}, \bar{R}_j ) = \tilde{\textsf{h}}_0( X_{j,0}, X_{j,0}, \bar{R}_j )\) unless \((X_{j,0}, \bar{R}_j) = (\dot{X}, \bar{R}_\tau )\). The latter happens only if \(X_{j,0} = X_{\tau ,0}\), but in this case \(\textsf{PedPoP}\) will not terminate. We thus have that \((X_{j,0}, \bar{R}_j, \bar{z}_j)\) is a verifying signature under \(\tilde{\textsf{h}}_0\) and either \(\textsf{Ext}\) succeeds or \(\mathcal{B}_1\) breaks the \(\textrm{SKE}\) assumption. Therefore, the probability that \(\textsf{Ext}\) fails to output \(a_{j,0}\) is bounded by \(\mathcal{B}_1\)’s advantage in the \(\textrm{SKE}\) game.

\(\underline{\mathbf {The\ Reduction}\ \mathcal{B}_2{:}}\) We next define the reduction \(\mathcal{B}_2\) against \(\textrm{Bischnorr}\). We will show that when \(\textsf{PedPoP}\) outputs the joint public key \(\tilde{X}\), \(\mathcal{B}_2\) returns y such that \(\tilde{X}= \dot{X}g^{y}\). Together with the \((\alpha _k, \beta _k)\) returned by \(\mathcal{B}_1\) such that \(\tilde{X}_{k} = \dot{X}^{\alpha _k} g^{\beta _k}\), this representation allows \(\mathcal{B}_2\) to simulate \(\textsf{FROST}2\) signing under each \(\tilde{X}_k\). \(\mathcal{B}_2\) is responsible for simulating honest parties during signing and queries to \(\textsf{h}_0\), \(\textsf{h}_1\), and \(\textsf{h}_2\). \(\mathcal{B}_2\) receives as input a group \(\mathbb {G}\) and a challenge public key \(\dot{X}\). It can query \(\textsc {RO}\), \(\textsc {BinonceO}\), \(\textsc {BisignO}\) from the \(\textrm{Bischnorr}\) game.

Initialization. \(\mathcal{B}_2\) may program \(\textsf{h}_0, \textsf{h}_1\), and \(\textsf{h}_2\), but not \(\hat{\textsf{h}}_1\) or \(\hat{\textsf{h}}_2\) (because they are part of \(\mathcal{B}_2\)’s challenge). Let \(Q_{\textsc {PPO}}\) be the set of \(\textsc {PPO}\) queries and responses in the pre-processing round, and let \(Q_{\textsc {PSignO}}\) be the set of \(\textsc {PSignO}\) queries and responses in the signing round.

DKG Extraction. \(\mathcal{B}_2\) first simulates a Schnorr proof of possession of \(\dot{X}\) as follows. \(\mathcal{B}_2\) samples \(\bar{z}_\tau , \bar{c}_\tau \in \mathbb {Z}_p\), computes \(\bar{R}_\tau \leftarrow g^{\bar{z}_\tau } \dot{X}^{-\bar{c}_\tau }\), and appends \((\dot{X}, \dot{X}, \bar{R}_\tau , \bar{c}_\tau )\) to \(\mathsf {Q_{\textsf{h}_0}}\). Then, \(\mathcal{B}_2\) runs

$$\begin{aligned} \{a_{k,0}\}_{\textsf{id}_k \in HS , k \ne \tau }, \{(\alpha _k, \beta _k)\}_{\textsf{id}_k \in HS } \leftarrow \mathcal{B}_1(\mathbb {G}; \omega ) \end{aligned}$$

on coins \(\omega \). \(\mathcal{B}_2\) handles \(\mathcal{B}_1\)’s queries as follows. When \(\mathcal{B}_1\) queries \(\tilde{\textsf{h}}_0\) on \((X, X, \bar{R})\), \(\mathcal{B}_2\) checks whether \((X, X, \bar{R}, \bar{c}) \in \mathsf {Q_{\tilde{\textsf{h}}_0}}\) and, if so, returns \(\bar{c}\). Else, \(\mathcal{B}_2\) queries \(\bar{c}\leftarrow \hat{\textsf{h}}_0(X, X, \bar{R})\), appends \((X, X, \bar{R}, \bar{c})\) to \(\mathsf {Q_{\tilde{\textsf{h}}_0}}\), and returns \(\bar{c}\). When \(\mathcal{B}_1\) queries \(\tilde{\textsf{h}}_1, \tilde{\textsf{h}}_2\), \(\mathcal{B}_2\) handles them the same way it handles \(\mathcal{A}\)’s \(\textsf{h}_1, \textsf{h}_2\) queries, described below. The first time \(\mathcal{B}_1\) queries its \(\textsc {FSignO}\) oracle, \(\mathcal{B}_2\) returns \((\dot{X}, \bar{R}_\tau , \bar{z}_\tau )\). When \(\mathcal{B}_1\) queries \(\textsc {Chal}(X_{j,0}, \bar{R}_j, \bar{z}_j)\), \(\mathcal{B}_2\) runs \(a_{j,0} \leftarrow \textsf{Ext}(\mathbb {G}, \omega , Q_{\textsc {FSignO}}, \mathsf {Q_{\tilde{\textsf{h}}_0}})\) to obtain \(a_{j,0}\) such that \(X_{j,0} = g^{a_{j,0}}\), aborting if \(\textsf{Ext}\) fails. \(\mathcal{B}_2\) then sets \(y = \sum _{i = 1, i \ne \tau }^{\textsf{ns}} a_{i,0}\), so that \(\tilde{X}= \dot{X}g^y\).
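The simulated proof of possession at the start of this step is a standard honest-verifier zero-knowledge simulation: sample the response and challenge first, then solve for the commitment. A toy numeric sketch (order-11 subgroup of \(\mathbb {Z}_{23}^*\); all concrete values are illustrative):

```python
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2 (illustrative only).
q, p, g = 23, 11, 2
x_dot = 5              # unknown to B2 in the reduction; used here only to set up X_dot
X_dot = pow(g, x_dot, q)

# Sample z_bar and c_bar, then solve for R_bar = g^z_bar * X_dot^{-c_bar}.
z_bar, c_bar = 7, 4
R_bar = (pow(g, z_bar, q) * pow(X_dot, (-c_bar) % p, q)) % q

# After programming h_0(X_dot, X_dot, R_bar) := c_bar, the transcript verifies:
assert pow(g, z_bar, q) == (R_bar * pow(X_dot, c_bar, q)) % q
```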

\(\underline{\mathcal{A}{\textbf {'s}}\ {\textbf {Hash}}\ {\textbf {Queries.}}}\) \(\mathcal{B}_2\) handles \(\mathcal{A}\)’s hash queries throughout the signing protocol as follows.

\(\underline{\textsf{h}_0{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_0\) on \((X, X, \bar{R})\), \(\mathcal{B}_2\) checks whether \((X, X, \bar{R}, \bar{c}) \in \mathsf {Q_{\textsf{h}_0}}\) and, if so, returns \(\bar{c}\). Else, \(\mathcal{B}_2\) queries \(\bar{c}\leftarrow \hat{\textsf{h}}_0(X, X, \bar{R})\), appends \((X, X, \bar{R}, \bar{c})\) to \(\mathsf {Q_{\textsf{h}_0}}\), and returns \(\bar{c}\). Note that \(\mathcal{B}_1\) and \(\mathcal{B}_2\) share the state of \(\mathsf {Q_{\textsf{h}_0}}\).

\(\underline{\textsf{h}_1{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_1\) on \((X, lr ) = (X, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\), \(\mathcal{B}_2\) checks whether \((X, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS }, \hat{M}, \hat{d}) \in \mathsf {Q_{\textsf{h}_1}}\) and, if so, returns \(\hat{d}\). Else, \(\mathcal{B}_2\) checks whether there exists some \(k' \in SS \) such that \((\textsf{id}_{k'}, R_{k'}, S_{k'}) \in Q_{\textsc {PPO}}\). If not, \(\mathcal{B}_2\) samples a random message \(\hat{M}\) and a random value \(\hat{d}\), appends \((X, M, \{(\textsf{id}_{i}, R_{i}, S_{i})\}_{i \in SS }, \hat{M}, \hat{d})\) to \(\mathsf {Q_{\textsf{h}_1}}\), and returns \(\hat{d}\).

If there does exist some \(k' \in SS \) such that \((\textsf{id}_{k'}, R_{k'}, S_{k'}) \in Q_{\textsc {PPO}}\), \(\mathcal{B}_2\) computes the Lagrange coefficients \(\{\lambda _i\}_{i \in SS }\), where \(\lambda _i = L_i(0)\) and \(\{L_i(Z)\}_{i \in SS }\) are the Lagrange polynomials relating to the set \(\{\textsf{id}_i\}_{i \in SS }\). \(\mathcal{B}_2\) sets \(\gamma _k = \lambda _k \cdot \alpha _k\) for all \(\textsf{id}_k \in HS \) and \(\gamma _j = \lambda _j\) for all \(\textsf{id}_j \in CS \) in the set \( SS \). \(\mathcal{B}_2\) then samples a random message \(\hat{M}\) (to prevent trivial collisions), queries \(\hat{d} \leftarrow \hat{\textsf{h}}_1(\dot{X}, \hat{M}, \{(\gamma _i, R_i, S_i)\}_{i \in SS })\), and appends \((X, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS }, \hat{M}, \hat{d})\) to \(\mathsf {Q_{\textsf{h}_1}}\). \(\mathcal{B}_2\) computes \(\hat{R} = \prod _{i \in SS } R_i S_i^{\hat{d}}\) and checks if there exists a record \((X, M, \hat{R}, \hat{M}, \hat{c}) \in \mathsf {Q_{\textsf{h}_2}}\). If so, \(\mathcal{B}_2\) aborts. Else, \(\mathcal{B}_2\) queries \(\hat{c} \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}, \hat{R} )\) and appends \((X, M, \hat{R}, \hat{M}, \hat{c})\) to \(\mathsf {Q_{\textsf{h}_2}}\). Finally, \(\mathcal{B}_2\) returns \(\hat{d}\).

\(\underline{\textsf{h}_2{:}}\) When \(\mathcal{A}\) queries \(\textsf{h}_2\) on \((X, M, R)\), \(\mathcal{B}_2\) checks whether \((X, M, R, \hat{M}, \hat{c})\) \(\in \mathsf {Q_{\textsf{h}_2}}\) and, if so, returns \(\hat{c}\). Else, \(\mathcal{B}_2\) samples a random message \(\hat{M}\), queries \(\hat{c} \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}, R)\), appends \((X, M, R, \hat{M}, \hat{c})\) to \(\mathsf {Q_{\textsf{h}_2}}\), and returns \(\hat{c}\).

\(\underline{{\textbf {Simulating}}\ \textsf{FROST}2\ {\textbf {Signing.}}}\) After \(\mathcal{B}_1\) completes the simulation of \(\textsf{PedPoP}\), \(\mathcal{B}_2\) then simulates honest parties in the \(\textsf{FROST}2\) signing protocol.

Pre-processing Round. When \(\mathcal{A}\) queries \(\textsc {PPO}\) on \(\textsf{id}_k \in HS \), \(\mathcal{B}_2\) queries \(\textsc {BinonceO}\) to get \((R_k, S_k)\), appends \((\textsf{id}_k, R_k, S_k)\) to \(Q_{\textsc {PPO}}\), and returns \((R_k, S_k)\).

Signing Round. When \(\mathcal{A}\) queries \(\textsc {PSignO}\) on \((k', lr ) = (k', M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\), \(\mathcal{B}_2\) first checks whether \((\textsf{id}_{k'}, R_{k'}, S_{k'}) \in Q_{\textsc {PPO}}\) and, if not, returns \(\bot \). Then, \(\mathcal{B}_2\) checks whether \((R_{k'}, S_{k'}) \in Q_{\textsc {PSignO}}\) and, if so, returns \(\bot \).

If all checks pass, \(\mathcal{B}_2\) internally queries \(\textsf{h}_1\) on \((\tilde{X}, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\) to get \(\hat{d}'\) and looks up \(\hat{M}'\) such that \((\tilde{X}, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS }, \hat{M}', \hat{d}') \in \mathsf {Q_{\textsf{h}_1}}\). \(\mathcal{B}_2\) computes \(\hat{R}' = \prod _{i \in SS } R_i S_i^{\hat{d}'}\) and internally queries \(\textsf{h}_2\) on \((\tilde{X}, M, \hat{R}')\) to get \(\hat{c}'\).

Next, \(\mathcal{B}_2\) computes the Lagrange coefficients \(\{\lambda _i\}_{i \in SS }\), where \(\lambda _i = L_i(0)\) and \(\{L_i(Z)\}_{i \in SS }\) are the Lagrange polynomials relating to the set \(\{\textsf{id}_i\}_{i \in SS }\). \(\mathcal{B}_2\) sets \(\gamma _k = \lambda _k \cdot \alpha _k\) for all \(\textsf{id}_k \in HS \) and \(\gamma _j = \lambda _j\) for all \(\textsf{id}_j \in CS \) in the set \( SS \). Then, \(\mathcal{B}_2\) queries \(\textsc {BisignO}\) on \((k', \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })\) to get \(z_{k'}\). Finally, \(\mathcal{B}_2\) computes

$$\begin{aligned} \tilde{z}_{k'} = z_{k'} + \hat{c}' \cdot \lambda _{k'} \cdot \beta _{k'} \end{aligned}$$
(2)

For \(\mathcal{A}\)’s query to \(\textsc {PSignO}\), \(\mathcal{B}_2\) returns \(\tilde{z}_{k'}\).

Output. When \(\mathcal{A}\) returns \((\tilde{X}, M^*, sig ^*)\) such that \( sig ^* = (\tilde{R}^*, z^*)\) and \(\textsf{Vf}(\tilde{X}, M^*, sig ^*) = 1\), \(\mathcal{B}_2\) computes its output as follows. \(\mathcal{B}_2\) looks up \(\hat{M}^*\) such that \((\tilde{X}, M^*, \tilde{R}^*, \hat{M}^*, \hat{c}^*) \in \mathsf {Q_{\textsf{h}_2}}\) and outputs \((\hat{M}^*, \tilde{R}^*, z^* - \hat{c}^* \cdot y)\).

To complete the proof, we must argue that: (1) \(\mathcal{B}_2\) only aborts with negligible probability; (2) \(\mathcal{A}\) cannot distinguish between a real run of the protocol and its interaction with \(\mathcal{B}_2\); and (3) whenever \(\mathcal{A}\) succeeds, \(\mathcal{B}_2\) succeeds.

(1) \(\mathcal{B}_2\) aborts if \(\textsf{Ext}\) fails to return \(a_{j,0}\) such that \(X_{j,0} = g^{a_{j,0}}\) for some j. This happens with probability at most \(\mathcal{B}_1\)’s advantage in the \(\textrm{SKE}\) game.

\(\mathcal{B}_2\) aborts if \(\mathcal{A}\) queries \(\textsf{h}_2\) on \((\tilde{X}, M, \prod _{i \in SS } R_i S_i^{\hat{d}})\) before having first queried \(\textsf{h}_1\) on \((\tilde{X}, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })\). This requires \(\mathcal{A}\) to have guessed \(\hat{d}\) ahead of time, which occurs with negligible probability at most \(q_H/p\), where \(q_H\) is the number of hash queries made by \(\mathcal{A}\).

(2) As long as \(\mathcal{B}_2\) does not abort, \(\mathcal{B}_2\) is able to simulate the appropriate responses to \(\mathcal{A}\)’s oracle queries so that \(\mathcal{A}\) cannot distinguish between a real run of the protocol and its interaction with \(\mathcal{B}_2\).

Indeed, \(\mathcal{B}_1\)’s simulation of \(\textsf{PedPoP}\) is perfect.

When \(\mathcal{A}\) queries \(\textsf{h}_2\) on \((X, M, R)\), \(\mathcal{B}_2\) queries \(\hat{c} \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}, R)\) on a random message \(\hat{M}\). The random message prevents trivial collisions; for example, if \(\mathcal{A}\) were to query \(\textsf{h}_2\) on \((X, M, R)\) and \((X', M, R)\), where \(X' \ne X\), \(\mathcal{A}\) would receive the same value \(c \leftarrow \hat{\textsf{h}}_2(\dot{X}, M, R)\) for both and would know it was operating inside a reduction. Random messages ensure that the outputs are random, so \(\mathcal{A}\)’s view is correct. \(\mathcal{B}_2\) also ensures that \(\mathcal{A}\) receives \(\textsf{h}_1\) values that are consistent with \(\textsf{h}_2\) queries.

After the signing rounds have been completed, \(\mathcal{A}\) may verify the signature share \(\tilde{z}_{k'}\) on \(M\) as follows. \(\mathcal{A}\) checks if

$$\begin{aligned} g^{\tilde{z}_{k'}} = R_{k'} S_{k'}^{\textsf{h}_1(\tilde{X}, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })} \tilde{X}_{k'}^{\lambda _{k'} \textsf{h}_2(\tilde{X}, M, \prod _{i \in SS } R_i S_i^{\textsf{h}_1(\tilde{X}, M, \{(\textsf{id}_i, R_i, S_i)\}_{i \in SS })})} \end{aligned}$$
(3)

When \(\mathcal{B}_2\) queried \(\textsc {BisignO}\) on \((k', \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })\) in the Signing Round, the signature share \(z_{k'}\) was computed such that

$$\begin{aligned} g^{z_{k'}} = R_{k'} S_{k'}^{\hat{\textsf{h}}_1(\dot{X}, \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })} \dot{X}^{\gamma _{k'} \hat{\textsf{h}}_2(\dot{X}, \hat{M}', \prod _{i \in SS } R_i S_i^{\hat{\textsf{h}}_1(\dot{X}, \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })})} \end{aligned}$$

\(\mathcal{B}_2\) computed the signature share \(\tilde{z}_{k'}\) (Eq. 2) as

$$\begin{aligned} \tilde{z}_{k'} = z_{k'} + \hat{c}' \cdot \lambda _{k'} \cdot \beta _{k'} &= r_{k'} + d\cdot s_{k'} + \hat{c}' \cdot \gamma _{k'} \cdot \dot{x} + \hat{c}' \cdot \lambda _{k'} \cdot \beta _{k'} \\ &= r_{k'} + d\cdot s_{k'} + \hat{c}' \cdot \lambda _{k'}(\alpha _{k'} \cdot \dot{x} + \beta _{k'}) \end{aligned}$$

where \(\hat{c}' = \hat{\textsf{h}}_2(\dot{X}, \hat{M}', \prod _{i \in SS } R_i S_i^{\hat{\textsf{h}}_1(\dot{X}, \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })})\). Thus, \(\tilde{z}_{k'}\) satisfies

$$\begin{aligned} g^{\tilde{z}_{k'}} = R_{k'} S_{k'}^{\hat{\textsf{h}}_1(\dot{X}, \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })} \tilde{X}_{k'}^{\lambda _{k'} \hat{\textsf{h}}_2(\dot{X}, \hat{M}', \prod _{i \in SS } R_i S_i^{\hat{\textsf{h}}_1(\dot{X}, \hat{M}', \{(\gamma _i, R_i, S_i)\}_{i \in SS })})} \end{aligned}$$
(4)

\(\mathcal{B}_2\) has programmed the hash values in Eqs. 3 and 4 to be equal and therefore simulates \(\tilde{z}_{k'}\) correctly.
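The chain of equalities above can be reproduced numerically. In the toy Python sketch below (order-11 subgroup of \(\mathbb {Z}_{23}^*\); the constants standing in for the hash outputs \(\hat{d}'\) and \(\hat{c}'\), and all other values, are illustrative), the shifted response of Eq. 2 verifies against \(\tilde{X}_{k'}^{\lambda _{k'}}\) exactly as in Eqs. 3 and 4:

```python
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2 (illustrative only).
q, p, g = 23, 11, 2
x_dot = 5                          # discrete log of the challenge key (illustrative)
X_dot = pow(g, x_dot, q)

# B1's representation of the honest party's key share: X_k = X_dot^alpha * g^beta.
alpha, beta = 3, 6
X_k = (pow(X_dot, alpha, q) * pow(g, beta, q)) % q

lam = 2                            # Lagrange coefficient lambda_{k'}
gamma = (lam * alpha) % p          # scalar handed to BisignO
r, s, d, c = 4, 9, 7, 10           # nonces and stand-ins for the hash outputs d', c'
R, S = pow(g, r, q), pow(g, s, q)

z = (r + d * s + c * gamma * x_dot) % p      # BisignO's response
z_tilde = (z + c * lam * beta) % p           # the shift of Eq. 2

# Eqs. 3 and 4: the simulated share verifies against X_k^{lambda * c}.
assert pow(g, z_tilde, q) == (R * pow(S, d, q) * pow(X_k, (lam * c) % p, q)) % q
```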

(3) \(\mathcal{A}\)’s forgery satisfies \(\textsf{Vf}(\tilde{X}, M^*, sig ^*) = 1\), which implies:

$$\begin{aligned} g^{z^*} &= \tilde{R}^* (\tilde{X})^{\textsf{h}_2(\tilde{X}, M^*, \tilde{R}^*)} = \tilde{R}^* (\dot{X}g^y)^{\textsf{h}_2(\tilde{X}, M^*, \tilde{R}^*)} \\ g^{z^*- \textsf{h}_2(\tilde{X}, M^*, \tilde{R}^*) \cdot y} &= \tilde{R}^* \dot{X}^{\textsf{h}_2(\tilde{X}, M^*, \tilde{R}^*)} \end{aligned}$$

At some point, \(\mathcal{A}\) queried \(\textsf{h}_2\) on \((\tilde{X}, M^*, \tilde{R}^*)\) and received one of two values: (1) \(\hat{c}^* \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}^*, \prod _{i \in SS ^*} R_i^* (S_i^*)^{\hat{d}^*})\) related to a query \(\mathcal{A}\) made to \(\textsf{h}_1\) on \((\tilde{X}, M^*, \{(\textsf{id}_i^*, R_i^*, S_i^*)\}_{i \in SS ^*})\), where it received \(\hat{d}^* \leftarrow \hat{\textsf{h}}_1(\dot{X}, \hat{M}^*, \{(\gamma _i^*, R_i^*, S_i^*)\}_{i \in SS ^*})\), or (2) \(\hat{c}^* \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}^*, \tilde{R}^*)\) without having queried \(\textsf{h}_1\) first. In either case, \(\mathcal{B}_2\) has a record \((\tilde{X}, M^*, \tilde{R}^*, \hat{M}^*, \hat{c}^*) \in \mathsf {Q_{\textsf{h}_2}}\) such that \(\hat{c}^* \leftarrow \hat{\textsf{h}}_2(\dot{X}, \hat{M}^*, \tilde{R}^*)\). (Note that \(\mathcal{B}_2\) can check which case occurred by looking for \(\hat{M}^*\) in its \(\mathsf {Q_{\textsf{h}_1}}\) records). Thus, \(\mathcal{A}\)’s forgery satisfies

$$\begin{aligned} g^{z^*- \hat{\textsf{h}}_2(\dot{X}, \hat{M}^*, \tilde{R}^*) \cdot y} = \tilde{R}^* \dot{X}^{\hat{\textsf{h}}_2(\dot{X}, \hat{M}^*, \tilde{R}^*)} \end{aligned}$$

and \(\mathcal{B}_2\)’s output \((\hat{M}^*, \tilde{R}^*, z^* - \hat{c}^* \cdot y)\) under \(\dot{X}\) is correct.
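The final shift from \(\tilde{X}\) to \(\dot{X}\) can be sanity-checked numerically. In the toy Python sketch below (order-11 subgroup of \(\mathbb {Z}_{23}^*\); the hash output is replaced by a fixed stand-in and all values are illustrative), any signature valid under \(\tilde{X}= \dot{X}g^y\) yields one valid under \(\dot{X}\) after subtracting \(c^* \cdot y\) from the response:

```python
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2 (illustrative only).
q, p, g = 23, 11, 2
x_dot, y = 5, 8                    # dlog of the challenge key, and the known offset y
X_dot = pow(g, x_dot, q)
X_tilde = (X_dot * pow(g, y, q)) % q     # joint key X_tilde = X_dot * g^y

# A forgery (R_star, z_star) valid under X_tilde with challenge c_star:
r_star, c_star = 6, 9
R_star = pow(g, r_star, q)
z_star = (r_star + c_star * (x_dot + y)) % p
assert pow(g, z_star, q) == (R_star * pow(X_tilde, c_star, q)) % q

# B2's shifted output z_star - c_star*y is valid under the challenge key X_dot:
z_shift = (z_star - c_star * y) % p
assert pow(g, z_shift, q) == (R_star * pow(X_dot, c_star, q)) % q
```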