
1 Introduction

Direct Anonymous Attestation (DAA) [7] is a group-type anonymous signature scheme, which allows users in a group to sign messages such that the signatures can be verified using a group public key, while the actual signers' identities are not revealed (beyond the fact that they belong to the group). Unlike group signatures [21], DAA signatures are not traceable: there is no group tracer who can find out which signer created a given signature. However, DAA has two properties that aim to stop a malicious signer from abusing anonymity: rogue key-based revocation and user-controlled linkability. These two properties were designed for using DAA in a remote attestation service that allows a Trusted Platform Module (TPM) to serve as a root of trust for attesting to the host platform in which it is embedded. The first property guarantees that a TPM whose key has been revealed will not be allowed to make any attestation reports. The second property allows a user to include a basename in the signature. If the same basename is used for two signatures then they can be linked, even though the anonymity of the signer is maintained. This property allows a verifier to build a revocation list based on a link token, which is a deterministic function of the TPM's key and a basename.

When using a TPM in a platform’s attestation service, the group signer’s role is split into two with a principal signer (the TPM) and an assistant signer (the host). They jointly create attestation reports on the state of the platform. These reports include information on the boot sequence and the software running in the host. These attestation reports convince a remote verifier that the computer platform it is communicating with is running on top of the trusted computing technology and using the correct software and hardware. Using DAA allows such attestations to be made in a privacy-preserving manner. That is, the verifier can check that an attestation report originates from a legitimate TPM, but it does not learn the identity of the particular TPM that generated the DAA signature.

The first RSA-based DAA scheme was standardised as part of the Trusted Computing Group's TPM 1.2 specification [53] published in 2004. The TPM specification was updated in 2014, and this newer TPM 2.0 specification [54] supports elliptic curve based DAA (EC-DAA) and an Intel variant called Enhanced Privacy ID (EPID) [11]. All of these versions of DAA (RSA-DAA, EC-DAA and EPID) have also been standardised by ISO/IEC as standard ISO/IEC 20008-2 [42]. Since the first proposal of DAA, many extensions and improvements to its security and efficiency have been proposed [8,9,10, 13, 15, 17, 18, 23, 26,27,28,29,30, 38, 59]. Researchers have also studied the security models and proofs of DAA, e.g. [16, 27, 56, 58].

As reported by the Trusted Computing Group (TCG), which is the industry standards body that develops the TPM specifications, more than a billion devices include TPM technology; in particular almost all enterprise PCs, many servers and embedded systems make use of the TPM as trusted hardware anchors.

Authentication and attestation are important mechanisms used to protect computer systems, and with increasing attention and awareness being given to privacy concerns, practical interest in DAA is growing. An anonymous attestation service is particularly important in automotive applications such as vehicle-to-vehicle communication, where the tracking of drivers should be prevented but the authenticity of the communication must also be guaranteed [39, 57]. A DAA protocol has also been integrated into the Fast IDentity Online (FIDO) authentication framework [14]. Another DAA-based application is a privacy-enhancing cloud service architecture to protect users' data, using DAA to let users control the extent of data sharing among their service accounts [55].

DAA schemes that are currently supported by the TPM are based on either the factorization problem (for RSA-DAA) or the discrete logarithm problem (for EC-DAA and EPID). Since the factorization and discrete logarithm problems are known to be vulnerable to quantum computer attacks, none of the standardised DAA schemes is post-quantum secure, i.e. an adversary with a powerful quantum computer could break the TPM's security and privacy. There is therefore a need to update the standard DAA schemes to be quantum resistant. Many proposed post-quantum cryptographic primitives are built on top of code-, hash-, lattice-, isogeny- and multivariate-based problems, and could possibly be used as the basis for the development of post-quantum DAA schemes. Recently, El Bansarkhani et al. [1], El Kassem et al. [34, 35, 44], Chen et al. [24], and Chen et al. [22] proposed several post-quantum DAA schemes from lattice assumptions. Due to their expensive storage and computational costs, research in lattice-based DAA is still ongoing.

Among all post-quantum approaches, the symmetric-key approach is considered the most conservative. The security of symmetric primitives is the best understood and the easiest to evaluate, hence it serves as a safety net if the security of other approaches is endangered by newly discovered threats. Symmetric primitives have been used to build several variants of anonymous signature schemes, such as group signatures [12, 33, 45, 52, 60, 61], ring signatures [36, 45] and EPID [5]. However, due to the use of a single Merkle tree for membership credentials in a group, these group signature and EPID schemes can only handle a small group size, which is not suitable for TPM use.

Our Contribution. In this paper, we propose the first DAA scheme from symmetric primitives, which meets all the requirements on DAA, particularly:

  • Signer splitting: To allow the DAA signer role to be split between a TPM and its host, we introduce a novel approach to splitting an MPC-in-the-Head scheme into two portions. The TPM keeps the key material secure and performs a small part of the work. Most of the work necessary is done by the host. The TPM and the host’s contributions work together seamlessly to form a DAA signature.

  • Support a large group size: our DAA scheme can support a large group size (up to \(2^{60}\)). To achieve this, we make use of a slightly modified SPHINCS+ signature rather than a Merkle signature as a group membership credential.

  • Security proof: the security of the proposed DAA scheme is proved under the Universal Composability (UC) model [16].

The remaining part of this paper is arranged as follows: Sect. 2 describes relevant preliminaries, Sect. 3 presents the proposed DAA construction, Sects. 4 to 7 provide security notions and proofs, and finally Sect. 8 concludes this paper.

2 Preliminaries

2.1 Hash-Based Signatures

Digital signature schemes can be built exclusively from cryptographic hash functions. In a hash-based signature scheme, a private key is composed of a series of randomly generated strings, while the corresponding public key is obtained by applying hash functions to the private key. Early hash-based signature schemes, such as the Lamport scheme [47] and the Winternitz scheme [48], were one-time signatures (OTS), meaning that each key pair can only be used to sign a single message. The Merkle signature scheme [48] is the first hash-based few-time signature (FTS) scheme. It generates several OTS key pairs and aggregates their public keys using a Merkle tree. The root of the tree serves as the overall public key. Every signature uses one OTS private key, and consists of the corresponding OTS signature and the Merkle tree authentication path for the OTS public key. As a result, the verifier can authenticate the signature using only the Merkle tree root. More recent FTS schemes, such as FORS [3], can be more efficient, as they utilize a large set of secret random strings that can be obtained from a pseudorandom function applied to the private key. Signatures are then generated by selecting elements from the set based on the message to be signed. While each signature discloses some secret strings in the set, the set size is large, and the number of signatures can be controlled to make it infeasible to forge a signature by mixing and matching secret strings from previously generated signatures.
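
For instance, the Lamport OTS mentioned above can be sketched in a few lines of Python. This is an illustrative toy (SHA-256 and our own function names; none of this is taken from the paper), and reusing a key pair for a second message breaks its security:

```python
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def lamport_keygen(msg_bits: int = 256):
    """Private key: two random strings per message bit; public key: their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(msg_bits)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def lamport_sign(sk, msg: bytes):
    """Reveal, for each bit of H(msg), the secret string matching that bit."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][b] for i, b in enumerate(bits)]

def lamport_verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = lamport_keygen()
sig = lamport_sign(sk, b"attest this platform state")
assert lamport_verify(pk, b"attest this platform state", sig)
```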

All previously discussed multi-time signature schemes are characterized as stateful, as the signer is required to maintain a state containing information such as the number of signed messages and the keys utilized. In comparison, SPHINCS+ [3] is a stateless hash-based signature scheme. It employs a hyper-tree, i.e., a tree of trees, to organize OTS and FTS key pairs. Each SPHINCS+ signature constitutes a chain of signatures, with the initial signature \(\varSigma _0\) being generated from the message, and each subsequent signature \(\varSigma _i\) being a signature of the public key that verifies the preceding signature \(\varSigma _{i-1}\). By using the root public key, the authenticity of the signature chain can be verified. Although SPHINCS+ also has an upper limit on the number of signatures that can be generated per key pair, it can be set to an extremely large value (e.g. \(2^{60}\)), making it highly unlikely to reach this limit in practical scenarios. SPHINCS+ has been chosen as one of the three digital signature schemes by the National Institute of Standards and Technology (NIST) to become a part of its post-quantum cryptographic standard [49].

2.2 MPC-in-the-Head and Picnic-Style Signatures

This is a paradigm for zero-knowledge proofs introduced by Ishai et al. [40]. Roughly speaking, given a public value x, the prover needs to prove knowledge of a witness w such that \(f(w)=x\). To do so, the prover simulates, by itself, an MPC (multi-party computation) protocol between m parties that realizes f, in which w is secret-shared as an input to the parties. After the simulation, the prover commits to the views and internal state of each individual party. Next, the verifier challenges the prover to open a subset of these commitments, checks them and decides whether to accept or not. If the MPC protocol realizes f correctly, then this protocol is complete, meaning a valid statement will always be accepted. The protocol is also zero-knowledge because only the views and internal states of a subset of the parties are available to the verifier, and by the privacy guarantee of the underlying MPC protocol, no information about w is leaked. For soundness, if the prover tries to prove a false statement, then the joint views of some of the parties must be inconsistent, and with some probability the verifier can detect this. The soundness error of a single MPC run can be high, but by repeating this process independently enough times, the soundness error can be made negligible. The interactive ZK proof can be made non-interactive through techniques such as the Fiat-Shamir transformation.
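
As an illustrative calculation (the party count and target security level here are our own example, not parameters used later in this paper): if each MPC run simulates three parties and the verifier opens two of the three views, a cheating prover escapes detection in one run with probability at most \(\epsilon =2/3\), so achieving soundness error \(2^{-128}\) requires

$$\begin{aligned} t\ge \frac{128}{\log _2(3/2)}\approx 218.8,\quad \text {i.e. } t=219 \text { independent repetitions.} \end{aligned}$$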

There are multiple frameworks for constructing MPC-in-the-head ZK proofs, e.g., IKOS [40], ZKBoo [37], ZKB++ [20], KKW [45], Ligero++ [4], Limbo [51], BBQ [50], Banquet [2], BN++ [43], Rainer [31] and AIMer [46]. They follow the same paradigm, but are different in the underlying MPC protocols and have different concrete/asymptotic efficiency. In this paper, to describe our scheme, we do not need to touch the low level details, hence we will use MPC-in-the-head (for Boolean circuits) in an abstract way. We will use the following syntax to describe a ZK proof:

$$\begin{aligned} \pi = \mathcal {P}\{(\texttt {public}\ \texttt {params}); (\texttt {witness}) | \texttt {relation}\ \texttt {to}\ \texttt {be}\ \texttt {proved}\} \end{aligned}$$

For example, to prove the same key sk is used in two different instantiations of a pseudorandom function F with different data inputs, we write:

$$\begin{aligned} \pi = \mathcal {P}\{((C_1,P_1), (C_2,P_2)); (sk) | C_1=F(sk,P_1) \wedge C_2=F(sk,P_2)\} \end{aligned}$$

MPC-in-the-head has been used to construct signature schemes in the symmetric-key setting. As the first such scheme is named Picnic [19, 20, 62], this type of signature is called a Picnic-style signature, in which the secret signing key is k and the public verification key is a pair (c, p), and the key pair satisfies the equation \(c=E(k, p)\) where E is a block cipher, k is a secret key, and p and c are respectively a plaintext and a ciphertext block. Signing a message m essentially amounts to generating a non-interactive MPC-in-the-head proof of knowledge of the private key:

$$ \pi = \mathcal {P}\{(c, p); (k) | c=E(k,p)\}(m) $$

Note that this signature is based on the Fiat-Shamir transformation. The message m is included as a part of the input for the challenge hash in the transformation. Again, to describe our scheme, we do not need to explain the details of the E algorithm, and any secure Picnic-style signature scheme can be used.

2.3 DAA Concept

A DAA scheme involves the following players:

  • An issuer manages the group membership, decides who can be a group member, and issues group membership credentials.

  • Group members create DAA signatures. Each member is formed by two entities: the TPM serves as a principal signer and the host as an assistant signer.

  • Verifiers verify DAA signatures. A verifier also has two other roles: as a linker, to check whether two given signatures using the same basename were created by the same signer; and as a revocation authority, to decide whether a group member should be removed from the group, based on verifier-local revocation.

A DAA scheme consists of the following algorithms/protocols:

  • Init(n): In the initialization algorithm, the issuer takes a security parameter n as the input, and outputs a master (group) key pair \((mpk,\ msk)\). The master public key mpk is made public and the master secret key msk is stored privately by the issuer. In all other algorithms and protocols, we will assume mpk along with the security parameter n as an implicit input for all parties. The issuer also initializes its internal states.

  • Join(msk): the joining protocol is an interactive protocol between the issuer and a user (a TPM and its host) who wants to join the group. The issuer has a private input msk and the user has no input. At the end of the protocol, the issuer outputs a decision: accept or reject. If reject, the protocol stops. If accept, the user obtains its signing key \(gsk_u =(sk_u,\ cred_u)\), where \(sk_u\) is a secret key and \(cred_u\) is a group membership credential. \(sk_u\) is chosen and held by the TPM, and \(cred_u\) is generated by the issuer and given to the host. The issuer also updates its internal state.

  • Sign\((gsk_u,\ msg, \ bsn)\): the signing algorithm allows a TPM and its host to produce a signature \(\varSigma \) on a message \(msg\in \{0,1\}^*\) using its signing key \(gsk_u\). If a basename \(bsn \ne \perp \), \(\varSigma \) will include a link token.

  • Verify\((msg, bsn, \varSigma ,{\textbf {keyRL}}, {\textbf {linkRL}})\): the verification algorithm allows a verifier to verify whether a signature \(\varSigma \) is a valid signature of msg/bsn and whether the signing key has been listed on a rogue key list keyRL or whether a link token in the signature has been listed on a link revocation list linkRL.

  • Link\((msg_1,\varSigma _1,msg_2,\varSigma _2, bsn)\): the linking algorithm allows a verifier to check whether two given DAA signatures \(\varSigma _1\) and \(\varSigma _2\) with the same bsn value were signed using the same \(gsk_u\) or not.

  • Revocation: the revocation algorithm allows a verifier to add a revealed signing key to keyRL and to add a link token from a signature generated by a revoked signer to linkRL. (A Python-style summary of this interface is sketched after the list.)
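
As a reading aid, the algorithm interface just listed can be summarised as the following Python-style stub. All class, method and field names here are ours and purely illustrative, and the bodies are deliberately left empty; this is a sketch of the interface, not an implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Signature:
    """A DAA signature Sigma; the link token is present only when a basename is used."""
    link_token: Optional[bytes]
    proof: bytes

class Issuer:
    def init(self, n: int) -> Tuple[bytes, bytes]:
        """Init(n): output (mpk, msk); msk is kept by the issuer."""
        ...

    def join(self, msk: bytes, user: "Member") -> Optional[bytes]:
        """Interactive Join: return cred_u for an accepted user, or None on reject."""
        ...

class Member:
    """A group member: the TPM (principal signer) together with its host (assistant signer)."""
    def sign(self, gsk_u, msg: bytes, bsn: Optional[bytes]) -> Signature:
        """Sign(gsk_u, msg, bsn): produce Sigma, including a link token when bsn is given."""
        ...

class Verifier:
    def verify(self, msg, bsn, sigma: Signature, keyRL, linkRL) -> bool: ...
    def link(self, msg1, sigma1: Signature, msg2, sigma2: Signature, bsn) -> bool: ...
    def revoke_key(self, sk_u: bytes, keyRL: list) -> None: ...
    def revoke_link(self, bsn: bytes, slt: bytes, linkRL: set) -> None: ...
```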

A DAA scheme needs to satisfy multiple security requirements, including:

  • Correctness covers three aspects: (1) an honest user can successfully join the group, despite the existence of malicious users; (2) a signature generated by an honest and non-revoked group member should always be valid when verified; (3) user-controlled linkability, i.e., two valid signatures with the same bsn value and signed under the same \(gsk_u\) should be linked to each other.

  • Anonymity means that a DAA signature does not reveal the identity of its signer, i.e., an adversary cannot distinguish which one of the two honest signers has signed a targeted message while both signers and the message are at the adversary’s choice. Furthermore, given two signatures, w.r.t. two different basenames, the adversary cannot distinguish whether both signatures were created by one honest signer or two different signers.

  • Non-frameability means that even if the rest of the group, as well as the issuer and the host of an honest TPM, are corrupted, they cannot falsely attribute a signature to a TPM that did not produce it. This property covers three special cases: (1) no adversary can create a signature w.r.t. a basename that links to another signature created by an honest TPM for the same basename; (2) when the issuer and all TPMs are honest, no adversary can provide a signature on a message msg w.r.t. a basename bsn when no TPM signed this (msg, bsn) pair; (3) when the issuer is honest, an adversary can only sign in the name of corrupt TPMs. More precisely, if n TPMs are corrupt, the adversary can create at most n unlinkable signatures for the same basename.

These requirements will be described in detail under the DAA UC model in Sect. 6. Note that the host in a secure DAA scheme is trusted to correctly execute the protocol and to maintain anonymity. This trust requirement is necessary: as the host contributes to a DAA signature, a malicious host could provide incorrect input, or break anonymity by demonstrating the connection between a DAA signature and the corresponding TPM's public key and credential. We assume that the host represents the user, so it is interested in creating valid DAA signatures and maintaining user privacy. However, for non-frameability, there is no requirement for the host to be trusted. Without the TPM, the host can neither receive a DAA credential nor generate a DAA signature. Several types of TPM have been considered in applications: (1) discrete hardware TPM, (2) integrated TPM, (3) firmware TPM, (4) virtual TPM, and (5) software TPM. Although the level of the TPM's tamper resistance decreases from case (1) to case (5), the trust requirements on the host are the same.

3 Construction

3.1 F-SPHINCS+ and M-FORS

To construct a DAA scheme from symmetric primitives, the first design choice is how to build group membership credentials. A credential is essentially a signature on the user's keys generated by the issuer. Because we use only symmetric primitives, the credential can take one of the following forms: (1) a Merkle signature; (2) a SPHINCS+ style signature; (3) a Picnic-style signature. The first option is ruled out because it cannot handle a large group size. The last option is also ruled out for practical reasons: we would have to create a ZKP showing that another ZKP (i.e. the Picnic-style signature) is valid. Unfortunately, the circuit for verifying a Picnic-style signature is too big, which results in prohibitively high computation costs and a large proof size. Therefore, we focus on utilizing a SPHINCS+ style signature as the group credential.

In the above description, we said "SPHINCS+ style" rather than "SPHINCS+". This is because SPHINCS+ is still too heavy when verified in zero knowledge. The main problem comes from the WOTS+ signature scheme. In WOTS+, verification involves verifying k blocks of d-bit strings. When verified in the clear, each block requires at most \(2^d-1\) hash operations, and the exact number of hash operations depends on the content of the block. In a zero-knowledge proof, however, we have to hash each block exactly \(2^d-1\) times and then choose the right hash value in the chain blindly, to ensure the verifier is oblivious to the content of the block. Hence in total, \((2^d-1)\cdot k\) hashes are required to verify a WOTS+ signature. Plugging in concrete parameters, that means 510 hashes at 128-bit security and 990 hashes at 256-bit security. The circuit implementing the hash function typically has around \(10^3\) AND gates. So verifying one WOTS+ signature requires a circuit with over a million AND gates, and in total we need to verify h WOTS+ signatures, where h is at least 7 in SPHINCS+.

Fig. 1. F-SPHINCS+ signatures.

Fig. 2. M-FORS signatures.

To fix the problem, we propose a new variant of SPHINCS+ called F-SPHINCS+. As depicted in Fig. 1, in F-SPHINCS+ we use a hyper-tree that is a tree of M-FORS instances. The M-FORS signature scheme is depicted in Fig. 2. Recall that FORS is a few-time signature scheme such that each key pair can be used to sign up to q signatures. M-FORS, short for Merkle FORS, differs from FORS in that the public key is generated as the root of a Merkle tree. The leaf nodes of this Merkle tree are the root nodes of the Merkle trees that authenticate each block of the hash value being signed. With M-FORS, the hyper-tree in F-SPHINCS+ is a q-ary tree such that the public key in a child node is signed by the signing key in the parent node, and the signing key in a leaf node signs the actual message hash. An F-SPHINCS+ signature then contains a list of \(h+1\) signatures, where h is the height of the hyper-tree. The benefit of M-FORS over XMSS, which is used in the original SPHINCS+ scheme, is the lower verification cost. To verify a message hash that consists of k blocks of d-bit strings, the cost is \(d\cdot k+ k-1\) hash operations. This is much less than the \((2^d-1)\cdot k\) hashes for verifying a WOTS+ signature. On the other hand, the signing time is more than that of WOTS+. However, this is a lesser concern because in our case signing is done in the clear (while verification needs to be done in zero knowledge).
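
To make this concrete (the following parameter values are purely illustrative and not fixed by the scheme): the WOTS+ figures quoted above are consistent with \(d=4\) and \(k=34\) blocks at 128-bit security and \(k=66\) blocks at 256-bit security, since \((2^4-1)\cdot 34=510\) and \((2^4-1)\cdot 66=990\). For the same \((d,k)\), the M-FORS verification cost \(d\cdot k+k-1\) compares as follows:

$$\begin{aligned} d=4,~k=34&: \quad d\cdot k+k-1=169 \quad \text {vs.}\quad (2^d-1)\cdot k=510,\\ d=16,~k=35&: \quad d\cdot k+k-1=594 \quad \text {vs.}\quad (2^d-1)\cdot k=2{,}293{,}725. \end{aligned}$$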

We now describe M-FORS and F-SPHINCS+. M-FORS consists of the algorithms below. For readability, and due to the page limit, we abstract away certain low-level details such as how the Merkle trees are built.

  • \(\textsf {keyGen}(\texttt {seed},n,d,k,\texttt {aux})\): it takes as input a random seed seed, a security parameter n, two positive integers d and k, and aux that is either an empty string or some optional data. If seed is an empty string, an n-bit random string will be chosen and assigned to it. Then a pseudorandom function prf is used to expand \(\texttt {seed}\) into k lists \(({\textbf {x}}^{(0)},\cdots ,{\textbf {x}}^{(k-1)})\), where each \({\textbf {x}}^{(i)}\) contains \(2^d\) distinct n-bit pseudorandom strings. Then \(k+1\) Merkle trees \({\textbf {T}}=(\texttt {mt}_0,\cdots ,\texttt {mt}_{k})\) are built. In particular, each of \(\texttt {mt}_0,\cdots , \texttt {mt}_{k-1}\) has \(2^d\) leaf nodes. The jth leaf node in \(\texttt {mt}_{i}\) is the hash of \({\textbf {x}}_j^{(i)}\). The leaf nodes of \(\texttt {mt}_{k}\) are \(r_0,\cdots ,r_{k-1}\), the roots of \((\texttt {mt}_0,\cdots ,\texttt {mt}_{k-1})\). keyGen outputs \((pk, sk, mp)\), such that the public key \(pk=r_k\) where \(r_k\) is the root of \(\texttt {mt}_{k}\), the private key \(sk=\texttt {seed}\), and the public parameters \(mp=(n,d,k,\texttt {aux})\).

  • \(\textsf {sign}(sk,MD,mp)\): to sign a message hash \(MD \in \{0,1\}^{k\cdot d}\), parse it into k blocks, each of which is interpreted as a d-bit unsigned integer: \((p_0,\cdots ,p_{k-1})\). Then for the i-th block \(p_i\), \({\textbf {x}}^{(i)}\) and \(\texttt {mt}_i\) (obtained by expanding sk) are used to generate \({\texttt {authpath}}^{(i)}\), which is the authentication path of the \(p_i\)-th leaf node in the i-th Merkle tree. Then \(({\textbf {x}}^{(i)}_{p_i}, {\texttt {authpath}}^{(i)})\) is put into the signature. The signature is a list of k pairs \(\boldsymbol{\sigma }=\{({\textbf {x}}^{(0)}_{p_0}, {\texttt {authpath}}^{(0)}),\cdots , ({\textbf {x}}^{(k-1)}_{p_{k-1}}\), \({\texttt {authpath}}^{(k-1)})\}\).

  • \(\textsf {recoverPK}(\sigma ,MD,mp)\): This algorithm outputs the public key recovered from a signature \(\sigma \) and the message hash MD. First MD is parsed into k blocks \((p_0',\cdots ,p_{k-1}')\). Then for \(0\le i\le k-1\), \(\boldsymbol{\sigma }_i=(x_i,{\texttt {authpath}}^{(i)})\) and \(p_i'\) are used to re-generate a Merkle tree root and obtain the value \(r_i'\) (\(p_i'\) is used to determine the order of the siblings at each layer). Finally, \(r_0',\cdots ,r_{k-1}'\) are used to compute \(\texttt {mt}_{k}'\), and its root \(r_{k}'\) is returned. (A toy sketch of sign and recoverPK is given after this list.)

  • \(\textsf {verify}(\sigma ,pk,MD,mp)\): to verify a signature, call \(\textsf {recoverPK}(\sigma ,MD,mp)\). If the recovered public key is the same as pk, accept the signature, otherwise reject.
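
The data flow of \(\textsf {sign}\) and \(\textsf {recoverPK}\) can be illustrated with the toy Python sketch below. This is our own simplification: plain SHA-256 stands in for the tweakable hashes \(H_1\) and \(H_2\), the \(\texttt {aux}\)/tweak inputs are dropped, and the parameters are tiny; it is not the scheme's reference code.

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def prf(seed, label):
    return H(b"prf", seed, label)

def merkle_root(leaves):
    layer = list(leaves)
    while len(layer) > 1:
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def auth_path(leaves, idx):
    """Sibling hashes needed to recompute the root from leaves[idx]."""
    path, layer = [], list(leaves)
    while len(layer) > 1:
        path.append(layer[idx ^ 1])
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def root_from_path(leaf, idx, path):
    node = leaf
    for sib in path:
        node = H(node, sib) if idx % 2 == 0 else H(sib, node)
        idx //= 2
    return node

def keygen(seed, d, k):
    """Expand seed into k lists of 2^d secret strings; pk is the root over the k tree roots."""
    x = [[prf(seed, bytes([i]) + j.to_bytes(4, "big")) for j in range(2 ** d)]
         for i in range(k)]
    roots = [merkle_root([H(s) for s in xi]) for xi in x]   # roots of mt_0 .. mt_{k-1}
    return merkle_root(roots), seed                          # (pk, sk)

def sign(seed, MD_blocks, d, k):
    """MD_blocks: k integers in [0, 2^d); reveal one secret string plus one path per block."""
    x = [[prf(seed, bytes([i]) + j.to_bytes(4, "big")) for j in range(2 ** d)]
         for i in range(k)]
    return [(x[i][p], auth_path([H(s) for s in x[i]], p))
            for i, p in enumerate(MD_blocks)]

def recover_pk(sigma, MD_blocks):
    """Recompute the public key from the signature and the message-hash blocks."""
    roots = [root_from_path(H(xp), p, ap)
             for (xp, ap), p in zip(sigma, MD_blocks)]
    return merkle_root(roots)

d, k, seed = 4, 8, b"\x00" * 32          # tiny illustrative parameters
pk, sk = keygen(seed, d, k)
MD = [3, 7, 0, 12, 5, 9, 15, 1]          # a k-block "message digest"
assert recover_pk(sign(sk, MD, d, k), MD) == pk
```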

The hyper-tree nodes in F-SPHINCS+ are addressed by a pair (a, b), where a is the node's layer and b is its index within the layer. The root node is at layer 0, and the layer number of any other node is the layer number of its parent plus 1. All nodes within a layer are viewed as an ordered list, and each node in the list is indexed from left to right, starting from 0. F-SPHINCS+ consists of the following algorithms:

  • \(\textsf {keyGen}(n,q,h)\): This algorithm outputs \((sk, pk, fp)\). It takes as input a security parameter n, the degree q of non-leaf nodes in the hyper-tree, and the height h of the hyper-tree. Then it chooses d, k, the parameters for the underlying M-FORS signature scheme. The public parameters are \(fp=(n,q,h,d,k)\). It also chooses an n-bit random string as the private key sk. It generates the M-FORS key pair for the root node by calling \(\textsf {genNode}((0,0), sk,fp)\), and sets the public key pk to be the M-FORS public key \(pk_{0,0}\).

  • \(\textsf {genNode}(nodeAdr, sk,fp)\): This algorithm generates a node in the hyper-tree given the address \(nodeAdr=(a,b)\). With the private key sk used as a seed, the algorithm first generates a subseed with a pseudorandom function, \(\texttt {seed}_{a,b} = \textsf {prf}(seed, a||b)\), then it calls the M-FORS key generation algorithm \(\textsf {M-FORS.keyGen}(\texttt {seed}_{a,b}, n, d, k, a||b)\). The output \((pk_{a,b},sk_{a,b},mp_{a,b})\) is the content of the node at (a, b).

  • \(\textsf {mHash}(msg,gr)\): This algorithm produces the message hash and the leaf node index used in generating the F-SPHINCS+ signature. The input msg is the message to be signed, and gr is a random string. The algorithm produces \(MD||idx\leftarrow H_3(msg||gr)\), where \(H_3:\{0,1\}^*\rightarrow \{0,1\}^{d\cdot k+(\log _2 q)\cdot h}\) is a public hash function, MD is \(d\cdot k\) bits long, and idx is interpreted as a \((\log _2 q)\cdot h\)-bit unsigned integer.

  • \(\textsf {sign}(msg,sk,fp)\): This algorithm produces the F-SPHINCS+ signature as a chain of M-FORS signatures along the path from a leaf node to the root node of the hyper-tree. It chooses an n-bit random string gr and obtains \(MD||idx\leftarrow \textsf {mHash}(msg,gr)\). A leaf node at (h, idx) is then generated by calling \(\textsf {genNode}((h,idx), sk,fp)\). The M-FORS signing key \(sk_{h,idx}\) is used to sign MD and generate \(\sigma _0\). The parent node of (h, idx) is then generated by calling \(\textsf {genNode}((h-1,b), sk,fp)\), where \((h-1,b)\) is the address of the parent node. Then the parent secret key \(sk_{h-1,b}\) is used to sign the child public key \(pk_{h,idx}\), and this signature is \(\sigma _1\). The signing process is repeated until obtaining \(\sigma _h\), which is signed by \(sk_{0,0}\) on \(pk_{1,b'}\) for some \(b'\). The F-SPHINCS+ signature is then \(\varSigma =(gr,(\sigma _0,\cdots ,\sigma _h))\).

  • \(\textsf {verify}(msg,\varSigma ,pk,fp)\): This algorithm verifies every M-FORS signature chained up in \(\varSigma \). Given \(\varSigma = (gr, (\sigma _0, \cdots , \sigma _h))\), first compute \(MD||idx\leftarrow H_3(msg||gr)\). Then obtain \(pk_{0} \leftarrow \textsf {recoverPK}(\sigma _0,MD,mp_0)\), \(pk_1 \leftarrow \textsf {recoverPK}(\sigma _1,pk_0,mp_1)\), and repeat until \(pk_h \leftarrow \textsf {recoverPK}(\sigma _{h},pk_{h-1},mp_h)\). If \(pk=pk_h\), accept the signature, otherwise reject. (A sketch of this verification chain follows the list.)
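
A minimal sketch of this chain is given below. This is our own illustrative code: \(H_3\) and M-FORS \(\textsf {recoverPK}\) are passed in as callables rather than reimplemented, error handling is omitted, and the signature list is assumed to be ordered \(\sigma _0,\cdots ,\sigma _h\) from the leaf layer up, as above.

```python
from typing import Callable, List, Tuple

def f_sphincs_verify(H3: Callable[[bytes, bytes], Tuple[bytes, int]],
                     recover_pk: Callable[[object, bytes, tuple], bytes],
                     rpk: bytes, msg: bytes, gr: bytes,
                     sigs: List[object], params: tuple) -> bool:
    """Verify an F-SPHINCS+ signature (gr, (sigma_0, ..., sigma_h)).

    H3 plays the role of mHash's hash (it returns (MD, idx));
    recover_pk plays the role of M-FORS.recoverPK.
    """
    n, q, h, d, k = params
    MD, idx = H3(msg, gr)
    current = MD                                   # sigma_0 signs the message hash
    for level, sigma in enumerate(sigs):           # from leaf layer h up to root layer 0
        node = (h - level, idx // (q ** level))    # address of the signing node
        current = recover_pk(sigma, current, (n, d, k, node))
    return current == rpk                          # the last recovered key must equal rpk
```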

Remark 1

In the M-FORS algorithms, we use two tweakable hash functions [3], \(H_1:\{0,1\}^*\rightarrow \{0,1\}^n\) and \(H_2:\{0,1\}^*\rightarrow \{0,1\}^{d\cdot k}\). Almost all hash operations are done using \(H_1\). \(H_2\) is only used to compute, from the k-th Merkle tree, the \(k\cdot d\)-bit M-FORS public key, so that when used in F-SPHINCS+ the public key is of the right size to be signed by the parent node. If M-FORS is used as a stand-alone signature scheme, these two hash functions can be the same.

Remark 2

The tweakable hash functions follow Construction 7 for tweakable hash functions in [3]. Namely, the hash of an input M is produced by calling a hash function with additional input as \(H(\texttt {P}||\texttt {ADD}||M)\), where \(\texttt {P}\) is a public hash key and \(\texttt {ADD}\) acts as the tweak. The tweak is the address where the hash operation takes place within the hyper-tree, and it is a five-part string \(a_1||b_1||v||a_2||b_2\) (a minimal sketch of this construction is given after the list):

  • \((a_1,b_1)\), where \(0\le a_1\le h, 0\le b_1\le 2^{a_1}-1\), is the address of a hyper-tree node. Within the node, an M-FORS key pair based on \(k+1\) Merkle trees is stored.

  • \(0\le v\le k\) is the index of a Merkle tree in the M-FORS key pair stored in the hyper-tree node \((a_1,b_1)\). When \(0\le v\le k-1\), the Merkle tree (of height d) is used to sign the v-th block of the message; when \(v=k\), the Merkle tree (of height \(\lceil \log _2k\rceil \)) is used to accumulate the roots of all the previous Merkle trees into the public key.

  • \((a_2,b_2)\) is the address of a Merkle tree node. When \(0\le v\le k-1\), \(0\le a_2\le d\) and \(0\le b_2\le 2^{a_2}-1\); when \(v= k\), \(0\le a_2\le \lceil \log _2k\rceil -1\) and \(0\le b_2\le 2^{a_2}-1\).
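
A minimal sketch of the tweakable hash call is given below, assuming SHA-256 as the underlying hash and illustrative field widths for the tweak; the paper fixes only the structure \(a_1||b_1||v||a_2||b_2\), so the packing here is our own choice.

```python
import hashlib

def tweakable_hash(P: bytes, a1: int, b1: int, v: int, a2: int, b2: int, M: bytes) -> bytes:
    """H(P || ADD || M): P is the public hash key, ADD the five-part tweak."""
    ADD = (a1.to_bytes(4, "big") + b1.to_bytes(8, "big") +
           v.to_bytes(2, "big") + a2.to_bytes(2, "big") + b2.to_bytes(4, "big"))
    return hashlib.sha256(P + ADD + M).digest()

# e.g. hashing two sibling nodes at Merkle position (a2=3, b2=5) of tree v=2
# inside the hyper-tree node (a1=1, b1=0):
digest = tweakable_hash(b"\x00" * 16, 1, 0, 2, 3, 5, b"left-child" + b"right-child")
```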

The security analysis of F-SPHINCS+ is given in Sect. 4.

3.2 The DAA Scheme

Overall, the DAA signature scheme is designed as follows: the issuer generates an F-SPHINCS+ key pair as the group master key pair. When a user (a TPM and its host) joins the group, the TPM generates a secret signing key. The issuer decides whether the user should be admitted into the group; if so, a group credential is generated as an F-SPHINCS+ signature on an entry token (a commitment to the user's signing key). The credential is accessible to the host. When signing a message, the TPM and its host work together to produce an MPC-in-the-head (MPCitH) non-interactive zero-knowledge (NIZK) proof showing that they possess a group credential and that the signature is generated, under the key authorized by the group credential, on the hash of the message and a random data string. We have created a novel approach that allows the TPM and its host each to make a partial signature, a DAA signature being a combination of the two. In particular, the TPM proves its possession of the signing key and the host proves the credential. These two proofs are glued seamlessly in a zero-knowledge manner. Verifying the DAA signature involves checking the NIZK proof, so the verifier is convinced of group membership. Each DAA signature also includes a link token, which is essentially a pseudorandom function output of a basename bsn produced using the signing key as the secret. This link token is used for user-controlled linkability, key-based revocation and link-based revocation.

We now present the concrete construction of algorithms and protocols.

  • Initialization Init(n): Given a security parameter n, the issuer does the following: choose the hyper-tree node degree q and the tree height h, the values (d, k) for the underlying M-FORS scheme, a pseudorandom function prf, three hash functions \(H_1:\{0,1\}^*\rightarrow \{0,1\}^n\), \(H_2:\{0,1\}^*\rightarrow \{0,1\}^{d\cdot k}\), \(H_3:\{0,1\}^*\rightarrow \{0,1\}^{d\cdot k+(\log _2 q)\cdot h}\), and a keyed pseudorandom function \(F:\{0,1\}^n\times \{0,1\}^n\rightarrow \{0,1\}^n\); run \((sk,rpk,gp)\leftarrow \textsf {F-SPHINCS+.keyGen}(n,q,h)\), where (rpk, sk) is the F-SPHINCS+ key pair and \(gp=(n,q,h,d,k)\) are the hyper-tree parameters; publish \(mpk=(gp, ~rpk, ~H_1,~H_2,~H_3,~F,~\textsf {prf})\) and keep \(msk=sk\) private. The issuer provides a non-interactive zero-knowledge (NIZK) proof \(\pi _{\mathcal {I}}\) to demonstrate that the key pair is generated correctly, meaning that the secret and public keys are associated with each other. This NIZK proof can be achieved by signing its own public key rpk using \(\textsf {F-SPHINCS+.sign}\), which is similar to the issuer creating a group membership credential in the joining protocol described below. In addition, the issuer initializes a group list \({\textbf {GL}}\), and each verifier initializes two revocation lists: a key revocation list keyRL and a link token revocation list linkRL. All these lists are empty when initialized.

  • DAA joining protocol Join(msk, mpk): The joining protocol is run between a user (a TPM and its host) and the issuer. Note that this protocol involves the authentication of the TPM by the issuer. The issuer has an authentic copy of the TPM's endorsement key, which is used to establish a secure and authenticated channel between the TPM and the issuer. In the following protocol description, the existence of such a channel is assumed; the reader is referred to [25] for details of how to establish it. The protocol includes the following steps:

    1. A unique session ID u is assigned to the user. For simplicity we can think of the session ID as a monotonically increasing counter, and each invocation of the joining protocol increases it by 1. Alternatively, the value u can be computed from the TPM's endorsement key, which is unique to the TPM.

    2. The TPM chooses a random secret key \(sk_u {\mathop {\leftarrow }\limits ^{R}} \{0, 1\}^n\) as its signing key.

    3. The host computes the group identifier \(gid = H_1(rpk)\) and sends it to its TPM.

    4. The TPM then generates and returns its entry token \(et_u = F(sk_u, gid)\) together with the NIZK proof \(\pi _u\):

      $$\begin{aligned} \pi _u:\mathcal {P}\{(gp,~gid, ~et_u);(sk_u)| et_u = F(sk_u,~gid)\} \end{aligned}$$
    5. The host then chooses a random string \(cr {\mathop {\leftarrow }\limits ^{R}} \{0,1\}^n\) and computes a commitment \(ct = H_1(et_u||cr)\). The host sends (u, ct) to the issuer to request joining the group.

    6. Upon receiving (u, ct), the issuer checks whether an entry with the same u is in \({\textbf {GL}}\). If so, it rejects the user. Otherwise, if the issuer would like to accept the user, the issuer chooses a random string \(gr_u{\mathop {\leftarrow }\limits ^{R}}\{0,1\}^n\) and sends it to the host, who responds by sending \((et_u, cr, \pi _u)\) back. The issuer verifies \(ct=H_1(et_u||cr)\) and the NIZK proof \(\pi _u\). If both verifications pass, the issuer computes the group credential \((gr_u,{\textbf {S}})\leftarrow \textsf {F-SPHINCS+.sign}(et_u||gr_u,~msk,~gp)\); otherwise the issuer rejects the user. The credential is sent to the TPM through the secure and authenticated channel between the TPM and the issuer, and then forwarded to the host. The issuer adds \((u,~et_u,~gr_u,{\textbf {S}})\) to \({\textbf {GL}}\).

    7. The user, if accepted by the issuer, sets its group membership secret key \(gsk_u=(sk_u, ~gr_u,~{\textbf {S}})\). More specifically, the TPM will record \(sk_u\) and the host will record the remaining values.

  • DAA signature generation DSig\((gsk_u,msg,bsn)\): To produce a DAA signature on a message msg and a basename bsn, the TPM and its host jointly create a DAA signature using \(gsk_u=(sk_u, \ gr_u, \ {\textbf {S}})\) as follows:

    1. The host computes the link identifier \(lid = H_1(bsn)\), the signature identifier \(sid=H_1(msg||str)\), where \(str{\mathop {\leftarrow }\limits ^{R}}\{0,1\}^n\), and the group identifier \(gid = H_1(rpk)\), and sends these three identifier values to the TPM.

    2. The TPM computes the group membership entry token \(et_u = F(sk_u, gid)\), the signature link token \(slt= F(sk_u, lid)\) and the signature signing token \(sst=F(sk_u,sid)\), together with the NIZK proof \(\pi _{\texttt{D}_{\texttt{T}}}\). The TPM then sends sst and \(\pi _{\texttt{D}_{\texttt{T}}}\) back to the host.

      $$\begin{aligned} \pi _{\texttt{D}_{\texttt{T}}}:&\mathcal {P}\{(gp,~sid,~gid,~lid,~slt,~hk,~cet_u); (sk_u,~sst,~et_u)| \\ {}&slt = F(sk_u,~lid) \wedge sst = F(sk_u,~sid) \wedge et_u = F(sk_u,~gid) \\ {}&\wedge hk = H_1(sst) \wedge cet_u=F(sst,~et_u)\} \end{aligned}$$

      Note that \(\pi _{\texttt{D}_{\texttt{T}}}\) proves that these three tokens are computed under the same \(sk_u\) and also provides a hook \((hk, cet_u)\), which allows the host to carry on proving the group credential for \(et_u\).

    3. The host then computes \(mt_u||idx=H_3(et_u||gr_u)\) and \(com=H_1(sst||pk_h|| \cdots ||rpk)\), where \(pk_h,\cdots ,rpk\) are the public keys for verifying the signatures in \({\textbf {S}}\), from layer h to layer 0 (the public key at layer 0 is rpk). Here \(H_3(et_u||gr_u)\) is used as \(\textsf {F-SPHINCS+.mHash}(et_u,gr_u)\). The host also computes an NIZK proof \(\pi _{\texttt{D}_{\texttt{H}}}\):

      $$\begin{aligned} \pi _{\texttt{D}_{\texttt{H}}}:&\mathcal {P}\{(gp, ~rpk, ~slt, ~com, ~hk, ~cet_u); (et_u, ~sst,~gr_u,~{\textbf {S}}=\{\sigma _h,\cdots , \sigma _0\})| \\&hk = H_1(sst) \wedge cet_u = F(sst,~et_u) \wedge mt_u||idx=H_3(et_u|| gr_u) \\&\wedge pk_h=\textsf {recoverPK}(\sigma _h,mt_u,(n,d,k,(h,idx)))\\&\wedge pk_{h-1}= \textsf {recoverPK}(\sigma _{h-1},pk_h,(n,d,k,(h-1,\lfloor \frac{idx}{q}\rfloor )))\wedge \cdots \\&\wedge rpk=\textsf {recoverPK}(\sigma _{0},pk_1,(n,d,k,(0,0)))\\&\wedge com=H_1(sst||pk_h||\cdots ||rpk) \} \end{aligned}$$
    4. The signature is \(\varSigma = (str, ~slt, ~com, ~\pi _\texttt{D})\), where \(\pi _\texttt{D}\) is the combination of \(\pi _{\texttt{D}_{\texttt{T}}}\) and \(\pi _{\texttt{D}_{\texttt{H}}}\), i.e., \(\pi _\texttt{D} = (\pi _{\texttt{D}_{\texttt{T}}}, \pi _{\texttt{D}_{\texttt{H}}})\). hk and \(cet_u\), appearing in both \(\pi _{\texttt{D}_{\texttt{T}}}\) and \(\pi _{\texttt{D}_{\texttt{H}}}\), play the role of gluing these two MPCitH instances together. From a verifier's point of view, \(\pi _\texttt{D}\) provides the following NIZK proof:

      $$\begin{aligned} \pi _\texttt{D}:&\mathcal {P}\{(gp, ~rpk,~gid, ~sid,~lid,~slt, ~com); \\&(sk_u,et_u, ~sst,~gr_u,~{\textbf {S}}=\{\sigma _h,\cdots , \sigma _0\})| \\&slt=F(sk_u,~lid) \wedge sst=F(sk_u,~sid)\wedge et_u=F(sk_u,~gid) \\&\wedge mt_u||idx=H_3(et_u|| gr_u) \\&\wedge pk_h=\textsf {recoverPK}(\sigma _h,mt_u,(n,d,k,(h,idx)))\\&\wedge pk_{h-1}= \textsf {recoverPK}(\sigma _{h-1},pk_h,(n,d,k,(h-1,\lfloor \frac{idx}{q}\rfloor )))\wedge \cdots \\&\wedge rpk=\textsf {recoverPK}(\sigma _{0},pk_1,(n,d,k,(0,0)))\\&\wedge com=H_1(sst||pk_h||\cdots ||rpk) \} \end{aligned}$$

      More details of \(\pi _\texttt{D}\) will follow in Sect. 3.3.

  • DAA signature verification \(\textsf {DVf}(msg\), bsn, \(\varSigma \), \({\textbf {keyRL}}\), \({\textbf {linkRL}}):\) Given \(\varSigma =(str, slt, com, \pi _\texttt{D}), msg, bsn\), together with two revocation lists keyRL and linkRL, the verifier first rejects \(\varSigma \) if \((bsn, slt) \in {\textbf {linkRL}}\). Otherwise, the verifier recomputes \(lid=H_1(bsn)\), and \(\forall {sk_u^*} \in {\textbf {keyRL}}\) computes \(slt^* = F(sk_u^*, lid)\). If any \(slt^* = slt\), the verifier rejects \(\varSigma \). Otherwise, the verifier verifies \(\pi _\texttt{D}\), accepting if the verification succeeds and rejecting otherwise. (A sketch of these checks is given after this list.)

  • DAA link algorithm \(\textsf {Link}(\varSigma , \varSigma '):\) Given two valid DAA signatures \(\varSigma =(str, slt, com, \pi _\texttt{D})\) and \(\varSigma ' =(str', slt', com', \pi '_\texttt{D})\) associated with the same bsn, the verifier checks if \(slt = slt'\) holds. If so output linked, otherwise not linked.

  • DAA revocation: There are two cases to revoke the group membership of the user u: (1) given \(sk_u\), a verifier adds it to keyRL; (2) given a pair (bsn, slt) associated with a DAA signature signed by the user u to be revoked, a verifier adds this pair to linkRL.
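
The revocation checks referred to above reduce to simple list membership tests. The sketch below uses our own function names, with HMAC-SHA-256 and SHA-256 as illustrative stand-ins for \(F\) and \(H_1\), and the verification of \(\pi _\texttt{D}\) abstracted as a callable:

```python
import hashlib, hmac
from typing import Callable, Iterable, Set, Tuple

def H1(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def F(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()    # stand-in PRF

def daa_verify(msg: bytes, bsn: bytes, sigma,               # sigma = (str_, slt, com, proof)
               keyRL: Iterable[bytes],
               linkRL: Set[Tuple[bytes, bytes]],
               verify_nizk: Callable[..., bool]) -> bool:
    str_, slt, com, proof = sigma
    if (bsn, slt) in linkRL:                                # link-token revocation
        return False
    lid = H1(bsn)
    for sk_star in keyRL:                                   # rogue-key revocation
        if F(sk_star, lid) == slt:
            return False
    return verify_nizk(msg, bsn, str_, slt, com, proof)     # finally check pi_D

def daa_link(sigma1, sigma2) -> bool:
    """Two valid signatures under the same bsn link iff their link tokens match."""
    return sigma1[1] == sigma2[1]
```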

3.3 The Proof \(\pi _\texttt{D}\)

The most important part of the DAA signature \(\varSigma = (str, ~slt, ~com, ~\pi _\texttt{D} = (\pi _{\texttt{D}_{\texttt{T}}}, \pi _{\texttt{D}_{\texttt{H}}}))\) is the proof \(\pi _\texttt{D}\). In this section we dissect it to show the design rationale and explain two changes we made to MPC-in-the-Head, which greatly improve the efficiency and may be of independent interest.

As \(\varSigma \) is a signature on a message msg, the foremost thing \(\pi _\texttt{D}\) needs to prove is that the signer knows a group signing key \(gsk_u=( ~sk_u,~gr_u, ~{\textbf {S}})\) and that it was used to sign msg. Besides that, \(\pi _\texttt{D}\) also needs to prove that \(gsk_u\) is authorized by the issuer. To do that, in \(\pi _\texttt{D}\) the following is done:

  1. It proves that the same signing key \(sk_u\) is used to generate three values \(et_u\), slt and sst, where \(et_u\) is bound with the group root public key rpk (as it is computed from \(gid = H_1(rpk)\)), slt is bound with the basename bsn (as it is computed from \(lid = H_1(bsn)\)), and sst is bound with the message msg and random string str (as it is computed from \(sid=H_1(msg||str)\)). slt is revealed in \(\varSigma \), while \(et_u\) and sst are hidden.

  2. It proves that the two revealed values slt and com are produced using the same \(sk_u\). It binds com to sst (by using sst in computing com). The commitment com also binds \(\varSigma \) to all public keys used to blindly verify the signatures in \({\textbf {S}}\).

  3. It proves that \(mt_u\), which is computed from \(et_u\), is signed under a private key in a leaf node of the hyper-tree generated by the group issuer. This is done by verifying all the signatures in \({\textbf {S}}\) such that \(mt_u\) and \(\sigma _h\) produce the leaf public key \(pk_h\), which in turn with \(\sigma _{h-1}\) produces \(pk_{h-1}\), and so on until reaching the root. The last public key produced is rpk, which is published by the group issuer. All public keys recovered in this process must match those committed in the commitment com.

The first challenge for implementing \(\pi _\texttt{D}\) with MPCitH comes from splitting the signer role into two parts, the principal signer TPM and the assistant signer host, where the TPM holds \(sk_u\) and the host holds \({\textbf {S}}\). A straightforward choice is to let the TPM and host be involved in the same MPCitH instance. This would result in a large communication cost between these two entities. Our solution is to split \(\pi _\texttt{D}\) into two MPCitH instances, \(\pi _{\texttt{D}_{\texttt{T}}}\) and \(\pi _{\texttt{D}_{\texttt{H}}}\), each performed by one entity. The difficulty then is how to glue these two instances together seamlessly in a zero-knowledge manner. We let \((sst, et_u)\) serve as a hidden hook and \(hk = H_1(sst)\) and \(cet_u = F(sst, et_u)\) as a commitment to sst and \(et_u\). Both \(\pi _{\texttt{D}_{\texttt{T}}}\) and \(\pi _{\texttt{D}_{\texttt{H}}}\) include the same MPCitH proofs of hk and \(cet_u\). The collision resistance of the functions F and \(H_1\) guarantees that the same pair \((sst, et_u)\) is used in \(\pi _{\texttt{D}_{\texttt{T}}}\) and \(\pi _{\texttt{D}_{\texttt{H}}}\). The preimage resistance of these two functions guarantees that neither \(et_u\) nor sst is revealed. The MPC instance of \(\pi _{\texttt{D}_{\texttt{T}}}\) is shown in MPCitH 1.

MPCitH 1: the MPC instance of \(\pi _{\texttt{D}_{\texttt{T}}}\).

Let us first introduce the notation used in such an MPCitH algorithm: \(\llbracket x \rrbracket \) means that the value x is secret-shared when used in the MPC algorithm, i.e. it is known by the prover but not the verifier. MPC_X denotes the MPC subroutine implementing the function X (e.g. MPC_F, MPC_H1, MPC_H2 and MPC_H3 implement F, \(H_1\), \(H_2\) and \(H_3\)). This notation will be used throughout the paper. Based on [41], in an implementation \(\textsf {MPC\_F}\) can be used as a building block for the hash functions that we need.

In MPCitH 1, the TPM performs the \(\textsf {MPC\_F}\) algorithm four times and the \(\textsf {MPC\_H1}\) algorithm once when computing the signature link token \(slt = \textsf {MPC\_F}(\llbracket sk_u \rrbracket \), lid), the signature signing token \(\llbracket sst \rrbracket = \textsf {MPC\_F}(\llbracket sk_u \rrbracket , sid)\), the entry token \(\llbracket et_u \rrbracket = \textsf {MPC\_F}(\llbracket sk_u \rrbracket , gid)\), the hash value \(hk = \textsf {MPC\_H1}(\llbracket sst \rrbracket )\), and the connection entry token \(cet_u = \textsf {MPC\_F}(\llbracket sst \rrbracket \), \(\llbracket et_u \rrbracket )\). These five operations are performed in the same MPCitH knowledge-proof routine, where \(sk_u\), sst and \(et_u\) are kept secret. The TPM outputs the proof along with slt, hk, and \(cet_u\). The proof demonstrates that the same \(sk_u\) value was used in steps 1) - 3), while steps 4) and 5) are used to pass sst and \(et_u\) to the host, which allows the latter to carry on the MPCitH knowledge-proof \(\pi _{\texttt{D}_{\texttt{H}}}\) for the DAA credential associated with \(et_u\). In an implementation this reduces to 5 calls to \(\textsf {MPC\_F}\).
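
In the clear, i.e. ignoring the MPCitH machinery and the secret sharing, the computation proved by \(\pi _{\texttt{D}_{\texttt{T}}}\) is just the following five evaluations (HMAC-SHA-256 and SHA-256 are again our illustrative stand-ins for \(F\) and \(H_1\)):

```python
import hashlib, hmac

def H1(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def F(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def tpm_tokens(sk_u: bytes, lid: bytes, sid: bytes, gid: bytes):
    """The five values computed inside MPCitH 1 (here, without secret sharing)."""
    slt = F(sk_u, lid)       # signature link token      (public output)
    sst = F(sk_u, sid)       # signature signing token   (stays hidden)
    et_u = F(sk_u, gid)      # group entry token         (stays hidden)
    hk = H1(sst)             # commitment to sst         (public output)
    cet_u = F(sst, et_u)     # binds sst and et_u for the host's proof (public output)
    return slt, sst, et_u, hk, cet_u
```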

The second challenge for implementing \(\pi _\texttt{D}\) with MPCitH comes from the cost of the \(h+1\) M-FORS signature verifications required by the proof in \(\pi _{\texttt{D}_{\texttt{H}}}\). Recall that in an M-FORS signature (Sect. 3.1, also the example in Fig. 2), the message hash to be signed is broken into k blocks, and each block is authenticated with a Merkle tree of height d. Then the k Merkle tree roots are organized into a new Merkle tree whose root is the public key. Verifying the full signature means checking whether the public key can be recovered from the message hash, the secret strings corresponding to the hash blocks \(({\textbf {x}}^{(i)}_{p_i})\), and the hashes along the Merkle tree authentication paths. In total, to verify a single M-FORS signature, \(k\cdot (d+1)+(k-1)=kd+2k-1\) hashes are needed, which is on the order of \(10^2\) for a practical setting (with an extra factor of 2 if implementing with \(\textsf {MPC\_F}\)). The \(h+1\) factor means that if implemented naively, the MPC used in \(\pi _{\texttt{D}_{\texttt{H}}}\) would need to call the sub-procedure that implements the hash function thousands of times, and the size of the circuit for the whole MPC can easily go above a million gates. Even worse, to reduce the soundness error, the same circuit needs to be executed tens to hundreds of times in an MPCitH proof. Thus, a naive implementation of \(\pi _{\texttt{D}_{\texttt{H}}}\) would result in a very large signature size and a high computational cost.

Fig. 3. M-FORS partial verification.

Our more efficient strategy for implementing \(\pi _{\texttt{D}_{\texttt{H}}}\) is as follows: in MPCitH, rather than repeating t times an MPC procedure in which the M-FORS signatures are fully verified, we run \(t'\ge k\) MPC procedures in which the M-FORS signatures are partially verified, one block in each run (see the example of partial verification in Fig. 3). More precisely, we extend M-FORS with the following algorithms:

  • \(\textsf {partial}\hbox {-}\textsf {sig}(\sigma ,MD,i,mp)\): to extract a partial signature of the i-th block of MD from \(\sigma =\{(x_0,{\textbf {authpath}}^{(0)}), \cdots , (x_{k-1},{\textbf {authpath}}^{(k-1)})\}\). The Merkle tree \(\texttt {mt}_{k}\) can be recomputed from \(\sigma \). The partial signature is \(\partial _{\sigma ,i}=(x_{i}, {\textbf {authpath}}^{(i)}\), \({\textbf {authpath}}^{(k,i)})\) where \((x_{i}, {\textbf {authpath}}^{(i)})\) is a copy of the i-th pair in \(\sigma \), and \({\textbf {authpath}}^{(k,i)}\) is the authentication path of \(r_i\) (the root of the i-th Merkle tree) in \(\texttt {mt}_{k}\).

  • \(\textsf {partial}\hbox {-}\textsf {rec}(\partial _{\sigma ,i},p_i,i,mp)\): This algorithm recovers the public key from \(\partial _{\sigma ,i}\) and \(p_i\). Given \(\partial _{\sigma ,i}= (x, {\textbf {authpath}}, {\textbf {authpath}}')\), first compute the Merkle tree root \(r_i\) from \((x,{\textbf {authpath}},p_i)\), then compute the Merkle tree root pk from \((r_i,{\textbf {authpath}}',i)\). Output pk.

With partial-rec, only one path is used to recover the M-FORS public key instead of k paths.
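
A toy sketch of \(\textsf {partial}\hbox {-}\textsf {rec}\) is given below (our own code, with SHA-256 standing in for the tweakable hash): two authentication-path walks replace the k walks of a full verification.

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def root_from_path(leaf, idx, path):
    """Recompute a Merkle root from one leaf, its index and its authentication path."""
    node = leaf
    for sib in path:
        node = H(node, sib) if idx % 2 == 0 else H(sib, node)
        idx //= 2
    return node

def partial_rec(partial_sig, p_i, i):
    """partial_sig = (x, authpath, authpath_k): recover the M-FORS public key from
    block i only: first the root r_i of mt_i, then the root of mt_k (the public key)."""
    x, authpath, authpath_k = partial_sig
    r_i = root_from_path(H(x), p_i, authpath)    # root of the i-th Merkle tree
    return root_from_path(r_i, i, authpath_k)    # root of mt_k = the public key
```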

The MPC procedure for proving the v-th block in \(\pi _{\texttt{D}_{\texttt{H}}}\) is shown in MPCitH 2. The first two steps of this algorithm are the same as steps 4) and 5) in MPCitH 1. This duplication glues the TPM part \(\pi _{\texttt{D}_{\texttt{T}}}\) and the host part \(\pi _{\texttt{D}_{\texttt{H}}}\) together.

MPCitH 2: the MPC procedure for proving the v-th block in \(\pi _{\texttt{D}_{\texttt{H}}}\).

The host uses partial signatures in the MPC. Recall that in the group signing key \(gsk_u\), a list \({\textbf {S}}=\{\boldsymbol{\sigma }_h,\cdots , \boldsymbol{\sigma }_0\}\) of \(h+1\) signatures is stored, one for each layer in the hyper-tree of F-SPHINCS+. The signer can extract a partial signature for the v-th block from each signature, i.e. \(\{\partial _{\sigma _h,v}, ~\cdots , ~\partial _{\sigma _0,v}\}\). In Line 8, an MPC subroutine MPC_pRec that implements partial-rec is used. This subroutine uses the input to compute the corresponding public key at the l-th layer in the hyper-tree (stored in \(\llbracket M \rrbracket \) and also appended to \(\llbracket COM \rrbracket \)). After the last iteration, \(\llbracket COM \rrbracket \) is hashed and \(\llbracket M \rrbracket \) is revealed. The results will be checked by the verifier to see whether they match com and rpk. If so, the signer is likely to possess valid partial signatures along the path from the idx-th leaf node to the root node in the hyper-tree.

Why does this strategy make sense? In an MPCitH proof, the same procedure is run multiple times. Each run has a soundness error \(\epsilon \), the probability that a cheating prover can get away without being detected. Thus t runs are needed so that \(\epsilon ^t\) is negligibly small. In our case, the main cost of the MPC procedure comes from verifying all the M-FORS signatures. The full verification requires every block of the message digest or the child public key to be verified. Our observation is that if a prover has to cheat, then with high probability it has to cheat in more than one block. If the prover has to cheat in n out of k blocks, then using partial verification with \(t'\), such that \(t'\cdot n/k\ge t\), ensures that the prover has to cheat in more than t runs, and hence has a negligible success probability. As we analyzed, an implementation with full signature verification requires \(t\cdot (h+1)\cdot (k\cdot d+2k-1)\) calls to the MPC hash procedure. The partial verification based implementation, on the other hand, requires only \(t'\cdot (h+1)\cdot (d+1+\lceil \log k\rceil )\) MPC hash calls. The improvement is roughly \(\frac{tk}{t'}\) times.
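
As an illustration (the parameter values below are ours and purely illustrative): with \(d=16\) and \(k=35\), a full M-FORS verification costs \(kd+2k-1=629\) MPC hash calls per hyper-tree layer in each run, whereas a partial verification costs \(d+1+\lceil \log _2 k\rceil =23\), giving a ratio of

$$\begin{aligned} \frac{t\cdot (h+1)\cdot 629}{t'\cdot (h+1)\cdot 23}\approx 27.3\cdot \frac{t}{t'}. \end{aligned}$$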

The soundness analysis of \(\pi _D\) is given in Sect. 5.

4 Security Analysis of F-SPHINCS+

The standard security definition for digital signature schemes is existential unforgeability under adaptive chosen-message attacks (EU-CMA). It can be extended to few-time signatures by limiting the adversary's calls to the sign oracle to \(q_s\), where \(q_s\) is the maximum number of signatures that the few-time signature scheme is allowed to generate for each signing key. Let \(SIG=(kg,sign,vf)\) be a \(q_s\)-time signature scheme; Fig. 4 shows the \(q_s\)-EU-CMA game.

Fig. 4. \(q_s\)-EU-CMA game.

Definition 1

( \(q_s\)-EU-CMA). Let SIG be a digital signature scheme. It is said to be \(q_s\)-EU-CMA secure, if for any adversary A, the following holds:

$$\begin{aligned}&\text {Succ}_{SIG}^{q_s\text {-EU-CMA}}(A(n))=\text {Pr}\left[ {\text {Exp}}_{\text {SIG},A}^{q_s\text {-EU-CMA}}(n)=1\right] \le negl(n) \end{aligned}$$

Theorem 1

Following the definitions of SM-TCR (single function, multi-target collision resistance), SM-DSPR (single function, multi-target decisional second-preimage resistance), TSR (target subset resilience), and ITSR (interleaved target subset resilience) given in [3], for suitable parameters n, d, k, h, q, the F-SPHINCS+ signature scheme is \(q^h\)-EU-CMA secure if:

  • \(H_1\) is SM-TCR and SM-DSPR secure;

  • \(H_2\) is TSR secure with at most q queries;

  • \(H_3\) is ITSR secure with at most \(q^h\) queries;

  • \(\textsf {prf}\) is a secure pseudorandom function.

Proof

To successfully forge an issuer’s signature on a message M chosen by the adversary, there are the following mutually exclusive cases:

  1. Let \(MD||idx=H_3(M||gr)\) for some gr. In the forged signature, all secret strings corresponding to \(MD=p_0||\cdots ||p_{k-1}\), i.e. \(\{{\textbf {x}}^{(i)}_{p_i}\}_{i=0}^{k-1}\), are the same as generated from \(\texttt {leaf}_{idx}\)'s secret key. This case consists of the following sub-cases:

    1.1. The adversary learns all secret strings from signatures obtained in the query phase.

    1.2. Some secret strings are not leaked from previous signatures, and for each of them, the adversary either:

      1.2.1. learns it by breaking the pseudorandom function that is used to expand the secret key into \({\textbf {x}}_i\);

      1.2.2. or learns it by looking at their \(H_1\) hash values and finding the pre-images.

  2. Let \(MD||idx=H_3(M||gr)\) for some gr. In the forged signature, some secret strings corresponding to \(MD=p_0||\cdots ||p_{k-1}\), i.e. \(\{{\textbf {x}}^{(i)}_{p_i}\}_{i=0}^{k-1}\), are NOT the same as generated from \(\texttt {leaf}_{idx}\)'s secret key. Let S be the list of \(h+1\) M-FORS signatures in the forged signature; then we can find i such that when verifying the i-th signature \((0\le i\le h)\), we obtain the same public key as would be generated by the signer, but for all \(0\le j<i\), we obtain a different public key from the one that would be generated by the signer. This means:

    2.1. The adversary has found at least one second-preimage of \(H_1\), so that some Merkle trees in the i-th signature are computed with the second-preimages. They end up having the same roots as the trees computed by the issuer.

    2.2. The adversary knows all secret strings corresponding to the public key produced from verifying the \((i-1)\)-th signature. This public key is different from the public key at the same location generated by the issuer. This can be done by either:

      2.2.1. learning all of them from previous signature queries;

      2.2.2. or breaking the pseudorandom function;

      2.2.3. or finding some pre-images of \(H_1\).

Given the above, we analyze the F-SPHINCS+ signature scheme through a series of games:

Game 0: The original EU-CMA game in which the adversary needs to forge a valid issuer’s signature after \(q_s\) queries.

Game 1: Exactly as Game 0 except that all outputs of prf are replaced by truly random n-bit strings. This modification eliminates Cases 1.2.1 and 2.2.2 from the above list. Since each call to prf uses a secret key and a distinct value as input, assuming prf is a pseudorandom function, we have:

$$ |\text {Succ}^{Game0}(A(n)) - \text {Succ}^{Game1}(A(n))|\le negl(n) $$

Game 2: Game 2 differs from Game 1 in that we consider the adversary lost if the adversary outputs a forgery by breaking the ITSR security of \(H_3\). This modification eliminates from the above list Case 1.1. The winning condition in Fig. 4 is changed to:

  • Return 1 iff \(ITSR(H_3,M^*)=0\wedge vf(pk,M^*,\sigma ^*)=1 \wedge M^*\not \in \{M_i\}_{i=1}^{q^h}\).

The predicate ITSR is defined as follows:

  • Let \(M^*\) be the message that the adversary chooses to generate the forgery on, and \(gr^*\) the random string used by the adversary to compute \(MD^*||idx^*=H_3(M^*||gr^*)\).

  • Parse \(MD^*=p_0^*||\cdots ||p_{k-1}^*\) where each \(p_j^*\in [0,2^d-1]\). From the above we obtain a set \(C^*=((idx^*,0,p_0^*),\cdots ,(idx^*,k-1,p_{k-1}^*))\).

  • For each message \(M_i\) (\(1\le i\le q^h\)) queried in the query phase, with \(gr_i\) the random string used, compute \(MD_i||idx_i=H_3(M_i||gr_i)\) and obtain \(C_i=((idx_i,0,p_{i,0}),\cdots ,(idx_i,k-1,p_{i,k-1}))\).

  • Return 1 iff \(C^*\subseteq \bigcup _{i=1}^{q^h}C_i\).

We can see that \(ITSR(H_3,M^*)=1\) iff the adversary can break the ITSR security of \(H_3\). Hence, we have:

$$ |\text {Succ}^{Game1}(A(n)) - \text {Succ}^{Game2}(A(n))|\le \text {Succ}_{H_3,q^h}^{ITSR}(A)\le negl(n) $$

Game 3: Game 3 differs from Game 2 in that we consider the adversary lost if the forgery contains a second preimage for an input to \(H_1\) that was part of a signature returned as a signing-query response. Here the second preimage can be included explicitly in the signature, or implicitly observed when verifying the signature. This eliminates from the above list Case 2.1. Then we have:

$$ |\text {Succ}^{{Game2}}(A(n)) - \text {Succ}^{{Game3}}(A(n))|\le \text {Succ}_{H_1,q}^{SM-TCR}(A)\le negl(n) $$

Game 4: Game 4 differs from Game 3 in that we consider the adversary lost if the adversary outputs a forgery by breaking the TSR security of \(H_2\), which allows the adversary to forge an intermediate signature in \({\textbf {S}}\), and then any signature earlier in the chain. This eliminates from the above list Case 2.2.1. The winning condition in Fig. 4 is changed to:

  • Return 1 iff \(TSR(H_2,M^*)=0\wedge ITSR(H_3,M^*)=0\wedge vf(pk,M^*,\sigma ^*)=1 \wedge M^*\not \in \{M_i\}_{i=1}^{q^h}\).

The predicate TSR is defined as follows:

  • The adversary chooses an intermediate node in the hyper-tree at address (a, b), and two n-bit strings \(L^*,R^*\).

  • For each signature obtained in the query phase, if \({\textbf {S}}_i\) includes a signature generated using the secret key in node (a, b) over the public key in one of its child nodes, parse this public key into k blocks of d bits each, \(pk_i=p_{i,0}||\cdots ||p_{i,k-1}\), and generate a set \(C_i=\{( j,p_{i,j})\}_{j=0}^{k-1}\).

  • Compute \(pk^*=H_2(\texttt {aux}||k||0||0||L^*||R^*)\), parse \(pk^*\) into \(p_{0}^*||\cdots ||p_{k-1}^*\), and generate a set \(C^*=\{( j,p_{j}^*)\}_{j=0}^{k-1}\).

  • Return 1 iff \(C^*\subseteq \bigcup _{i=1}^{q}C_i\).

Note that each M-FORS public key is the root of a Merkle tree generated from pseudorandom strings. Also, each intermediate node in the hyper-tree has at most q children, hence no more than q signatures signed by the secret key in this intermediate node can be obtained by the adversary. So \(TSR(H_2,M^*)=1\) iff the adversary can break the TSR security of \(H_2\). Hence, we have:

$$ |\text {Succ}^{Game3}(A(n)) - \text {Succ}^{Game4}(A(n))|\le \text {Succ}_{H_2,q}^{TSR}(A)\le negl(n) $$

Now the cases in which the adversary can forge a signature are all eliminated except Cases 1.2.2 and 2.2.3, which require the adversary to find a pre-image of at least one hash value produced by \(H_1\). The success probability of finding a pre-image is as analyzed in [3]:

$$\begin{aligned} Succ^{Game4}(A)&\le 3\cdot Succ_{H_1,p}^{SM-TCR}(A)+\text {Adv}_{H_1,p}^{SM-DSPR}(A) \le negl(n) \end{aligned}$$

So overall, the advantage of the adversary is negligible.
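Collecting the game hops, and writing \(\text {Succ}^{prf}(A)\) for the distinguishing advantage against prf, the overall advantage in the original EU-CMA game is bounded by the sum of the individual terms above; schematically:

$$ \text {Succ}^{Game0}(A(n)) \le \text {Succ}^{prf}(A)+\text {Succ}_{H_3,q^h}^{ITSR}(A)+\text {Succ}_{H_1,q}^{SM-TCR}(A)+\text {Succ}_{H_2,q}^{TSR}(A)+3\cdot \text {Succ}_{H_1,p}^{SM-TCR}(A)+\text {Adv}_{H_1,p}^{SM-DSPR}(A)\le negl(n) $$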

TSR Security of \(H_2\). In any case, at most q signatures can be generated under the secret key of a non-leaf node in the hyper-tree. Assuming the adversary knows all of them, the probability that the secret string for a given block of the chosen \(pk^*\) has been leaked is \(1-(1-\frac{1}{2^d})^q\), so the probability that all k secret strings have been leaked is \((1-(1-\frac{1}{2^d})^q)^k\). For \(d={16},q=1024,k=68\), this probability is \(2^{-468.87}\); for \(k=35\), it is \(2^{-210.39}\).
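The expression above is straightforward to evaluate numerically. The following Python sketch (ours, purely illustrative) computes its base-2 logarithm in the log domain; for \(d=16, q=1024, k=35\) it returns roughly \(-210.4\), in line with the figure quoted above.

    from math import exp, log1p, log2

    def tsr_leak_log2(d, q, k):
        """log2 of (1 - (1 - 2^-d)^q)^k, i.e. the probability that the
        secret strings of all k blocks of the chosen pk* have leaked."""
        p_block = 1.0 - exp(q * log1p(-(2.0 ** -d)))  # 1 - (1 - 2^-d)^q
        return k * log2(p_block)

    print(tsr_leak_log2(16, 1024, 35))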

ITSR Security of \(H_3\). A leaf node of the hyper-tree may have been used to sign \(\gamma \) messages out of the total \(q_s\) signature queries. So the probability that all the secret strings of a chosen message M are leaked through queries is:

$$ \sum _\gamma (1-(1-\frac{1}{2^d})^\gamma )^k{\left( {\begin{array}{c}q_s\\ \gamma \end{array}}\right) }(1-\frac{1}{q^h})^{q_s-\gamma }\frac{1}{q^{h\gamma }} $$

For \(d={16},q=1024,k=68, h=6, q_s=2^{60}\), this probability is \(2^{-407.32}\); for \(k=35\), it is \(2^{-208.95}\).

5 Soundness Analysis of \(\pi _\texttt{D}\)

In \(\pi _\texttt{D}\), k instances of MPC are run. In the i-th instance, the partial verification procedure is used to verify every M-FORS signature in \({\textbf {S}}\), but only for the i-th block of the hash value being signed. Out of the k blocks, the adversary may have learned the secret strings corresponding to \(\lambda _1\) blocks through queries, and has to cheat in all the remaining \(k-\lambda _1\) blocks. For each MPC instance, the verifier opens the views of a subset of the MPC parties, and a cheating prover can be detected with probability \(1-\epsilon \). Therefore, if using an MPC protocol without pre-processing, the soundness error is:

$$ \sum _{i=0}^{k} \text {Pr}[\lambda _1=i] \cdot \epsilon ^{k-i} $$

If using an MPC protocol with pre-processing, the adversary can also cheat in the pre-processing phase. If the adversary cheats in \(\lambda _2\) (out of M) copies of the pre-processing data, and is not detected when the pre-processing data is checked (this probability is denoted by \(\text {Succ}^{pre}(\lambda _2,k,M)\)), then it only needs to cheat in \(k-\lambda _1-\lambda _2\) MPC instances. The soundness error is:

$$ \sum _{i=0}^{k} \text {Pr}[\lambda _1=i]\left( \sum _{\lambda _2=0}^{k-\lambda _1} \text {Succ}^{pre}(\lambda _2,k,M)\cdot \epsilon ^{k-\lambda _1-\lambda _2}\right) $$

As a concrete example, let us consider a case in which we implement \(\pi _\texttt{D}\) using KKW [45]. Then we have:

$$ \text {Pr}[\lambda _1=i]={\left( {\begin{array}{c}k\\ i\end{array}}\right) }(1-(1-2^{-d})^q)^i((1-2^{-d})^q)^{k-i}, $$
$$ \text {Succ}^{pre}(\lambda _2,k,M)=\frac{{\left( {\begin{array}{c}M-\lambda _2\\ M-k\end{array}}\right) }}{{\left( {\begin{array}{c}M\\ M-k\end{array}}\right) }}, ~~~ \epsilon =\frac{1}{N} $$

In the above, d, k, q are the parameters of the M-FORS signature, M is the number of pre-processing copies generated, and N is the number of MPC parties. When \(d=16, k=70,q=1024,M=1120\), and \(N=16\), the soundness error is \(2^{-257.769}\); when \(d=16, k=35,q=1024,M=560\), and \(N=16\), the soundness error is \(2^{-128.987}\).
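For reference, the soundness error can be evaluated directly from the three formulas above. The following Python sketch is ours and purely illustrative; it evaluates the expression as written for the two parameter sets mentioned in the text.

    from math import comb, log2

    def kkw_soundness_error(d, k, q, M, N):
        """Sum over lambda_1 and lambda_2 of
        Pr[lambda_1] * Succ_pre(lambda_2, k, M) * eps^(k - lambda_1 - lambda_2)."""
        p = 1 - (1 - 2.0 ** -d) ** q       # Pr[a given block's secret string has leaked]
        eps = 1.0 / N                      # per-instance cheating probability
        total = 0.0
        for l1 in range(k + 1):            # lambda_1: blocks leaked through queries
            pr_l1 = comb(k, l1) * p ** l1 * (1 - p) ** (k - l1)
            inner = 0.0
            for l2 in range(k - l1 + 1):   # lambda_2: corrupted pre-processing copies
                succ_pre = comb(M - l2, M - k) / comb(M, M - k)
                inner += succ_pre * eps ** (k - l1 - l2)
            total += pr_l1 * inner
        return total

    for (d, k, q, M, N) in [(16, 70, 1024, 1120, 16), (16, 35, 1024, 560, 16)]:
        print((d, k, q, M, N), log2(kkw_soundness_error(d, k, q, M, N)))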

6 UC Security Model for DAA

Security in the Universal Composability (UC) framework follows the simulation-based paradigm, where a protocol is secure when it is as secure as an ideal functionality that performs the desired tasks in a way that is secure by design. In this framework, an environment \(\mathcal {E}\) provides the protocol parties with inputs and receives their outputs. The network is controlled by an adversary \(\mathcal {A}\) that may communicate freely with \(\mathcal {E}\). The framework includes an ideal world and a real world. In the ideal world, the parties forward their inputs to the ideal functionality \(\mathcal {F}\), which then (internally) performs the defined task and creates outputs that are forwarded to \(\mathcal {E}\) by the parties. A real-world protocol \(\varPi \) is said to securely realize a functionality \(\mathcal {F}\) if the real world is indistinguishable from the ideal world, meaning that for every adversary performing an attack in the real world, there is an ideal-world adversary (often called simulator) \(\mathcal {S}\) that performs the same attack in the ideal world. More precisely, a protocol \(\varPi \) is secure if for every adversary \(\mathcal {A}\), there exists a simulator \(\mathcal {S}\) such that no environment \(\mathcal {E}\) can distinguish executing the real world with \(\varPi \) and \(\mathcal {A}\) from executing the ideal world with \(\mathcal {F}\) and \(\mathcal {S}\). Another key point of UC is the composition theorem: it guarantees that security is preserved under composition with arbitrary sets of parties and computational tasks. This allows the protocol to be analysed modularly, as UC-security proofs for any subroutine used by \(\varPi \) carry over to the analysis of the entire protocol.

Fig. 5. UC security model for DAA

Now we employ the UC model for the security of our DAA protocol \(\varPi \). Figure 5 depicts the network topology of the real and ideal worlds. The ultimate goal is to prove the completeness and soundness of the DAA protocol by proving that an adversary cannot gain any significant advantage when monitoring the operations and interacting tasks that take place in the real world; i.e., this must be indistinguishable from the case where all the DAA internal phases are executed in the ideal world. Security of our DAA protocol \(\varPi \) is captured by the fact that every attack that \(\mathcal {A}\) mounts in the real world can be carried out by \(\mathcal {S}\) in the ideal world. Protocol security is implied since such attacks cannot be mounted in the ideal world. The outputs that \(\mathcal {E}\) retrieves from the execution of \(\varPi \) in the ideal world with \(\mathcal {S}\) and from the execution of \(\varPi \) with the real-world entities and \(\mathcal {A}\) are then indistinguishably distributed. This ensures that a real-world DAA protocol \(\varPi \) securely realizes all internal cryptographic tasks (e.g., JOIN, SIGN, VERIFY, and LINK) if for any real-world adversary \(\mathcal {A}\) that interacts with the DAA players running \(\varPi \), there exists an ideal-world simulator \(\mathcal {S}\) that interacts with the ideal functionality \(\mathcal {F}\) and the notional entities executing the DAA protocol, so that no probabilistic polynomial-time environment \(\mathcal {E}\) can distinguish whether it is interacting with the real-world adversary \({\mathcal {A}}\) or the ideal-world adversary \(\mathcal {S}\).

We follow the UC security model for DAA given by Camenisch et al. in [16], where the ideal functionality \(\mathcal {F}\) assumes static corruptions, i.e., the adversary decides upfront which parties are corrupt and makes this information known to the functionality. The UC framework allows us to focus the analysis on a single protocol instance with a globally unique session identifier sid. \( \mathcal {F}\) uses session identifiers of the form \( sid = (\mathcal {I}, sid')\) for some issuer \(\mathcal {I}\) and a unique string \( sid '\).

The ideal functionality \( \mathcal {F}\) is further parametrized by a leakage function \(l: \{0,1\}^* \rightarrow \{0,1\}^*\), which models the information leakage that occurs in the communication between a host \(\mathcal {H}_j\) and its TPM \(\mathcal {M}_j\). We define \(\mathcal {F}\) by using two “macros” to determine whether a TPM’s signing key \(sk_u\) is consistent with the internal functionality records. This is checked at several places in the functionality and also depends on whether \(sk_u\) belongs to an honest or corrupt TPM. The first macro \(\textsf{CheckTtdHonest}\) is used when the functionality stores a new TPM key \(sk_u\) that belongs to an honest TPM, and checks that none of the existing valid signatures is identified as belonging to this TPM key. The second macro \(\textsf{CheckTtdCorrupt}\) is used when storing a new \(sk_u\) that belongs to a corrupt TPM, and checks that the new \(sk_u\) does not break the identifiability of signatures, i.e., it checks that there is no other known TPM key \(sk'_u\), unequal to \(sk_u\), such that both keys are identified as the owner of a signature. Both macros output a bit b, where \(b=1\) indicates that the new \(sk_u\) is consistent with the stored information, whereas \(b=0\) signals an invalid key. We identify the \(\textsf{JOIN}\) and \(\textsf{SIGN}\) sub-sessions by jsid and ssid. In addition, \(\mathcal {F}\) maintains a group member list \(\texttt {ML} \), a key record list \(\texttt {DomainKeys} \), a signature record list \({\texttt {Signed} }\), and a verification result list VerResults.
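To make the two macros concrete, the following Python sketch (ours; the record shapes and the argument order of identify are illustrative and follow the algorithm definitions below) captures the checks they perform.

    # signed: signature records (sigma, msg, bsn) that F currently treats as valid.
    # known_keys: all TPM keys recorded in ML and DomainKeys.
    # identify: the deterministic algorithm registered by the simulator.

    def check_ttd_honest(sk_new, signed, identify):
        """A fresh honest TPM key must not be identified as the signer of
        any signature that F already considers valid."""
        return all(identify(sk_new, sigma, msg, bsn) == 0
                   for (sigma, msg, bsn) in signed)

    def check_ttd_corrupt(sk_new, signed, known_keys, identify):
        """A new corrupt TPM key must not break identifiability: no signature
        may be identified by both sk_new and some other known key."""
        for (sigma, msg, bsn) in signed:
            if identify(sk_new, sigma, msg, bsn) == 1:
                if any(sk != sk_new and identify(sk, sigma, msg, bsn) == 1
                       for sk in known_keys):
                    return False
        return True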

We adopt two sub-functionalities introduced in [16], which are available to all parties. The first one is a certificate authority functionality \(\mathcal {F}_{ca}\) that allows the issuer to register their public key. The second is the common reference string functionality \(\mathcal {F}_{crs}\), which is used to provide all entities with the system parameters, comprising the random seed used to generate the commitments and the issuer’s public key. Note that for the communication between the TPM and the issuer (via the host) in the join protocol, we adopt the key binding protocol introduced in [25], which provides a secure and authenticated channel between the TPM and the issuer even in the presence of a corrupt host; therefore the semi-authenticated channel \(\mathcal {F}^{*}_{auth}\) is not needed in our model. We now define the algorithms that will be used inside the ideal functionality as follows:

  • ukgen(n): A probabilistic algorithm that takes a security parameter n as input and generates a key \(sk_u\) for an honest TPM.

  • sign \((sk_u,msg,bsn)\): A probabilistic algorithm used by an honest TPM; input is a key \(sk_u\), a message msg and a basename bsn, and output is a signature \(\varSigma \).

  • verify\(( \varSigma , msg, bsn)\): A deterministic algorithm that is used in the \(\textsf{VERIFY}\) interface. On input of a signature \(\varSigma \), a message msg and a basename bsn, it outputs \(f=1\) if the signature is valid, and \(f=0\) otherwise.

  • link\((\varSigma _{1}, msg_{1}, \varSigma _{2}, msg_{2},bsn)\): A deterministic algorithm that is used in the \(\textsf{LINK}\) interface. Given two signatures with the same bsn, it outputs 1 if both \(\varSigma _1\) and \(\varSigma _2\) were generated by the same TPM, and outputs 0 otherwise.

  • identify\((sk_u,\varSigma , msg,bsn)\): A deterministic algorithm that is used to ensure consistency with the ideal functionality \(\mathcal {F}\)’s internal records. It outputs 1 if a key \(sk_u\) was used to produce a signature \(\varSigma \), and outputs 0 otherwise.

We explain the interfaces of the ideal functionality \( \mathcal {F}\) in the UC framework:

Setup

  1. 1.

    Issuer Setup. On input \((\textsf{SETUP}, sid )\) from issuer \(\mathcal {I}\),

    • Verify that \( sid = (\mathcal {I}, sid')\) and output \((\textsf{SETUP}, sid )\) to \(\mathcal {S}\).

  2. 2.

    Set Algorithms. On input \((\textsf{ALG}\), \( sid \), \({\textsf{ukgen}}\), \(\textsf{sign}\), \({\textsf{verify}}\), \({\textsf{link}}\), \({\textsf{identify}})\) from \(\mathcal {S}\),

    • Check that \({\textsf{verify}}\), \({\textsf{link}}\) and \({\textsf{identify}}\) are deterministic (i).

    • Store \(( sid , {\textsf{ukgen}}, \textsf{sign}, {\textsf{verify}}, {\textsf{link}}, {\textsf{identify}})\) and output \((\textsf{SETUPDONE}, sid )\) to \(\mathcal {I}\).

Join

  1. 3.

    Join Request. On input \((\textsf{JOIN}, sid , jsid, \mathcal {M}_j)\) from host \(\mathcal {H}_j\),

    • Create a join session record \(\langle jsid, \mathcal {M}_j, \mathcal {H}_j, status \rangle \) with \( status \leftarrow request \).

    • Output \((\textsf{JOINSTART}, sid , jsid, \mathcal {M}_j, \mathcal {H}_j)\) to \(\mathcal {S}\).

  2. 4.

    Join Request Delivery. On input \((\textsf{JOINSTART}\), \( sid \), jsid) from \(\mathcal {S}\),

    • Update the session record \(\langle jsid, \mathcal {M}_j, \mathcal {H}_j, status \rangle \) to \( status \leftarrow delivered \).

    • Output \((\textsf{JOINPROCEED}, sid , jsid, \mathcal {M}_j)\) to \(\mathcal {I}\).

  3. 5.

    Join Proceed. On input \((\textsf{JOINPROCEED}, sid , jsid)\) from \(\mathcal {I}\),

    • Update the session record \(\langle jsid, \mathcal {M}_j, \mathcal {H}_j, status \rangle \) to \( status \leftarrow complete \).

    • Output \((\textsf{JOINCOMPLETE}, sid , jsid)\) to \(\mathcal {S}\).

  4. 6.

    Platform Key Generation. On input \((\textsf{JOINCOMPLETE}\), \( sid \), jsid, \(sk_{j})\) from \(\mathcal {S}\),

    • Look up record \(\langle jsid, \mathcal {M}_j, \mathcal {H}_j, status \rangle \) with \( status = complete \).

    • Abort if \(\mathcal {I}\) or \(\mathcal {M}_j\) is honest and a record \(\langle \mathcal {M}_j, *, *\rangle \in \texttt {ML} \) already exists (ii).

    • If \(\mathcal {M}_j\) and \(\mathcal {H}_j\) are honest, set \(sk_j \leftarrow \bot \).

    • Else, verify that the provided \(sk_j\) is eligible by checking

      • \(\textsf{CheckTtdHonest}\)(\(sk_j\)) = 1 if \(\mathcal {H}_j\) is corrupt (iii) and \(\mathcal {M}_j\) is honest, or

      • \(\textsf{CheckTtdCorrupt}\)(\(sk_j\)) = 1 if \(\mathcal {M}_j\) is corrupt (iv).

    • Insert \(\langle \mathcal {M}_j, \mathcal {H}_j, sk_j \rangle \) into \(\texttt {ML} \) and output \((\textsf{JOINED}, sid , jsid)\) to \(\mathcal {H}_j\).
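As an illustration, the Platform Key Generation step can be summarised by the following Python sketch (ours; honesty flags, record shapes and helper names are illustrative), mirroring checks (ii)-(iv).

    def platform_key_generation(ml, tpm, host, sk, tpm_honest, host_honest,
                                issuer_honest, check_ttd_honest, check_ttd_corrupt):
        """ml mirrors the member list ML as (tpm, host, key) records; the two
        check_* arguments are closures over F's records (cf. the macro sketch)."""
        if (issuer_honest or tpm_honest) and any(m == tpm for (m, _, _) in ml):
            raise RuntimeError("abort")        # (ii) TPM already registered
        if tpm_honest and host_honest:
            sk = None                          # fully honest platform: key chosen at signing time
        elif tpm_honest and not host_honest:
            assert check_ttd_honest(sk)        # (iii)
        elif not tpm_honest:
            assert check_ttd_corrupt(sk)       # (iv)
        ml.append((tpm, host, sk))
        return "JOINED"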

Sign

  1. 7.

    Sign Request. On input \((\textsf{SIGN}, sid , ssid, \mathcal {M}_j, msg, bsn)\) from host \(\mathcal {H}_j\),

    • If \(\mathcal {I}\) is honest and no entry \(\langle \mathcal {M}_j, \mathcal {H}_j, *\rangle \) exists in \(\texttt {ML} \), abort.

    • Create a sign session record \(\langle ssid, \mathcal {M}_j, \mathcal {H}_j, msg, bsn, status \rangle \) with \( status \leftarrow request \).

    • Output \((\textsf{SIGNSTART}, sid , ssid, l(msg, bsn), \mathcal {M}_j, \mathcal {H}_j)\) to \(\mathcal {S}\).

  2. 8.

    Sign Request Delivery. On input \((\textsf{SIGNSTART}\), \( sid \), ssid) from \(\mathcal {S}\),

    • Update the session record \(\langle ssid, \mathcal {M}_j, \mathcal {H}_j, msg, bsn, status \rangle \) to \( status \leftarrow delivered \).

    • Output \((\textsf{SIGNPROCEED}, sid , ssid, msg, bsn)\) to \(\mathcal {M}_j\).

  3. 9.

    Sign Proceed. On input \((\textsf{SIGNPROCEED}, sid , ssid)\) from \(\mathcal {M}_j\),

    • Look up record \(\langle ssid, \mathcal {M}_j, \mathcal {H}_j, msg, bsn, status \rangle \) with \( status = delivered \).

    • Output \((\textsf{SIGNCOMPLETE}, sid , ssid)\) to \(\mathcal {S}\).

  4. 10.

    Signature Generation. On input \((\textsf{SIGNCOMPLETE}\), \( sid \), ssid, \(\varSigma )\) from \(\mathcal {S}\),

    • If \(\mathcal {M}_j\) and \(\mathcal {H}_j\) are honest, ignore the adversary’s signature and internally generate the signature for a fresh or established \(sk_j\):

      • If \(bsn \ne \bot \), retrieve \(sk_j\) from \(\langle \mathcal {M}_j, bsn, sk_j \rangle \in \texttt {DomainKeys} \) for \((\mathcal {M}_j, bsn)\). If no such \(sk_j\) exists or \(bsn = \bot \), set \(sk_j \leftarrow {\textsf{ukgen}}()\). Check \(\textsf{CheckTtdHonest}\)(\(sk_j) = 1\) (v) and store \(\langle \mathcal {M}_j, bsn, sk_j \rangle \) in \(\texttt {DomainKeys} \).

      • Compute signature as \(\varSigma \leftarrow \textsf{sign}\) \((sk_j, msg, bsn)\) and check \({\textsf{verify}}\) \((\varSigma , msg\), \(bsn) = 1\) (vi).

      • Check \({\textsf{identify}}(\varSigma , msg, bsn\), \(sk_j) = 1\) (vii) and check that there is no \(\mathcal {M}_j' \ne \mathcal {M}_j\) with key \(sk'_j\) registered in \(\texttt {ML} \) or \(\texttt {DomainKeys} \) such that \({\textsf{identify}}(\varSigma , msg\), \(bsn, sk'_j) = 1\) (viii).

    • If \(\mathcal {M}_j\) is honest, store \(\langle \varSigma , msg, bsn, \mathcal {M}_j \rangle \) in \({\texttt {Signed} }\).

    • Output \((\textsf{SIGNATURE}, sid , ssid, \varSigma )\) to \(\mathcal {H}_j\).
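The signature-generation step for an honest platform (step 10) can similarly be sketched as follows (ours and purely illustrative; alg bundles the algorithms registered by \(\mathcal {S}\), domain_keys mirrors \(\texttt {DomainKeys} \) as a dict keyed by (TPM, bsn), and other_keys are the keys of all other TPMs in \(\texttt {ML} \) and \(\texttt {DomainKeys} \)), mirroring checks (v)-(viii).

    def signature_generation(alg, domain_keys, other_keys, tpm, msg, bsn):
        sk = domain_keys.get((tpm, bsn)) if bsn is not None else None
        if sk is None:
            sk = alg.ukgen()
            # (v) CheckTtdHonest(sk) = 1 would be asserted here (see the macro sketch)
            domain_keys[(tpm, bsn)] = sk
        sigma = alg.sign(sk, msg, bsn)
        assert alg.verify(sigma, msg, bsn) == 1                 # (vi)
        assert alg.identify(sk, sigma, msg, bsn) == 1           # (vii)
        assert all(alg.identify(sk2, sigma, msg, bsn) == 0      # (viii)
                   for sk2 in other_keys)
        return sigma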

Verify

  1. 11.

    Verify. On input \((\textsf{VERIFY}, sid , msg, bsn, \varSigma , {\textbf {keyRL}}, {\textbf {linkRL}} )\) from some party \(\mathcal {V}\),

    • Retrieve all pairs \((sk_{j}, \mathcal {M}_j)\) from \(\langle \mathcal {M}_j, *, sk_{j} \rangle \in \texttt {ML} \) and \(\langle \mathcal {M}_j, *, sk_{j} \rangle \in \texttt {DomainKeys} \) where \({\textsf{identify}}(\varSigma , msg, bsn, sk_{j}) = 1\). Set \(f \leftarrow 0\) if at least one of the following conditions holds:

      • More than one key \(sk_j\) was found (ix).

      • \(\mathcal {I}\) is honest and no pair \((sk_j, \mathcal {M}_j)\) was found (x).

      • There is an honest \(\mathcal {M}_j\) but no entry \(\langle *, msg, bsn, \mathcal {M}_j \rangle \in {\texttt {Signed} }\) exists (xi).

      • There is a \(sk'_u \in {\textbf {keyRL}}\) where \({\textsf{identify}}(\varSigma , msg, bsn, sk'_u) = 1\) and no pair \((sk_j, \mathcal {M}_j)\) for an honest \(\mathcal {M}_j\) was found, or there exists \((slt',msg', bsn') \in {\textbf {linkRL}}\) such that \({\textsf{identify}}(slt', msg', bsn', sk_j) = 1\). (xii).

    • If f has not been set to 0, set \(f \leftarrow {\textsf{verify}}(\varSigma , msg, bsn)\) (xiii).

    • Add \(\langle \varSigma , msg, bsn, {\textbf {keyRL}}, {\textbf {linkRL}}, f \rangle \) to \(\texttt {VerResults} \), output \((\textsf{VERIFIED}, sid , f)\) to \(\mathcal {V}\).
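The decision logic of the Verify interface, i.e. checks (ix)-(xiii), can be sketched as follows (ours and purely illustrative; stored_keys lists (TPM, key, honest-flag) triples for all keys recorded in \(\texttt {ML} \) and \(\texttt {DomainKeys} \), and signed mirrors \({\texttt {Signed} }\) as (msg, bsn, TPM) records).

    def f_verify(alg, stored_keys, signed, issuer_honest,
                 sigma, msg, bsn, key_rl, link_rl):
        matches = [(tpm, sk, honest) for (tpm, sk, honest) in stored_keys
                   if alg.identify(sk, sigma, msg, bsn) == 1]
        f = None
        if len({sk for (_, sk, _) in matches}) > 1:                     # (ix)
            f = 0
        if issuer_honest and not matches:                               # (x)
            f = 0
        if any(honest and (msg, bsn, tpm) not in signed
               for (tpm, _, honest) in matches):                        # (xi)
            f = 0
        honest_match = any(honest for (_, _, honest) in matches)
        if ((any(alg.identify(sk, sigma, msg, bsn) == 1 for sk in key_rl)
             and not honest_match)
            or any(alg.identify(sk, slt, m, b) == 1
                   for (slt, m, b) in link_rl
                   for (_, sk, _) in matches)):                         # (xii)
            f = 0
        if f is None:                                                   # (xiii)
            f = alg.verify(sigma, msg, bsn)
        return f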

Link

  1. 12.

    Link. On input \((\textsf{LINK}, sid , \varSigma , msg, \varSigma ', msg', bsn\ne \bot )\) from some party \(\mathcal {V}\),

    • Output \(\bot \) to \(\mathcal {V}\) if at least one signature tuple \((\varSigma , msg, bsn)\) or \((\varSigma ', msg', bsn)\) is not valid (verified via the verify interface with \({\textbf {keyRL}} = \emptyset \) and \({\textbf {linkRL}}=\emptyset \)) (xiv).

    • For each \(sk_i\) in \(\texttt {ML} \) and \(\texttt {DomainKeys} \) compute \(b_i \leftarrow {\textsf{identify}}(\varSigma , msg, bsn, sk_i)\) and \(b_i' \leftarrow {\textsf{identify}}(\varSigma ', msg', bsn, sk_i)\) and do the following:

      • Set \(f\leftarrow 0\) if \(b_i \ne b'_i\) for some i (xv).

      • Set \(f \leftarrow 1\) if \(b_i = b_i' = 1\) for some i (xvi).

    • If f is not defined yet, set \(f \leftarrow {\textsf{link}}(\varSigma , msg, \varSigma ', msg', bsn)\).

    • Output \((\textsf{LINK}, sid , f)\) to \(\mathcal {V}\).
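Finally, the Link interface, i.e. checks (xiv)-(xvi), follows the same pattern (sketch ours; verify_fn runs the Verify interface with empty revocation lists, and stored_keys is as above).

    def f_link(alg, stored_keys, verify_fn, sigma1, msg1, sigma2, msg2, bsn):
        # (xiv) both tuples must be valid when verified with empty revocation lists
        if verify_fn(sigma1, msg1, bsn) != 1 or verify_fn(sigma2, msg2, bsn) != 1:
            return None                               # output bot to the caller
        f = None
        if any(alg.identify(sk, sigma1, msg1, bsn) != alg.identify(sk, sigma2, msg2, bsn)
               for (_, sk, _) in stored_keys):
            f = 0                                     # (xv) a key matches only one signature
        if any(alg.identify(sk, sigma1, msg1, bsn) == 1 and
               alg.identify(sk, sigma2, msg2, bsn) == 1
               for (_, sk, _) in stored_keys):
            f = 1                                     # (xvi) the same key matches both
        if f is None:
            f = alg.link(sigma1, msg1, sigma2, msg2, bsn)
        return f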

We highlight that our model captures all the security requirements discussed in Sect. 2.3 (correctness, anonymity and non-frameability):

  • The correctness of our scheme is guaranteed in our model. When an honest signer (including both the TPM and Host) successfully creates a signature, honest Verifiers will always accept this signature. This is due to the checks v, vi, vii, and viii performed by \(\mathcal {F}\) in the Sign interface.

  • The anonymity in our scheme is also guaranteed by \(\mathcal {F}\) due to the random choice of \(sk_j\) that will be later used for the construction of DAA signatures as part of the Sign interface. In the case of corrupt devices, the Simulator is allowed to provide a signature that will convey the signer’s identity, as the signing key can be extracted from the respective device key pair. This reflects that the anonymity of the DAA signer is guaranteed if both the TPM and the Host are honest.

  • The non-frameability property guarantees that a signature created by an adversary cannot be linked to a legitimate signature created by the target device; this is due to check ix in our model. CheckTtdHonest prevents registering an honest \(sk_j\) in the Join interface that matches an existing signature, so that conflicts can be avoided and signatures can always be traced back to the original signer. This ensures that honest signers are not revoked, due to the identify algorithm being deterministic in our model. For an adversary aiming to create a signature on a message that has not been signed by an honest device, checks x and xi in the Verify interface ensure the scheme’s unforgeability property, which dictates that it is computationally infeasible to maliciously forge signatures.

7 UC Security Proof of the DAA Scheme

7.1 High-Level Description of Our Proof

We start with the real-world protocol execution in Game 1. In the next game, we construct one entity C that runs the real-world protocol for all honest parties. Then we split C into two pieces, an ideal functionality \(\mathcal {F}\) and a simulator \(\mathcal {S}\) that simulates the real-world parties. Initially, we start with an “empty” functionality \(\mathcal {F}\). With each game, we gradually change \(\mathcal {F}\) and update \(\mathcal {S}\) accordingly, moving from the real world to the ideal world, and culminating in the full ideal functionality \(\mathcal {F}\) being realized as part of the ideal world, thus proving our proposed security model presented in Sect. 6. The ultimate goal of our proof is to establish the indistinguishability of Game 1 and Game 16, i.e., of the complete real world and the fully functional ideal world. This is done by proving that each game is indistinguishable from the previous one. We use the “\(\approx \)” sign to denote indistinguishability between games.

The ideal functionality \(\mathcal {F}\) is introduced in Game 3; at this stage \(\mathcal {F}\) only forwards its inputs to the simulator \(\mathcal {S}\) who simulates the real world. From Game 4 onward, \(\mathcal {F}\) starts executing the setup interface on behalf of the Issuer. Moving on to Game 5, \(\mathcal {F}\) handles simple verification and identification checks without performing any detailed checks at this stage; i.e., it only checks if the signer belongs to a revocation list separately. In Games 6–8, \(\mathcal {F}\) executes the Join interface while performing checks to keep the consistency of registered keys. It also adds checks that allow only the signers that have successfully been enrolled to create signatures. Game 9 proves the anonymity of our protocol by letting \(\mathcal {F}\) handle the sign queries on behalf of honest signers. To do this, \(\mathcal {F}\) creates signatures using freshly generated random keys instead of running the signing algorithm using the signer’s signing key. At the end of this game, we prove that by relying on the ZKP constructions, an external environment will notice no change from previous games where the real-world Sign algorithm was executed. Now moving to Games 10–16, we let \(\mathcal {F}\) perform all other checks that are explained in Sect. 6.

7.2 The DAA Scheme Proof

Due to the limited space, we provide a sketch of the security proof of the proposed DAA protocol, including a sequence of games based on the model of Camenisch et al. in [16]. A detailed proof will be given in the full paper. The proof in [16] is constructed under the Discrete Logarithm (DL) and Decisional Diffie-Hellman (DDH) assumptions and the unforgeability of the Camenisch-Lysyanskaya (CL) signatures. Other DAA signatures, such as [24, 35], are proved based on lattice hard problems, namely Ring-LWE and Ring-SIS, and their unforgeability rests on the modified Boyen or Dilithium signature schemes [6, 32]. In contrast to the previous DAA schemes, our game indistinguishability is based on the perfect simulation of the MPCitH-based NIZK proofs; the soundness, completeness and zero-knowledge properties of the proofs \(\pi _{\mathcal {I}}\) and \(\pi _D\); the unforgeability of the F-SPHINCS+ signature scheme; and the security properties of the tweakable hash functions \(H_1\), \(H_2\) and \(H_3\) and the pseudorandom function F. The sequence of games is as follows:

Proof

(sketch)

Game 1: (Real-World execution of the protocol): This is the start.

Game 2: (Introducing C): An entity C is introduced; C receives all inputs from the parties and simulates the real-world protocol for them. This is equivalent to Game 1.

Game 3: (Reconstruction of C): We now split C into two parts, \(\mathcal {F}\) and \(\mathcal {S}\), where \(\mathcal {F}\) behaves as an ideal functionality. \(\mathcal {F}\) receives all the inputs and forwards them to \(\mathcal {S}\), who simulates the real-world protocol for honest parties and sends the outputs to \(\mathcal {F}\). \(\mathcal {F}\) then forwards the outputs to the environment \(\mathcal {E}\). This game is simply Game 2 with a different structure, so Game 3 \(\approx \) Game 2.

Game 4: (\(\mathcal {F}\) handles the setup queries): \(\mathcal {F}\) now behaves differently in the setup interface and stores the algorithms for the issuer \(\mathcal {I}\). \(\mathcal {F}\) also does checks to ensure that the structure of \( sid \) is correct for an honest \(\mathcal {I}\), and aborts if not. In case \(\mathcal {I}\) is corrupt, \(\mathcal {S}\) extracts the secret key for \(\mathcal {I}\) and proceeds in the setup interface on behalf of \(\mathcal {I}\). Clearly \(\mathcal {E}\) will notice no change, so Game 4 \(\approx \) Game 3.

Game 5: (\(\mathcal {F}\) handles the verification and linking queries): \(\mathcal {F}\) now performs the verification and linking checks instead of forwarding them to \(\mathcal {S}\). There are no protocol messages and the outputs are exactly as in the real-world protocol. The only difference is that the verification algorithm used by \(\mathcal {F}\) does not contain a revocation check; \(\mathcal {F}\) performs this check separately, thus the outcomes are equal, so Game 5 \(\approx \) Game 4.

Game 6: (\(\mathcal {F}\) handles the join queries): The join interface of \(\mathcal {F}\) is now changed, and \(\mathcal {F}\) stores the joined member information in the member list \(\texttt {ML} \). If \(\mathcal {I}\) is honest, \(\mathcal {F}\) stores the secret key \(sk_u\), extracted from \(\mathcal {S}\), for corrupt TPMs. \(\mathcal {S}\) always has enough information to simulate the real-world protocol except when the issuer is the only honest party. In this case, \(\mathcal {S}\) does not know who initiated the join, since the host does not authenticate towards the issuer in the real world, so \(\mathcal {S}\) cannot make a join query with \(\mathcal {F}\) on a corrupt host’s behalf. To deal with this case, \(\mathcal {F}\) can safely choose any corrupt host and put it into \(\texttt {ML} \); the identities of hosts are only used to create signatures for platforms with an honest TPM or an honest host, so fully corrupted platforms do not matter. In the only remaining case, where the TPM has already been registered in \(\texttt {ML} \), \(\mathcal {F}\) may abort the protocol, but \(\mathcal {I}\) should have already tested this case before continuing with the query JOINPROCEED, hence \(\mathcal {F}\) will not abort. Thus in all cases, \(\mathcal {F}\) and \(\mathcal {S}\) can interact to simulate the real-world protocol, so Game 6 \(\approx \) Game 5.

Game 7: (\(\mathcal {F}\) knows the bsn and msg to be signed, or l(msg, bsn)): \(\mathcal {F}\) now no longer informs \(\mathcal {S}\) about the message and the basename that are being signed. If the whole signer is honest, \(\mathcal {S}\) can learn nothing about the message msg and the basename bsn. Instead, \(\mathcal {S}\) knows only the leakage l(msg, bsn). To simulate the real world, \(\mathcal {S}\) chooses a pair \((msg',\ bsn')\) such that \(l(msg', bsn')=l(msg, bsn)\). Therefore Game 7 \(\approx \) Game 6.

Game 8: (\(\mathcal {F}\) performs pre-sign checks): If \(\mathcal {I}\) is honest, \(\mathcal {F}\) only allows a signer that has joined to sign. An honest host will always check whether it has joined with its TPM in the real-world protocol, so there is no difference for honest hosts. Also, an honest TPM only signs when it has joined with the host before. In the case that an honest \( \mathcal {M}_{i}\) performs a join protocol with a corrupt host \(\mathcal {H}_{j}\) and the honest issuer, the simulator \(\mathcal {S}\) will make a join query with \(\mathcal {F}\) to ensure that \( \mathcal {M}_{i}\) and \(\mathcal {H}_{j}\) are in \(\texttt {ML} \). Therefore, Game 8 \(\approx \) Game 7.

Game 9: (\(\mathcal {F}\) handles the sign queries, i.e., simulating the TPM without knowing its secret): In this game, \(\mathcal {F}\) creates anonymous signatures for honest signers by running the algorithms defined in the setup interface. Let us start by defining Game \(9.k.k'\): in this game \(\mathcal {F}\) handles the first \( k'\) signing inputs of \({\mathcal {M}}_{k}\), and subsequent inputs are then forwarded to \(\mathcal {S}\). For \( i<k\), \(\mathcal {F}\) handles all the signing queries with \({\mathcal {M}}_{i}\) using the defined algorithms. For \(i>k\), \(\mathcal {F}\) forwards all signing queries with \({\mathcal {M}}_{i}\) to \(\mathcal {S}\), who creates signatures as before. From the definition of Game \(9.k.k'\), we note that Game 9.0.0 = Game 8. For increasing \( k'\), Game \(9.k.k'\) will at some stage be equal to Game \(9.(k+1).0\), because there can only be a polynomial number of signing queries to be processed. Therefore, for large enough k and \( k'\), \(\mathcal {F}\) handles all the signing queries of all TPMs, and Game 9 is indistinguishable from Game \(9.k.k'\). We now want to prove that Game \(9.k.(k'+1)\) is indistinguishable from Game \(9.k.k'\). Suppose that there exists an environment that can distinguish a signature of an honest party using \(sk_{u}\) from a signature using a different \(sk'_u\); then the environment can break the pseudorandomness of the function F.

The first \(j \le k'\) signing queries on behalf of \( {\mathcal {M}}_{k}\) are forwarded by \(\mathcal {F}\) to \(\mathcal {S}\), which calls the real-world protocol. Now suppose that \(\mathcal {E}\) is given a tuple \(\varSigma = (str, slt, com, \pi _\texttt{D})\) and is challenged to decide whether \(\varSigma \) is calculated from a uniformly random \(r \leftarrow \{0,1\}^{n}\) or from a certified TPM secret key \(sk_u\). In the reduction, we have to be able to simulate the TPM without knowing the secret \(sk_{u}\). The issuer’s zero-knowledge proof \(\pi _{\mathcal {I}}\) for the correctness of the master secret and public key pair allows the simulator \(\mathcal {S}\) to extract the master secret key. Furthermore, the zero-knowledge proof of the group membership credential \(\pi _\texttt{D}\) helps \(\mathcal {S}\) extract the TPM’s secret key \(sk_u\) for a corrupt TPM and create signatures on behalf of the TPM as in the real-world scenario. Let r be a randomly sampled key from \(\{0,1\}^{n}\) that will be used to generate signatures on behalf of honest TPMs rather than using the real TPM secret key \(sk_{u}\). Since the issuer’s secret key msk can be extracted from \(\pi _{\mathcal {I}}\), due to the soundness of the proof \(\pi _{\mathcal {I}}\) and access to \(\mathcal {F}_{crs}\), a credential can be created on \(et'_u=F(r,~gid)\) by running the signing algorithm of F-SPHINCS+, sign\((et'_u, msk, gp)\). After getting a credential on \(et'_u\), slt and sst are calculated as functions of r, i.e. \(slt=F(r,~lid)\) and \(sst=F(r,~sid)\). Then all other parts of the signature are computed exactly as in the real-world protocol (i.e. as when using the TPM’s \(sk_u\)). The commitment com is calculated as in our defined sign algorithm, and the proof \(\pi _\texttt{D}\) can then be perfectly simulated using the random secret r. Due to the zero-knowledge property of the proof \(\pi _\texttt{D}\) and the pseudorandom outputs of the function F, we argue that an external environment cannot distinguish between 1) a signature generated using the TPM’s \((sk_u, et_u)\) and 2) a signature generated using a random \((r, et'_u)\). Therefore, Game 9 \(\approx \) Game 8.

Game 10: (\(\mathcal {F}\) performs key consistency checks): When storing a new \(sk_u\), \(\mathcal {F}\) checks that \(\textsf{CheckTtdHonest}\)(\(sk_u)=1\) or \(\textsf{CheckTtdCorrupt}\)(\(sk_u)=1\). We want to show that these checks will always pass. In fact, valid signatures always satisfy \(slt=F(sk_u, lid)\), \(et_u=F(sk_u, gid)\), \((gr_u,{\textbf {S}})\leftarrow \textsf {F-SPHINCS+.sign}(et_u,msk,gp)\) and \(sst=F(sk_u, sid)\). By the soundness property of \(\pi _\texttt{D}\), there exists only one secret \(sk_u\) satisfying the slt construction, and there exists one sst that matches this signature, by the soundness of \(hk=\textsf {MPC\_H1}(\llbracket sst \rrbracket )\). Thus, \({\textsf{CheckTtdCorrupt}}(sk_u)=1\) will always give the correct output. On the other hand, the keys for honest TPMs are chosen uniformly at random from the exponentially large set \(\{0, 1\}^{n}\); due to the large min-entropy of the uniform distribution, the probability of sampling any particular \(sk_u\) is \(1/{2^n}\), which is negligible for large n. Thus, with overwhelming probability, there does not exist a signature already using the same \(sk_u\), which implies that \({\textsf{CheckTtdHonest}}(sk_u) =1\) will always give the correct output. Hence, Game 10 \(\approx \) Game 9.

Game 11: (\(\mathcal {F}\) checks the correctness of the protocol): In this game \(\mathcal {F}\) checks that any honestly generated signature \(\varSigma = (str, slt, com, \pi _\texttt{D})\) is always valid, due to the completeness property of \(\pi _\texttt{D}\) and the correctness of the F-SPHINCS+ signature. A valid proof \(\pi _\texttt{D}\) on the credential ensures that the credential has the correct structure, follows the correct authentication path, and always leads to the issuer’s public key rpk, due to the soundness of \(\pi _\texttt{D}\) and the correctness of the F-SPHINCS+ signature. Second, \(\mathcal {F}\) makes sure that \({\textsf{identify}}(\varSigma , msg, bsn, sk_u) = 1\); this is also achieved in the real-world protocol due to the soundness of \(\pi _\texttt{D}\). \(\mathcal {F}\) checks, using its internal records \(\texttt {ML} \) and \(\texttt {DomainKeys} \), that honest users are not sharing the same secret key \(sk_u\). If there exists a key \(sk'_u \ne sk_{u}\) in \(\texttt {DomainKeys} \) such that \(slt=F(sk'_u, lid)= F(sk_u, lid)\), then this breaks the collision resistance property of the function F. Therefore Game 11 \(\approx \) Game 10.

Game 12: (\(\mathcal {F}\) checks that valid signatures are deterministic): Check (ix) is added to ensure that there are no multiple \(sk_{u}\) values matching one signature. A signature \(\varSigma \) includes \( slt = F(sk_u,~lid)\), \(com = H_1(F(sk_u,~sid)||pk_h|| \cdots ||rpk)\) and \(\pi _\texttt{D}\). Due to the soundness of the function F and the proof \(\pi _\texttt{D}\), and also due to the collision resistance and second-preimage resistance of \(H_1\), two different keys cannot create the same signature and two different signatures cannot share the same \( sk_u\). Therefore a valid signature can be identified to one \(sk_u\) only. Hence, Game 12 \(\approx \) Game 11.

Game 13: (\(\mathcal {F}\) checks the unforgeability of the credential): To prevent accepting a signature that was not generated using a group membership credential issued by an honest issuer, \(\mathcal {F}\) adds Check (x). A credential is an F-SPHINCS+ signature on \(mt_u||idx\), using the tweakable hash functions \(H_1\), \(H_2\) and \(H_3\). Following the proof of Theorem 1 in Sect. 4, the F-SPHINCS+ signature scheme is unforgeable due to the security properties of \(H_1\), \(H_2\) and \(H_3\), so this check always passes, and Game 13 \(\approx \) Game 12.

Game 14: (\(\mathcal {F}\) checks the unforgeability of signatures): Check (xi) is added to \(\mathcal {F}\) to prevent an adversary from forging signatures using an honest signer’s credential key \(gsk_u=(sk_u, ~gr_u,~{\textbf {S}})\). As discussed before, a DAA signature \(\varSigma \) is a proof of the correct construction of slt, com and \(\pi _\texttt{D}\), which form a NIZK proof of an F-SPHINCS+ signature associated with a single key \(sk_u\). If the signature is verified, then due to the unforgeability of F-SPHINCS+, the binding property of the commitment scheme used to generate \(com=H_1(F(sk_u, sid)||pk_h|| \cdots ||rpk)\), and the soundness of the function F used to compute slt and com, an \(sk_u\) belonging to an honest TPM must be involved. Suppose the adversary uses a different key \(sk'_u\) to create this signature: due to the soundness of \(\pi _\texttt{D}\) analyzed in Sect. 5, the proof \(\pi _\texttt{D}\) cannot be simulated, except with negligible probability, unless \(sk'_u= sk_u\). So Game 14 \(\approx \) Game 13.

Game 15: (\(\mathcal {F}\) checks correct revocation): Check (xii) is added to \(\mathcal {F}\) to ensure that an honest TPM with key \(sk_u\) is not revoked. If there exists a matching revoked key \({sk_u^*} ~(\ne sk_u) \in {\textbf {keyRL}}\) such that \(slt = F(sk_u^*, lid) = F(sk_u, lid)\), then this breaks the collision resistance property of the function F. For the same reason, there does not exist \((slt', msg', bsn') \in {\textbf {linkRL}}\) such that \(slt' = F(sk'_u, lid') = F(sk_u, lid')\) and \(sk'_u \ne sk_u\). Therefore, our protocol ensures correct revocation. So Game 15 \(\approx \) Game 14.

Game 16: (\(\mathcal {F}\) checks the linkability): Checks (xv) and (xvi) of the ideal functionality \(\mathcal {F}\) that are related to link queries are now included. The output of \(\mathcal {F}\) based on these checks is still consistent with the output which the link algorithm would give: If there is an \(sk_u\) that matches two signatures signed under the same bsn, by the soundness of \(\pi _\texttt{D}\) we have that the pseudonyms based on the same \(sk_u\) must be equal, resulting in link outputting 1. If there is an \(sk_u\) that matches one signature but not the other, by the soundness of \(\pi _\texttt{D}\) we have that the pseudonyms slt that are not generated using \(sk_u\) must also differ from those generated by a different key \(sk'_u \ne sk_u\), which results in link outputting 0. Therefore, Game 16 \(\approx \) Game 15. This concludes the proof.

8 Conclusion

This paper proposes the first DAA scheme built from symmetric primitives, and the scheme has some interesting features. We make use of a modified SPHINCS+ signature as a group membership credential and use a Picnic-style signature to prove possession of that credential. Our DAA scheme splits the signer role between a TPM and its host and allows the TPM to have a much smaller workload than the host. The scheme can handle a large group size (up to \(2^{60}\)), which is suitable for rapidly growing trusted computing applications. This research topic is still at an early stage. Improving the performance of this DAA scheme is challenging; it will become possible if either a stateless hash-based signature scheme more efficient than F-SPHINCS+ or a more efficient Picnic-style signature scheme is developed.