
1 Introduction

Digital signatures are an important cryptographic tool for asserting the authenticity (source) and integrity of digital content. By virtue of these desired properties, every alteration of signed data necessarily invalidates the original signature. If one, however, considers handwritten signatures on paper documents, there are various scenarios where the handwritten signature is still visible (source authentication is still given), but the document contains several blacked-out sections. These sections are no longer readable and thus remain confidential. Examples of such sanitized documents are the public release of previously classified governmental documents, legal subpoenas for documents in court trials, or documents for medical or biomedical research [1, 13, 15].

It is clear that conventional digital signatures cannot serve as a means of source authentication in such scenarios, for the obvious reason that any alteration invalidates the signature. A naive solution would be to issue a fresh signature on a sanitized version of the respective document. However, this is often not possible (e.g., the signing key has already expired or is not available) or even undesirable (e.g., due to time or cost constraints).

1.1 Background on Sanitizable Signatures

To realize a controlled and limited sanitization of digitally signed content without signer interaction, various approaches to so-called sanitizable signatures have been introduced and refined over the years. Today, there are essentially two flavors of sanitizable signatures. The first one focuses on the removal (blacking-out) of designated parts, not necessarily conducted by a designated party (it could be everyone); it covers redactable signatures [20], content-extraction signatures [29] and the sanitizable signatures in [23]. The second one focuses on the replacement of designated parts conducted only by a designated party (the sanitizer) and covers sanitizable signatures as defined in [2] and follow-up work [48, 26]. For a separation of these flavors we refer the reader to [22].

In addition to the motivating examples above, sanitizable signatures have been shown to be a useful tool in various scenarios. Their applications include customizing authenticated multicast transmissions, database outsourcing (combating software piracy and unauthorized content distribution), remote integrity checking of outsourced data [14] and secure routing [2]. Moreover, they find applications in the context of public sector (open government) data [30], DRM licensing for digital content protection [11, 32], privacy protection in smart grids [24], privacy-aware management of audit-log data [19], health record disclosure [3] and anonymization [27], as well as identity management [28, 33]. On the more theoretical side, it has been shown how to build attribute-based anonymous credential systems from sanitizable signatures in a black-box fashion [12].

In this paper, we focus on sanitizable signatures in the vein of Ateniese et al. [2]. The basic idea behind such a scheme is that a message is split into fixed and modifiable (admissible) blocks, where each admissible block is replaced by a chameleon hash (a trapdoor collision-resistant hash) of this block, and the concatenation of all blocks is then signed. A sanitizer in possession of the trapdoor can then change each admissible block arbitrarily by computing collisions. Such a sanitizable signature scheme needs to satisfy (1) unforgeability, which says that no one except the honest signer and sanitizer can create valid signatures and sanitizations, respectively, (2) immutability, which says that a malicious sanitizer must not be able to modify any part of the message which has not been specified as admissible by the signer, (3) privacy, which says that all sanitized information is unrecoverable for anyone except signer and sanitizer, (4) transparency, which says that signatures created by the signer and the sanitizer are indistinguishable, and (5) accountability, which requires that a malicious signer or sanitizer is not able to deny authorship. These security properties were later rigorously defined in [4], where it is also shown that accountability implies unforgeability, transparency implies privacy (see footnote 1), and all other properties are independent. Later, the property of (strong) unlinkability [6, 8] has been introduced as an even stronger privacy property. Additionally, other properties such as (blockwise) non-interactive public accountability [7] have been proposed, and the model has also been extended to cover several signers and sanitizers simultaneously [10].
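To make this construction idea concrete, consider the following minimal sketch (ours, not from [2]; `sig` and `ch` are assumed generic interfaces for a signature scheme and a chameleon hash, not a concrete library): admissible blocks are chameleon-hashed before signing, and the trapdoor holder sanitizes by computing collisions.

```python
# Minimal sketch of the chameleon-hash idea behind [2]; `sig` and `ch`
# are assumed interfaces (sign/verify and hash/collision), not a
# concrete library.

class SanitizableSigSketch:
    def __init__(self, sig, ch):
        self.sig = sig   # ordinary signature scheme
        self.ch = ch     # chameleon (trapdoor collision-resistant) hash

    def sign(self, sk_sig, pk_ch, blocks, admissible):
        digests, rands = [], []
        for i, block in enumerate(blocks):
            if i in admissible:
                r = self.ch.sample_randomness()
                digests.append(self.ch.hash(pk_ch, block, r))
                rands.append(r)
            else:
                digests.append(block)  # fixed blocks are signed directly
                rands.append(None)
        return self.sig.sign(sk_sig, b"|".join(digests)), rands

    def sanitize(self, sk_ch, blocks, rands, i, new_block):
        # A collision keeps CH(m_i, r) = CH(m'_i, r'), so the digest
        # sequence, and hence the signature, remains valid.
        rands[i] = self.ch.collision(sk_ch, blocks[i], rands[i], new_block)
        blocks[i] = new_block
        return blocks, rands
```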

1.2 Motivation for this Work

With sanitizable signatures, admissible blocks can be replaced arbitrarily by the sanitizer. However, this often makes sanitizers too powerful and thus may severely limit their applicability in various scenarios. To reduce the sanitizers' power, Klonowski and Lauks [21] introduced several extensions for sanitizable signatures, which limit the power of a sanitizer in several ways and thus eliminate the aforementioned concerns. In particular, they introduced extensions (1) limiting the set of possible modifications for an admissible block (\(\mathtt{LimitSet}\)), (2) forcing the sanitizer to make the same changes in logically linked admissible blocks (\(\mathtt{EnforceModif}\)), (3) limiting the sanitizer to modify at most k out of n admissible blocks (\(\mathtt{LimitNbModif}\)) and (4) forcing the sanitizer to construct fewer than \(\ell \) versions of a message (\(\mathtt{LimitNbSanit}\)). Later, Canard and Jambert [9] extended the security model of Brzuska et al. [4] to cover the aforementioned extensions (as [21] did not provide any model or proofs).

The LimitSet Extension. Although all of the aforementioned features improve the applicability of sanitizable signatures, we deem the LimitSet extension the most generally useful one (besides, it is the only extension related to the privacy property). Thus, in the remainder of this paper, we only consider the LimitSet extension and refer to schemes that implement this extension as extended sanitizable signature schemes (ESSS). In existing constructions, LimitSet is realized using cryptographic accumulators, a primitive that succinctly represents a set (as a so-called accumulator) and allows computing witnesses certifying membership for elements in the set. Basically, the set of admissible changes for such a block is accumulated, and the admissible block is replaced by the respective accumulator. Loosely speaking, the signer initially provides an element together with the witness, and sanitizing simply requires the sanitizer to exchange this element and the witness.

How to Define Privacy? Recall that for sanitizable signatures without extensions, privacy means that it should not be possible to recover the original message from a sanitized version. Now, what is the most reasonable definition of privacy given the LimitSet extension? It seems most natural to require that, given a (sanitized) signature, a LimitSet block does not leak any information about the remaining elements in the respective set (and thus no information about the original message). By carefully inspecting the security model for \(\mathsf{ESSS}\) in [9], we observe, however, that their privacy definition does not capture this. In fact, an \(\mathsf{ESSS}\) that reveals all elements of the sets corresponding to LimitSet blocks can be private in their model. One motivation for a weak definition of privacy in [9] might have been to preserve the implication from (proof-restricted) transparency (as in the original model from [4]). However, as it totally neglects any privacy guarantees for the LimitSet extension, a stronger privacy notion seems advantageous and often even required. In [6, 8] a stronger notion of privacy for sanitizable signatures—called (strong) unlinkability—has been introduced. This notion, when adapted to \(\mathsf{ESSS}\), indeed guarantees what we want to achieve. Yet, unlinkability induces a significant overhead for constructions supporting the LimitSet extension. As we will see later, the only unlinkable construction that supports the LimitSet extension [12] is rather inefficient and is only proven secure in a customized model which does not consider all security requirements of sanitizable signatures and thus does not represent an \(\mathsf{ESSS}\). In general, as we will show later, efficient unlinkable constructions of \(\mathsf{ESSS}\) seem hard to achieve. Taking all of this together, we conclude that, while the notion of privacy in [9] seems too weak, unlinkability seems too strong. Subsequently, we motivate why a stronger privacy notion (in between these two notions) that still allows efficient instantiations is important for practical applications.

Motivating Applications. We consider use cases where it is required to limit the sanitizer's abilities, while at the same time providing privacy with respect to verifiers. For instance, consider authenticity-preserving workflows that span multiple enterprises. Using \(\mathsf{ESSS}\), such workflows can be modeled as illustrated in Fig. 1, with a signer and a sanitizer per enterprise. Then, employees can—within some well-defined boundaries—act (in the role of the sanitizer) on behalf of their company, while also being accountable for their actions. At the same time, companies need not disclose sensitive business internals.

Fig. 1. Modeling a workflow using \(\mathsf{ESSS}\)

As a concrete example of such a workflow, envision that a bank signs a document containing a LimitSet block with authorized financial transactions for some company once every day. An employee of this company is then able to demonstrate the authorization of single transactions to subsequent enterprises via sanitization, while not being able to maliciously introduce new transactions. The company will clearly want employees to be accountable for revealing certain transactions, and transactions which were never revealed by sanitized versions of the original document to remain concealed. Observe that an \(\mathsf{ESSS}\) that is private according to [9] could reveal sensitive business internals upon signature verification (i.e., the unused transaction information). Another use case is the anonymization of (medical) data before publishing it: e.g., instead of removing the entire address information of some individual, one can replace the precise address with some larger region. To do so, one could define an admissible set with two elements, being the precise address and the region. This would greatly help to automate the sanitization and to reduce errors, which, in turn, improves the quality of sanitized documents (see footnote 2). As in the previous example, an \(\mathsf{ESSS}\) which is private according to the definition in [9] would allow the precise address to be reconstructed from a sanitized document.

1.3 Contribution

In this paper we take a closer look at the privacy definition for \(\mathsf{ESSS}\) in [9] as well as the unlinkability definitions in [6, 8] when applied to the security model for \(\mathsf{ESSS}\). We conclude that these notions are either not strict enough to cover the requirements outlined in the previous section or too strict to obtain practical schemes. Motivated by this, we introduce a stronger notion of privacy—denoted strong privacy—which explicitly considers privacy issues related to the \(\mathtt{LimitSet}\) extension. More precisely, our strengthened notion guarantees that the sets of allowed modifications remain concealed, while still allowing efficient instantiations. We show that privacy is strictly weaker than strong privacy and that unlinkability is strictly stronger than strong privacy. Most importantly, we show that efficient and secure ESSS providing strong privacy can be constructed in a black-box way from any sanitizable signature scheme that is secure in the models of [4, 18]. We do so by (1) proposing a generic conversion of sanitizable signatures to \(\mathsf{ESSS}\) which support the \(\mathtt{LimitSet}\) extension and (2) showing that instantiating the \(\mathtt{LimitSet}\) extension in this generic conversion with indistinguishable accumulators (as introduced in [16]) yields constructions that provide strong privacy.

2 Preliminaries and Notation

For the sake of compact notation, we often use the concatenation operator, e.g., \((a_i)_{i=1}^n\,||\,(b_i)_{i=1}^m := (a_1, \dots , a_n, b_1, \dots , b_m)\), and assume that concatenated sequences can later be uniquely decomposed (even when concatenating elements of different types and lengths). Let \(x \xleftarrow{R} X\) denote the operation that picks an element x uniformly at random from a finite set X. A function \(\epsilon : \mathbb {N}\rightarrow \mathbb {R}^+\) is called negligible if for all \(c > 0\) there is a \(k_0\) such that \(\epsilon (k) < 1/k^c\) for all \(k > k_0\). In the remainder of this paper, we use \(\epsilon \) to denote such a negligible function.
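The unique-decomposition assumption can be realized, for instance, by length-prefixing each element before concatenation; a minimal sketch (the concrete encoding is our choice and not prescribed by the paper):

```python
import struct

def encode(parts: list[bytes]) -> bytes:
    # Length-prefix each part so the concatenation decomposes uniquely,
    # even for parts of different lengths.
    return b"".join(struct.pack(">I", len(p)) + p for p in parts)

def decode(blob: bytes) -> list[bytes]:
    parts, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack(">I", blob[i:i + 4])
        parts.append(blob[i + 4:i + 4 + n])
        i += 4 + n
    return parts

assert decode(encode([b"a", b"", b"bc"])) == [b"a", b"", b"bc"]
```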

2.1 (Indistinguishable) Accumulators

Accumulators make it possible to represent a finite set \(\mathcal X\) of values as a single succinct accumulator \(\mathsf{acc}_\mathcal{X}\). For each value \(x \in \mathcal X\), one can efficiently obtain a membership witness \(\mathsf{wit}_{x}\) that certifies the membership of x in \(\mathsf{acc}_\mathcal{X}\), while this is infeasible for values \(y \notin \mathcal X\) (collision freeness). Indistinguishable accumulators [16] additionally require that neither the accumulator nor corresponding witnesses leak information about the accumulated set. Subsequently, we use the basic model for static accumulators from [16] and note that in general a trusted setup is assumed (i.e., \(\mathsf{AGen}\) is run by a TTP that discards the trapdoor \(\mathsf{sk}_\mathsf{acc}\)). However, if the party maintaining \(\mathsf{acc}_{\mathcal {X}}\) is trusted, as is the case within sanitizable signatures, using \(\mathsf{sk}_\mathsf{acc}\) may be useful, as it typically supports more efficient computations (the parameter \(\mathsf{sk}_\mathsf{acc}^\sim \) denotes the optional trapdoor, i.e., using the trapdoor does not influence the output distributions of the algorithms, and all algorithms also run without \(\mathsf{sk}_\mathsf{acc}\)).

Definition 1

(Accumulator [16]). An accumulator is a tuple of PPT algorithms (AGen, AEval, AWitCreate, AVerify) which are defined as follows:

  • AGen(\(1^\kappa, t\)): This algorithm takes a security parameter \(\kappa \) and a parameter t. If \(t \ne \infty \), then t is an upper bound for the number of accumulated elements. It returns a key pair \((\mathsf{sk}_\mathsf{acc}, \mathsf{pk}_\mathsf{acc})\), where \(\mathsf{sk}_\mathsf{acc}=\emptyset \) if no trapdoor exists.

  • AEval(\((\mathsf{sk}_\mathsf{acc}^\sim, \mathsf{pk}_\mathsf{acc})\), \(\mathcal {X}\)): This (probabilistic; see footnote 3) algorithm takes a key pair \((\mathsf{sk}_\mathsf{acc}^\sim, \mathsf{pk}_\mathsf{acc})\) and a set \(\mathcal {X}\) to be accumulated, and returns an accumulator \(\mathsf{acc}_{\mathcal {X}}\) together with some auxiliary information \(\mathsf{aux}\).

  • AWitCreate(\((\mathsf{sk}_\mathsf{acc}^\sim, \mathsf{pk}_\mathsf{acc})\), \(\mathsf{acc}_{\mathcal {X}}\), \(\mathsf{aux}\), x): This algorithm takes a key pair \((\mathsf{sk}_\mathsf{acc}^\sim, \mathsf{pk}_\mathsf{acc})\), an accumulator \(\mathsf{acc}_{\mathcal {X}}\), auxiliary information \(\mathsf{aux}\) and a value x. It returns \(\bot \) if \(x\notin \mathcal {X}\), and a witness \(\mathsf{wit}_{x}\) for x otherwise.

  • AVerify(\(\mathsf{pk}_\mathsf{acc}\), \(\mathsf{acc}_{\mathcal {X}}\), \(\mathsf{wit}_{x}\), x): This algorithm takes a public key \(\mathsf{pk}_\mathsf{acc}\), an accumulator \(\mathsf{acc}_{\mathcal {X}}\), a witness \(\mathsf{wit}_{x}\) and a value x. It returns \(\mathtt{true}\) if \(\mathsf{wit}_{x}\) is a witness for \(x\in \mathcal {X}\) and \(\mathtt{false}\) otherwise.
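For illustration, the interface of Definition 1 can be transcribed as follows (a sketch in Python; the method names mirror the paper's algorithms, everything else is our assumption):

```python
from abc import ABC, abstractmethod
from typing import Optional

class Accumulator(ABC):
    """Interface mirroring Definition 1; None stands in for a missing
    trapdoor or for the error symbol."""

    @abstractmethod
    def agen(self, kappa: int, t: Optional[int]):
        """Return (sk_acc, pk_acc); sk_acc is None if no trapdoor exists."""

    @abstractmethod
    def aeval(self, sk_acc, pk_acc, X: set):
        """Return (acc_X, aux); using the trapdoor sk_acc is optional."""

    @abstractmethod
    def awitcreate(self, sk_acc, pk_acc, acc_X, aux, x):
        """Return a witness for x, or None if x is not in X."""

    @abstractmethod
    def averify(self, pk_acc, acc_X, wit_x, x) -> bool:
        """Return True iff wit_x certifies membership of x in acc_X."""
```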

A secure indistinguishable accumulator is correct, collision-free and indistinguishable. We recall the definitions for collision freeness and indistinguishability below (see footnote 4).

Definition 2

(Collision Freeness). An accumulator is collision-free, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The collision-freeness experiment is rendered as figure a in the original.]
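Based on the surrounding description and the definitions in [16], the experiment can plausibly be reconstructed as follows (a reconstruction, not a verbatim copy of the figure):

```latex
\Pr\left[\,
  \begin{array}{l}
    (\mathsf{sk}_{\mathsf{acc}}, \mathsf{pk}_{\mathsf{acc}}) \leftarrow \mathsf{AGen}(1^\kappa, t),\\
    (\mathsf{wit}^*, x^*, \mathcal{X}^*, r^*) \leftarrow
      \mathcal{A}^{\mathcal{O}^{\mathsf{E}},\, \mathcal{O}^{\mathsf{W}}}(\mathsf{pk}_{\mathsf{acc}})
  \end{array}
  :\;
  \begin{array}{l}
    \mathsf{AVerify}(\mathsf{pk}_{\mathsf{acc}}, \mathsf{acc}^*, \mathsf{wit}^*, x^*) = \mathtt{true}~\wedge\\
    x^* \notin \mathcal{X}^*
  \end{array}
\,\right] \le \epsilon(\kappa)
```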

where \(\mathsf{acc}^* \leftarrow \mathsf{AEval}_{r^*}((\mathsf{sk}_{\mathsf{acc}}, \mathsf{pk}_{\mathsf{acc}}), \mathcal {X}^*)\). Here, \(\mathcal {O}^\mathsf{E}\) and \(\mathcal {O}^\mathsf{W}\) represent the oracles for the algorithms \(\mathsf{AEval}\) and \(\mathsf{AWitCreate}\), respectively. In the case of randomized accumulators, the adversary outputs randomness \(r^*\), which is omitted for deterministic accumulators. Likewise, the adversary can control the randomness r used by \(\mathcal {O}^\mathsf{E}\) for randomized accumulators; thus \(\mathcal {O}^\mathsf{E}\) takes an additional input r.

Definition 3

(Indistinguishability). An accumulator is indistinguishable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The indistinguishability experiment is rendered as figure b in the original.]
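As above, the experiment can plausibly be reconstructed from the surrounding description and [16] (a reconstruction, details are assumptions):

```latex
\Pr\left[\,
  \begin{array}{l}
    (\mathsf{sk}_{\mathsf{acc}}, \mathsf{pk}_{\mathsf{acc}}) \leftarrow \mathsf{AGen}(1^\kappa, t),~
    b \xleftarrow{R} \{0,1\},\\
    (\mathcal{X}_0, \mathcal{X}_1, \mathsf{state}) \leftarrow \mathcal{A}(\mathsf{pk}_{\mathsf{acc}}),\\
    (\mathsf{acc}_{\mathcal{X}_b}, \mathsf{aux}) \leftarrow
      \mathsf{AEval}((\mathsf{sk}_{\mathsf{acc}}, \mathsf{pk}_{\mathsf{acc}}), \mathcal{X}_b),\\
    b^* \leftarrow \mathcal{A}^{\mathcal{O}^{\mathsf{E}},\, \mathcal{O}^{\mathsf{W}}}
      (\mathsf{pk}_{\mathsf{acc}}, \mathsf{acc}_{\mathcal{X}_b}, \mathsf{state})
  \end{array}
  :\; b^* = b
\,\right] \le \frac{1}{2} + \epsilon(\kappa)
```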

where \(\mathcal {X}_0\) and \(\mathcal {X}_1\) are two distinct subsets of the accumulation domain. Here, \(\mathcal {O}^\mathsf{E}\) is defined as before, whereas \(\mathcal {O}^\mathsf{W}\) is restricted to queries for values \(x \in \mathcal {X}_0 \cap \mathcal {X}_1\). Furthermore, the input parameter \(\mathsf{aux}\) for \(\mathcal {O}^\mathsf{W}\) is kept up to date and is provided by the environment, since \(\mathcal {A}\) could trivially distinguish using \(\mathsf{aux}\).

It is obvious that the notion of indistinguishability requires a randomized \(\mathsf{AEval}\) algorithm. We stress that [16] also provides a variant of indistinguishability which adds this non-determinism to accumulators with a deterministic \(\mathsf{AEval}\) algorithm. To do so, an additional random value \(x_r\) from the accumulation domain is inserted into the accumulator upon \(\mathsf{AEval}\). This notion is called collision-freeness weakening (\(\mathsf{cfw}\)) indistinguishability, since collision freeness then only holds for \(\mathcal{X} \cup \{x_r\}\) and not for \(\mathcal X\).
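A sketch of this cfw transformation (ours; `acc` is assumed to follow the interface sketched above, and we assume that a sampled integer is a valid domain element):

```python
import secrets

def aeval_cfw(acc, sk_acc, pk_acc, X, domain_bits=256):
    # Randomize a deterministic AEval by additionally accumulating a
    # fresh random domain element x_r. Collision freeness now only
    # holds for X ∪ {x_r}, hence "collision-freeness weakening".
    x_r = secrets.randbits(domain_bits)
    acc_X, aux = acc.aeval(sk_acc, pk_acc, set(X) | {x_r})
    return acc_X, aux  # x_r is discarded and never revealed
```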

3 Formalizing Extended Sanitizable Signatures

In this section, we present a formal model for \(\mathsf{ESSS}\). Our model can be seen as a rigorous formalization of the model for \(\mathsf{ESSS}\) presented in [9]. Additionally, we include the suggestions from [18], i.e., we additionally consider forgeries where one only tampers with \(\mathsf{{ADM}}\). We stress that, when omitting the extensions regarding LimitSet and ADM, our model is equivalent to the model of [4], which is generally considered the standard model for sanitizable signature schemes.

Definition 4

(Message). A message \(\mathsf{m} = (\mathsf{m}_i)_{i=1}^n\) is a sequence of n bitstrings (message blocks).

Henceforth, we use \(\ell _i\) to refer to the (maximum) length of message block \(\mathsf {m}_i\) and assume an encoding that allows deriving \((\ell _i)_{i=1}^n\) from \(\mathsf {m}\).

Definition 5

(Admissible Modifications). Admissible modifications \(\mathsf{ADM}\) with respect to a message \(\mathsf{m} = (\mathsf{m}_i)_{i=1}^n\) are represented as a sequence \(\mathsf{ADM} = (\mathsf{B}_i)_{i=1}^n\), with \(\mathsf{B}_i \in \{\mathtt{fix, var, lim}\}\).

Here \(\mathsf{B}_i = \mathtt{fix}\) indicates that no changes are allowed, \(\mathsf{B}_i = \mathtt{var}\) indicates that arbitrary replacements are allowed, and \(\mathsf{B}_i = \mathtt{lim}\) indicates that the replacements are limited to a predefined set (\(\mathtt{LimitSet}\)).

Definition 6

(Set Limitations). Set limitations \(\mathsf{V}\) with respect to a message \(\mathsf{m} = (\mathsf{m}_i)_{i=1}^n\) and admissible modifications \(\mathsf{ADM} = (\mathsf{B}_i)_{i=1}^n\) are represented by a set \(\mathsf{V} = \{(i, \mathsf{M}_i) : \mathsf{B}_i = \mathtt{lim} ~\wedge ~ \mathsf{M}_i \subset \bigcup _{j =0}^{\ell _i} \{0,1\}^j\}\).

We write \(\mathsf{m} \rightarrow _{(\mathsf{ADM}, \mathsf{V})} \mathsf{m'}\) to denote that \(\mathsf{m'}\) can be derived from \(\mathsf{m}\) under \(\mathsf{ADM}\) and \(\mathsf{V}\).

Definition 7

(Witnesses). Witnesses \(\mathcal{W} = \{(i, \mathcal{W}_i)\}_{i=1}^t\), with \(\mathcal{W}_i = \{(m_{i_1},\) \(\mathsf{wit}_{i_1}), \dots , (m_{i_k}, \mathsf{wit}_{i_k})\}\), are derived from set limitations \(\mathsf{V} = \{(i, \mathsf{M}_i)\}_{i=1}^t\), with \(\mathsf{M}_i = \{m_{i_1},\dots , m_{i_k}\}\). Thereby, \(\mathsf{wit}_{i_j}\) attests that its corresponding message block \(m_{i_j}\) is contained in the set \(\mathsf{M}_{i}\).

With \(\mathcal{V} \leftarrow \mathcal{W}|_{\mathsf{m}}\), we denote the extraction of the set of witnesses \(\mathcal V\) corresponding to a message \(\mathsf{m}\) from the set \(\mathcal W\).

Definition 8

(Modification Instructions). Modification instructions \(\mathsf{MOD}\), with respect to a message \(\mathsf{m} = (\mathsf{m}_i)_{i=1}^n\), admissible modifications \(\mathsf{ADM}\) and set limitations \(\mathsf{V}\) are represented by a set \(\mathsf{MOD} = \{(i, \mathsf{m}_i')\}_{i=1}^t\) with \(t \le n\), where i refers to the position of the message block in \(\mathsf{m}\), and \(\mathsf{m}_i'\) is the new content for message block \(\mathsf{m}_i\).

With \(\mathsf{MOD} \preceq (\mathsf{ADM}, \mathsf{V})\), we denote that the modification instructions in \(\mathsf{MOD}\) are compatible with \(\mathsf{ADM}\) and \(\mathsf{V}\). Furthermore, with \((\mathsf{m}_0, \mathsf{MOD}_0, \mathsf{ADM}, \mathsf{V}) \equiv (\mathsf{m}_1, \mathsf{MOD}_1,\mathsf{ADM}, \mathsf{V})\), we denote that after applying the changes in \(\mathsf{MOD}_0\) and \(\mathsf{MOD}_1\) to \(\mathsf{m}_0\) and \(\mathsf{m}_1\) respectively, the resulting messages \(\mathsf{m}_0'\) and \(\mathsf{m}_1'\) are identical.
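The following sketch illustrates Definitions 5, 6 and 8 together with the compatibility check \(\mathsf{MOD} \preceq (\mathsf{ADM}, \mathsf{V})\) (our own encoding, with 0-based indices instead of the paper's 1-based ones):

```python
FIX, VAR, LIM = "fix", "var", "lim"

def mod_compatible(MOD, ADM, V):
    """Check MOD ⪯ (ADM, V): every instruction must target a var block,
    or a lim block whose new content lies in the limited set M_i."""
    limits = dict(V)  # {i: M_i} for the lim blocks
    for i, new_block in MOD:
        if ADM[i] == VAR:
            continue
        if ADM[i] == LIM and new_block in limits.get(i, set()):
            continue
        return False  # fix block, or replacement outside M_i
    return True

# Block 0 is fixed, block 1 is freely modifiable, block 2 is limited.
ADM = [FIX, VAR, LIM]
V = {(2, frozenset({b"London", b"UK"}))}
assert mod_compatible([(1, b"x"), (2, b"UK")], ADM, V)
assert not mod_compatible([(0, b"y")], ADM, V)
```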

3.1 The Model

An \(\mathsf{ESSS}\) is a tuple of PPT algorithms \((\mathsf{KeyGen}_{\mathsf{sig}},\mathsf{KeyGen}_\mathsf{san},\mathsf{Sign, Sanit, Verify,} \mathsf{Proof}, \mathsf{Judge})\) which are defined as follows:

  • \(\mathsf{KeyGen}_\mathsf{sig}\) ( \(1^\kappa \) ): This algorithm takes as input a security parameter \(\kappa \) and outputs a keypair \((\mathsf{sk}_\mathsf{sig},\) \(\mathsf{pk}_\mathsf{sig})\) for the signer.

  • \(\mathsf{KeyGen}_\mathsf{san}\) ( \(1^\kappa \) ): This algorithm takes as input a security parameter \(\kappa \) and outputs a keypair \((\mathsf{sk}_\mathsf{san},\) \(\mathsf{pk}_\mathsf{san})\) for the sanitizer.

  • Sign(\(\mathsf{m}, \mathsf{ADM}\), V, (\(\mathsf{sk}_\mathsf{sig}\), \(\mathsf{pk}_\mathsf{sig}\)), \(\mathsf{pk}_\mathsf{san}\)): This algorithm takes as input a message \(\mathsf{m}\), corresponding admissible modifications \(\mathsf{ADM}\) and set limitations \(\mathsf{V}\), as well as the keypair \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig})\) of the signer and the verification key \(\mathsf{pk}_\mathsf{san}\) of the sanitizer. It computes the set \(\mathcal W\) from \(\mathsf{V}\), extracts \(\mathcal{V}\) from \(\mathcal{W}\), and outputs a signature \(\sigma = (\hat{\sigma }, \mathcal{V})\) together with some auxiliary sanitization information \(\mathsf{san} = (\mathsf{aux}, \mathcal{W})\) (see footnote 5). In case of an error, \(\bot \) is returned. As in [4], we assume that \(\mathsf{ADM}\) can be recovered from a signature \(\sigma \).

  • Sanit(\((\mathsf{m}, \sigma ), \mathsf{MOD}, \mathsf{san}, \mathsf{pk}_\mathsf{sig}, \mathsf{sk}_\mathsf{san}\)): This algorithm takes as input a valid message-signature pair \((\mathsf{m}, \sigma )\), modification instructions \(\mathsf{MOD}\), some auxiliary sanitization information \(\mathsf{san}\), as well as the verification key \(\mathsf{pk}_\mathsf{sig}\) of the signer and the signing key \(\mathsf{sk}_\mathsf{san}\) of the sanitizer. It modifies \(\mathsf{m}\) and \(\sigma \) according to \(\mathsf{MOD}\) and outputs an updated message-signature pair \((\mathsf{m}', \sigma ')\), or \(\bot \) if \(\mathsf{MOD} \not\preceq (\mathsf{ADM}, \mathsf{V})\). We assume that \(\mathsf{V}\) can be reconstructed from \(\mathsf{san}\).

  • Verify ( \((\mathsf{m}, \sigma ), \mathsf{pk}_\mathsf{sig}, \mathsf{pk}_\mathsf{san}\) ): This algorithm takes as input a message-signature pair (\(\mathsf{m}\), \(\sigma \)) and the public verification keys of the signer \(\mathsf{pk}_\mathsf{sig}\) and the sanitizer \(\mathsf{pk}_\mathsf{san}\). It returns true if \(\sigma \) is a valid signature on \(\mathsf{m}\) under \(\mathsf{pk}_\mathsf{sig}\) and \(\mathsf{pk}_\mathsf{san}\), and false otherwise.

  • Proof ( \((\mathsf{m}, \sigma ), \{(\mathsf{m}_j, \sigma _j)\}_{j=1}^{q}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}), \mathsf{pk}_\mathsf{san}\) ): This algorithm takes as input a message-signature pair \((\mathsf{m}, \sigma )\), q message-signature pairs \(\{(\mathsf{m}_j, \sigma _j)\}_{j=1}^{q}\) created by the signer, the keypair \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig})\) of the signer and the public key \(\mathsf{pk}_\mathsf{san}\) of the sanitizer and outputs a proof \(\pi \).

  • Judge ( \((\mathsf{m}, \sigma ), \mathsf{pk}_\mathsf{sig}, \mathsf{pk}_\mathsf{san}, \pi \) ): This algorithm takes as input a message-signature pair \((\mathsf{m}, \sigma )\), the verification keys of the signer \(\mathsf{pk}_\mathsf{sig}\) and the sanitizer \(\mathsf{pk}_\mathsf{san}\) and a proof \(\pi \). It outputs a decision \(d \in \{\mathtt{sig, san}\}\), indicating whether the signature has been produced by the signer or the sanitizer.
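To see how the seven algorithms interact, consider the following hypothetical end-to-end run (`esss` is an assumed object exposing the algorithms above with the listed signatures; nothing here is a concrete implementation):

```python
def demo_flow(esss, m, ADM, V, MOD, signed_pairs, kappa=128):
    """Hypothetical end-to-end run through the ESSS interface."""
    sk_sig, pk_sig = esss.keygen_sig(kappa)
    sk_san, pk_san = esss.keygen_san(kappa)
    # Signer: ADM and V are fixed at signing time; san carries aux
    # and the witnesses needed for later sanitization.
    sigma, san = esss.sign(m, ADM, V, (sk_sig, pk_sig), pk_san)
    # Sanitizer: apply MOD (fails if MOD is not admissible).
    m2, sigma2 = esss.sanit((m, sigma), MOD, san, pk_sig, sk_san)
    assert esss.verify((m2, sigma2), pk_sig, pk_san)
    # Accountability: the signer proves who produced (m2, sigma2).
    pi = esss.proof((m2, sigma2), signed_pairs, (sk_sig, pk_sig), pk_san)
    return esss.judge((m2, sigma2), pk_sig, pk_san, pi)  # "sig" or "san"
```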

3.2 Security Properties

For security, an \(\mathsf{ESSS}\) is required to fulfill the following properties.

Definition 9

(Correctness). An \(\mathsf{ESSS}\) is correct, if all honestly generated signatures and sanitizations verify under \(\mathsf{Verify}\), and if \(\mathsf{Judge}\), given an honestly generated proof, correctly attributes a signature to the signer or the sanitizer (cf. [4] for the formal conditions).

Definition 10

(Unforgeability). An \(\mathsf{ESSS}\) is unforgeable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot ) \) such that:

[The unforgeability experiment is rendered as figure c in the original.]

where \(\mathcal {O}^\mathsf{Sign}\), \(\mathcal {O}^\mathsf{Sanit}\) and \(\mathcal {O}^\mathsf{Proof}\) simulate the \(\mathsf{Sign}\), \(\mathsf{Sanit}\) and \(\mathsf{Proof}\) algorithms, respectively. The environment keeps track of the queries to \(\mathcal {O}^\mathsf{Sign}\) using \(L^\mathsf{Sign}\). Furthermore, it maintains a list \(L^\mathsf{Sanit}\) containing the answers of \(\mathcal {O}^\mathsf{Sanit}\) extended with \(\mathsf{pk}_\mathsf{sig}\) and \(\mathsf{ADM}\) from the respective oracle query.

Definition 11

(Immutability). An \(\mathsf{ESSS}\) is immutable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The immutability experiment is rendered as figure d in the original.]

where the oracles and the environment variables are as in Definition 10.

Definition 12

(Privacy). An \(\mathsf{ESSS}\) is private, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The privacy experiment is rendered as figure e in the original.]

where \(\mathcal {O}^\mathsf{Sign}\), \(\mathcal {O}^\mathsf{Sanit}\) and \(\mathcal {O}^\mathsf{Proof}\) are as in Definition 10. \(\mathcal {O}^\mathsf{LoRSanit}\) is defined as follows:

\(\mathcal {O}^\mathsf{LoRSanit}((\mathsf{m}_0,\mathsf{MOD}_0), (\mathsf{m}_1, \mathsf{MOD}_1), \mathsf {ADM}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}), (\mathsf{sk}_\mathsf{san}, \mathsf{pk}_\mathsf{san}), b)\!\!:\)

  • 1: Randomly choose \(\mathsf {V}\) (compatible with \(\mathsf{MOD}_0\) and \(\mathsf{MOD}_1\)).

  • 2: If \(\mathsf{MOD}_0 \not \preceq (\mathsf{ADM}, \mathsf{V}) ~\vee ~ \mathsf{MOD}_1 \not \preceq (\mathsf{ADM}, \mathsf{V})\), return \(\bot \).

  • 3: If \((\mathsf{m}_0, \mathsf{MOD}_0, \mathsf{ADM}, \mathsf{V}) \not \equiv (\mathsf{m}_1, \mathsf{MOD}_1,\mathsf{ADM}, \mathsf{V})\), return \(\bot \).

  • 4: Compute \((\sigma _b, \mathsf{san}_b) \leftarrow \mathsf{Sign} \mathbf{(}\mathsf{m}_b,\) \(\mathsf{ADM}, \mathsf{V}\), \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}),\) \(\mathsf{pk}_\mathsf{san}\mathbf{)}\).

  • 5: Return \((\mathsf{m}_b', \sigma _b') \leftarrow \mathsf{Sanit} \mathbf{(}(\mathsf{m}_b, \sigma _b),\) \(\mathsf{MOD}_b, \mathsf{san}_b, \mathsf{pk}_\mathsf{sig}, \mathsf{sk}_\mathsf{san}\mathbf{)}\).

Observe that, since \(\mathsf{V}\) is chosen internally (and thus independently of the bit b) in \(\mathcal {O}^\mathsf{LoRSanit}\), privacy holds independently of the adversary's capability to reconstruct the set limitations. Clearly, this contradicts a definition of privacy in the sense that sanitized signatures do not reveal the original message.
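The following sketch of \(\mathcal {O}^\mathsf{LoRSanit}\) makes the issue explicit (the helpers `sample_V`, `compatible` and `apply` are hypothetical stand-ins for the checks in steps 1-3):

```python
def lor_sanit_weak(esss, q0, q1, ADM, sig_keys, san_keys, b, sample_V):
    """Sketch of O^LoRSanit from Definition 12. V is sampled inside the
    oracle, independently of b, so even a scheme that leaks the whole
    limited set gives the adversary no advantage here. `sample_V`,
    `compatible` and `apply` are hypothetical helpers."""
    (m0, MOD0), (m1, MOD1) = q0, q1
    (sk_sig, pk_sig), (sk_san, pk_san) = sig_keys, san_keys
    V = sample_V(MOD0, MOD1)                                      # step 1
    if not (compatible(MOD0, ADM, V) and compatible(MOD1, ADM, V)):
        return None                                               # step 2
    if apply(m0, MOD0) != apply(m1, MOD1):
        return None                                               # step 3
    mb, MODb = (m0, MOD0) if b == 0 else (m1, MOD1)
    sigma, san = esss.sign(mb, ADM, V, (sk_sig, pk_sig), pk_san)  # step 4
    return esss.sanit((mb, sigma), MODb, san, pk_sig, sk_san)     # step 5
```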

Definition 13

(Transparency). An \(\mathsf{ESSS}\) is transparent, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The transparency experiment is rendered as figure f in the original.]

where \(\mathcal {O}^\mathsf{Sign}\), \(\mathcal {O}^\mathsf{Sanit}\) and \(\mathcal {O}^\mathsf{Proof}\) are as in Definition 10. In addition, \(\mathcal {O}^\mathsf{Proof}\) does not respond to queries for message-signature pairs obtained using \(\mathcal {O}^\mathsf{Sanit/Sign}\). \(\mathcal {O}^\mathsf{Sanit/Sign}\) is defined as follows:

\(\mathcal {O}^\mathsf{Sanit/Sign}(\mathsf {m}, \mathsf {ADM}, \mathsf {V}, \mathsf {MOD}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}), (\mathsf{sk}_\mathsf{san}, \mathsf{pk}_\mathsf{san}), b)\):

  • 1: If \(\mathsf{MOD} \not \preceq (\mathsf{ADM}, \mathsf{V})\), return \(\bot \).

  • 2: Compute \((\sigma , \mathsf{san}) \leftarrow \mathsf{Sign} \mathbf{(}\mathsf{m},\) \(\mathsf{ADM}, \mathsf{V}\), \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}),\) \(\mathsf{pk}_\mathsf{san}\mathbf{)}\).

  • 3: Compute \((\mathsf{m}', \sigma _0) \leftarrow \mathsf{Sanit} \mathbf{(}(\mathsf{m}, \sigma ),\) \(\mathsf{MOD}, \mathsf{san}, \mathsf{pk}_\mathsf{sig}, \mathsf{sk}_\mathsf{san}\mathbf{)}\).

  • 4: Compute \((\sigma _1, \mathsf{san}) \leftarrow \mathsf{Sign} \mathbf{(}\mathsf{m}', \mathsf{ADM}, \mathsf{V}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}), \mathsf{pk}_\mathsf{san}\mathbf{)}\).

  • 5: Return \((\mathsf {m}', \sigma _b)\).

Proof-Restricted Transparency [6]: \(\mathcal {O}^\mathsf{Proof}\) does not answer queries for messages returned by \(\mathcal {O}^\mathsf{Sanit/Sign}\). In the proof that transparency implies privacy [4], \(\mathcal {O}^\mathsf{Sanit/Sign}\) is used to simulate the \(\mathcal {O}^\mathsf{LoRSanit}\) queries. Consequently, the implication only holds if the privacy adversary is restricted to \(\mathcal {O}^\mathsf{Proof}\) queries for messages which do not originate from \(\mathcal {O}^\mathsf{LoRSanit}\). To additionally rule out even stronger adversaries against privacy, i.e., to ensure that privacy also holds after seeing proofs for the messages in question, one would need to prove privacy directly.

Definition 14

(Sanitizer-Accountability). An \(\mathsf{ESSS}\) is sanitizer-accountable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

where the oracles are as in Definition 10. The environment maintains a list \(\mathtt{SIG}\), containing all message-signature tuples obtained from \(\mathcal {O}^\mathsf{Sign}\).

Definition 15

(Signer-Accountability). An \(\mathsf{ESSS}\) is signer-accountable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

where \(\mathcal {O}^\mathsf{Sanit}\) as well as \(L^\mathsf{Sanit}\) are as in Definition 10.

4 Rethinking Privacy for \(\mathsf{ESSS}\)

In the following, we consider alternatives to the standard privacy property, i.e., (strong) unlinkability, and finally come up with a notion, denoted strong privacy, which captures privacy for \(\mathsf{ESSS}\) in the original sense of sanitizable signatures.

4.1 Revisiting Unlinkability

The notion of unlinkability for sanitizable signatures has been introduced in [6] as a stronger notion of privacy (it implies the usual privacy property). In [8], an even stronger notion, i.e., strong unlinkability, has been proposed; it requires that unlinkability holds even against the signer. The notions defined in [6, 8] can easily be adapted to the model for \(\mathsf{ESSS}\), and we do so below.

Definition 16

(Unlinkability). An \(\mathsf{ESSS}\) is unlinkable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The unlinkability experiment is rendered as figure g in the original.]

where \(\mathcal {O}^\mathsf{Sign}\), \(\mathcal {O}^\mathsf{Sanit}\) and \(\mathcal {O}^\mathsf{Proof}\) are as in Definition 10 and \(\mathcal {O}^\mathsf{LoRSanit}\) operates as follows:

\(\mathcal {O}^\mathsf{LoRSanit}((\mathsf{m}_0,\mathsf{MOD}_0, \mathsf{san}_0, \sigma _0), (\mathsf{m}_1, \mathsf{MOD}_1, \mathsf{san}_1, \sigma _1), \mathsf {ADM}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}),\) 

\((\mathsf{sk}_\mathsf{san}, \mathsf{pk}_\mathsf{san}), b)\):

  • 1: If \(\mathsf{MOD}_0 \not \preceq (\mathsf{ADM}, \mathsf{V_0}) ~\vee ~ \mathsf{MOD}_1 \not \preceq (\mathsf{ADM}, \mathsf{V_1})\), return \(\bot \).

  • 2: If \((\mathsf{m}_0, \mathsf{MOD}_0, \mathsf{ADM}, \mathsf{V}_0) \not \equiv (\mathsf{m}_1, \mathsf{MOD}_1,\mathsf{ADM}, \mathsf{V}_1)\), return \(\bot \).

  • 3: If for any \(i \in \{0,1\}\), \(\mathsf{{Verify}}((\mathsf{{m}}_i, \sigma _i), \mathsf{{pk}}_\mathsf{{sig}}, \mathsf{{pk}}_\mathsf{{san}}) = \mathtt{{false}}\), return \(\bot \)

  • 4: Return \((\mathsf{m}_b', \sigma _b') \leftarrow \mathsf{Sanit} \mathbf{(}(\mathsf{m}_b, \sigma _b),\) \(\mathsf{MOD}_b, \mathsf{san}_b, \mathsf{pk}_\mathsf{sig}, \mathsf{sk}_\mathsf{san}\mathbf{)}\).

Note that \(\mathsf{V}_0\) and \(\mathsf{V}_1\) can be reconstructed from \(\mathsf{san}_0\) and \(\mathsf{san}_1\), respectively. Furthermore, note that for answers from the oracle \(\mathcal {O}^\mathsf{LoRSanit}\), the oracle \(\mathcal {O}^\mathsf{Sanit}\) is restricted to queries for modifications which are covered by both set limitations \(\mathsf{V}_0\) and \(\mathsf{V}_1\), which were initially submitted to \(\mathcal {O}^\mathsf{LoRSanit}\).

Definition 17

(Strong Unlinkability). An \(\mathsf{ESSS}\) is strongly unlinkable, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The strong unlinkability experiment is rendered as figure h in the original.]

where the oracles are as in Definition 16, except that \(\mathcal {A}\) controls \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig})\).

While (strong) unlinkability covers privacy for the \(\mathtt{LimitSet}\) extension in the original sense of privacy (see footnote 6), it seems very hard to construct efficient (strongly) unlinkable schemes that support the LimitSet extension. Unfortunately, it is not possible to simply extend existing (strongly) unlinkable constructions [6, 8, 17] by the LimitSet extension. To illustrate why, we revisit the design principle of such schemes. Here, upon \(\mathsf{Sign}\), the signer issues two signatures. The first signature, \(\sigma _{{FIX}}\), only covers the fixed message blocks and the public key of the sanitizer, whereas the second signature, \(\sigma _{{FULL}}\), covers the whole message together with the public key of the signer (and the public key of the sanitizer [8]). Upon \(\mathsf{Sanit}\), the sanitizer simply issues a new signature \(\sigma _{{FULL}}\), whereas the signature \(\sigma _{{FIX}}\) remains unchanged. Finally, upon \(\mathsf{Verify}\), one verifies whether \(\sigma _{{FIX}}\) is valid under \(\mathsf{pk}_\mathsf{sig}\) and \(\sigma _{{FULL}}\) is either valid under \(\mathsf{pk}_\mathsf{sig}\) or \(\mathsf{pk}_\mathsf{san}\) for a given message \(\mathsf{m}\) and \(\mathsf{ADM}\). Thereby, the scheme used for \(\sigma _{{FIX}}\) is a deterministic signature scheme, while the scheme used for \(\sigma _{{FULL}}\) can be a deterministic signature scheme [8], a group/ring signature scheme [6], or a signature scheme with rerandomizable keys [17].
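A minimal sketch of this two-signature design (ours; `sig_det` and `sig_full` are assumed signing interfaces, and `encode` is the length-prefix helper sketched in Sect. 2):

```python
def sign_two_sig(sig_det, sig_full, sk_sig, pk_sig, pk_san, fixed, m):
    # sigma_FIX binds the fixed blocks and pk_san under a deterministic
    # scheme; sigma_FULL covers the whole message and both public keys.
    sigma_fix = sig_det.sign(sk_sig, encode(fixed + [pk_san]))
    sigma_full = sig_full.sign(sk_sig, encode(m + [pk_sig, pk_san]))
    return sigma_fix, sigma_full

def sanit_two_sig(sig_full, sk_san, pk_sig, pk_san, sigma_fix, m_new):
    # The sanitizer reissues only sigma_FULL and reuses sigma_FIX
    # verbatim; this reuse is exactly what leaks once V is embedded
    # in sigma_FIX, as discussed next.
    sigma_full = sig_full.sign(sk_san, encode(m_new + [pk_sig, pk_san]))
    return sigma_fix, sigma_full
```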

When extending these schemes to also support the LimitSet extension, it is clear that the set limitations need to be fixed by the signer and must not be modifiable by the sanitizer. One simple way to realize the LimitSet extension would be to additionally include some unique encoding of the limitations \(\mathsf{V}\) in \(\sigma _{{FIX}}\) and to check upon Verify whether the message is consistent with the defined limitations. Obviously, this extension influences neither unforgeability nor immutability, and the scheme is still (publicly) accountable. Furthermore, privacy also holds, since the set limitations which are included in the challenge tuple in the privacy game are randomly chosen inside the \(\mathcal {O}^\mathsf{LoRSanit}\) oracle. However, unlinkability cannot hold, for the following reason: When querying the oracle \(\mathcal {O}^\mathsf{LoRSanit}\) in the unlinkability game, the adversary can choose set limitations \(\mathsf{V}_0\) and \(\mathsf{V}_1\) such that \(\mathsf{MOD}_0 \preceq (\mathsf{ADM},\) \(\mathsf{V}_0)\), \(\mathsf{MOD}_1 \preceq (\mathsf{ADM}, \mathsf{V}_1)\) and \((\mathsf{m}_0, \mathsf{MOD}_0,\) \(\mathsf{ADM}, \mathsf{V}_0) \equiv (\mathsf{m}_1, \mathsf{MOD}_1,\) \(\mathsf{ADM},\) \(\mathsf{V}_1)\), but \(\mathsf{V}_0 \ne \mathsf{V}_1\). For the corresponding signatures \(\sigma _0 = (\sigma _{{{FIX}}_0}, \sigma _{{{FULL}}_0})\), \(\sigma _1 = (\sigma _{{{FIX}}_1},\) \(\sigma _{{{FULL}}_1})\) submitted to the oracle, this means that \(\sigma _{{{FIX}}_0} \ne \sigma _{{{FIX}}_1}\), which yields a trivial distinguisher for the unlinkability game.

As an alternative, one may think of separately signing each message contained in the limited sets (using a deterministic signature scheme), where only the signatures corresponding to the chosen messages are revealed. However, to prevent forgeries where message blocks are re-used in other signatures (i.e., mix-and-match-like attacks [5]), it would be required to also include some message-specific identifier in each signature. Again, it is easy to see that this would provide a trivial distinguisher for the (strong) unlinkability game.

Clearly, the requirement that the limited sets are fixed by the signer and cannot be modified later is not specific to the aforementioned constructions, but inherent to all constructions of such schemes. To circumvent the aforementioned issues, one could make use of more sophisticated primitives, which, however, come at the cost of significant computational overhead and complexity of the scheme. This is confirmed by the only known unlinkable construction supporting LimitSet [12]: it is computationally very expensive due to a high number of bilinear map applications and the use of non-interactive zero-knowledge proofs of knowledge in the (expensive) target group of the bilinear map. Moreover, it is proven secure only in a model which does not consider all security requirements of sanitizable signatures (as it is tailored to their black-box construction of anonymous credentials) and thus does not represent an \(\mathsf{ESSS}\).

4.2 A Strengthened Notion for Privacy

Surprisingly, our requirement that the set limitations remain concealed can be met by a simple extension of the conventional privacy property. We call the extended property strong privacy (see footnote 7). As we will see, this modification allows one to obtain efficient instantiations from existing secure ones in a black-box fashion. We modify the privacy game such that the set limitations in \(\mathcal {O}^\mathsf{LoRSanit}\) can be submitted per message, i.e., \(\mathcal {O}^\mathsf{LoRSanit}\) takes \((\mathsf{m}_0,\mathsf{MOD}_0, \mathsf{V}_0), (\mathsf{m}_1,\) \(\mathsf{MOD}_1,\) \(\mathsf{V}_1),\) \(\mathsf{ADM}\). This means that \(\mathsf{V}_0\) and \(\mathsf{V}_1\) can be different and only need to overlap such that, after applying \(\mathsf{MOD}_0\) and \(\mathsf{MOD}_1\), the messages \(\mathsf{m}_0'\) and \(\mathsf{m}_1'\) are identical. More formally, the game is defined as follows:

Definition 18

(Strong Privacy). An \(\mathsf{ESSS}\) is strongly private, if for all PPT adversaries \(\mathcal {A}\) there is a negligible function \(\epsilon (\cdot )\) such that:

[The strong privacy experiment is rendered as figure i in the original.]

where the oracles \(\mathcal {O}^\mathsf{Sign}\), \(\mathcal {O}^\mathsf{Sanit}\) and \(\mathcal {O}^\mathsf{Proof}\) are defined as in Definition 10. The oracle \(\mathcal {O}^\mathsf{LoRSanit}\) is defined as follows:

\(\mathcal {O}^\mathsf{LoRSanit}((\mathsf{m}_0,\mathsf{MOD}_0, \mathsf {V}_0), (\mathsf{m}_1, \mathsf{MOD}_1, \mathsf {V}_1), \mathsf {ADM}, (\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}), (\mathsf{sk}_\mathsf{san}, \mathsf{pk}_\mathsf{san}), b)\!\!:\)

  • 1: If \(\mathsf{MOD}_0 \not \preceq (\mathsf{ADM}, \mathsf{V}_0) ~\vee ~ \mathsf{MOD}_1 \not \preceq (\mathsf{ADM}, \mathsf{V}_1)\), return \(\bot \).

  • 2: If \((\mathsf{m}_0, \mathsf{MOD}_0, \mathsf{ADM}, \mathsf{V}_0) \not \equiv (\mathsf{m}_1, \mathsf{MOD}_1,\mathsf{ADM}, \mathsf{V}_1)\), return \(\bot \).

  • 3: Compute \((\sigma _b, \mathsf{san}_b) \leftarrow \mathsf{Sign} \mathbf{(}\mathsf{m}_b,\) \(\mathsf{ADM}, \mathsf{V}_b\), \((\mathsf{sk}_\mathsf{sig}, \mathsf{pk}_\mathsf{sig}),\) \(\mathsf{pk}_\mathsf{san}\mathbf{)}\).

  • 4: Return \((\mathsf{m}_b', \sigma _b') \leftarrow \mathsf{Sanit} \mathbf{(}(\mathsf{m}_b, \sigma _b),\) \(\mathsf{MOD}_b, \mathsf{san}_b, \mathsf{pk}_\mathsf{sig}, \mathsf{sk}_\mathsf{san}\mathbf{)}\).

Note that for answers from the oracle \(\mathcal {O}^\mathsf{LoRSanit}\), the oracle \(\mathcal {O}^\mathsf{Sanit}\) is restricted to queries for modifications which are covered by both set limitations \(\mathsf{V}_0\) and \(\mathsf{V}_1\), which were initially submitted to \(\mathcal {O}^\mathsf{LoRSanit}\).

Theorem 1

Privacy is strictly weaker than strong privacy, while (strong) unlinkability is strictly stronger than strong privacy.

As mentioned in [9], the extension of the model regarding LimitSet does not influence the relations between the properties shown in [4]. That is, unforgeability is implied by accountability, (proof-restricted) privacy is implied by (proof-restricted) transparency, and immutability is still independent of the other properties. What remains for the proof of Theorem 1 is to establish the relations between strong privacy and the other privacy-related notions. We subsequently prove a number of lemmas to obtain the desired result.

Lemma 1

Not every transparent \(\mathsf{ESSS}\) is strongly private.

We prove Lemma 1 by counterexample.

Proof

Let us consider an instantiation of Scheme 1 with a correct, unforgeable, immutable, private, (proof-restricted) transparent and accountable sanitizable signature scheme. Further, assume that the accumulator scheme is not indistinguishable. Then, an adversary against indistinguishability directly yields an adversary against strong privacy, as it can tell the two candidate set limitations apart from the accumulators contained in the challenge signature.\(\quad \square \)

From this proof, we can straightforwardly derive:

Corollary 1

Not every private \(\mathsf{ESSS}\) is strongly private.

To show that strong privacy is a strictly stronger notion than privacy, we additionally need to show that the following lemma holds.

Lemma 2

Every strongly private \(\mathsf{ESSS}\) is also private.

To prove this, we show that we can construct an efficient adversary \(\mathcal {A}^\mathsf{SP}\) against strong privacy using an efficient adversary \(\mathcal {A}^\mathsf{P}\) against privacy.

Proof

\(\mathcal {A}^\mathsf{SP}\) simply forwards the calls to the oracles \(\mathcal {O}^\mathsf{Sign}, \mathcal {O}^\mathsf{Sanit}, \mathcal {O}^\mathsf{Proof}\), whereas the oracle \(\mathcal {O}^\mathsf{LoRSanit}\) is simulated as follows: Upon every query \((\mathsf{m}_0,\) \(\mathsf{MOD}_0)\), \((\mathsf{m}_1, \mathsf{MOD}_1)\), \(\mathsf{ADM}\) of \(\mathcal {A}^\mathsf{P}\), \(\mathcal {A}^\mathsf{SP}\) internally chooses random set limitations \(\mathsf{V}\) such that \(\mathsf{MOD}_0 \preceq (\mathsf{ADM},\) \(\mathsf{V})\) and \(\mathsf{MOD}_1 \preceq (\mathsf{ADM}, \mathsf{V})\). Then \(\mathcal {A}^\mathsf{SP}\) forwards the query \((\mathsf{m}_0,\) \(\mathsf{MOD}_0, \mathsf{V})\), \((\mathsf{m}_1, \mathsf{MOD}_1,\) \(\mathsf{V})\), \(\mathsf{ADM}\) to its own \(\mathcal {O}^\mathsf{LoRSanit}\) oracle and returns the result to \(\mathcal {A}^\mathsf{P}\). Eventually, \(\mathcal {A}^\mathsf{P}\) outputs a bit b, which is forwarded by \(\mathcal {A}^\mathsf{SP}\). It is easy to see that the winning probability of \(\mathcal {A}^\mathsf{SP}\) is identical to that of \(\mathcal {A}^\mathsf{P}\). \(\quad \square \)

Subsequently, we show that unlinkability is strictly stronger than strong privacy.

Lemma 3

Not every strongly private \(\mathsf{ESSS}\) is (strongly) unlinkable.

We prove Lemma 3 by counterexample.

Proof

Let us consider an instantiation of Scheme 1 with a correct, unforgeable, immutable, private, (proof-restricted) transparent and accountable sanitizable signature scheme which does not fulfill unlinkability. By Theorem 3, we can turn it into a strongly private \(\mathsf{ESSS}\) by using an indistinguishable accumulator; the result is strongly private but, by assumption, still not (strongly) unlinkable.\(\quad \square \)

Lemma 4

Every unlinkable \(\mathsf{ESSS}\) is also strongly private.

To prove Lemma 4, we show that we can construct an efficient adversary \(\mathcal {A}^\mathsf{U}\) against unlinkability using an efficient adversary \(\mathcal {A}^\mathsf{SP}\) against strong privacy.

Proof

As in the proof of Lemma 2, \(\mathcal {A}^\mathsf{U}\) simply forwards the calls to the oracles \(\mathcal {O}^\mathsf{Sign}, \mathcal {O}^\mathsf{Sanit}, \mathcal {O}^\mathsf{Proof}\), whereas the oracle \(\mathcal {O}^\mathsf{LoRSanit}\) is simulated as follows: Upon every query \((\mathsf{m}_0,\) \(\mathsf{MOD}_0, \mathsf{V}_0)\), \((\mathsf{m}_1, \mathsf{MOD}_1, \mathsf{V}_1)\), \(\mathsf{ADM}\) of \(\mathcal {A}^\mathsf{SP}\), \(\mathcal {A}^\mathsf{U}\) obtains \((\sigma _0, \mathsf{san}_0) \leftarrow \mathcal {O}^\mathsf{Sign}(\mathsf{m}_0, \mathsf{ADM}, \mathsf{V}_0)\) and \((\sigma _1, \mathsf{san}_1) \leftarrow \mathcal {O}^\mathsf{Sign}(\mathsf{m}_1, \mathsf{ADM}, \mathsf{V}_1)\) using its own \(\mathcal {O}^\mathsf{Sign}\) oracle. Then \(\mathcal {A}^\mathsf{U}\) forwards the query \((\mathsf{m}_0,\) \(\mathsf{MOD}_0, \mathsf{san}_0, \sigma _0)\), \((\mathsf{m}_1, \mathsf{MOD}_1, \mathsf{san}_1,\) \(\sigma _1)\), \(\mathsf{ADM}\) to its own \(\mathcal {O}^\mathsf{LoRSanit}\) oracle and returns the result to \(\mathcal {A}^\mathsf{SP}\). Eventually, \(\mathcal {A}^\mathsf{SP}\) outputs a bit b, which is forwarded by \(\mathcal {A}^\mathsf{U}\). It is easy to see that the winning probability of \(\mathcal {A}^\mathsf{U}\) is identical to that of \(\mathcal {A}^\mathsf{SP}\). \(\quad \square \)

Taking all the above results together, Theorem 1 follows.

5 Black-Box Extension of Sanitizable Signatures

Existing provably secure constructions of \(\mathsf{ESSS}\) build upon concrete sanitizable signature schemes. As it turns out, we can obtain an even more general result: an \(\mathsf{ESSS}\) that only makes black-box use of sanitizable signatures in the model of [4, 18] and secure accumulators. The black-box construction of an \(\mathsf{ESSS}\) obtained this way then fulfills all the security notions of the underlying sanitizable signature scheme.

Before we continue, we recall the general paradigm for instantiating LimitSet (cf. [9, 21]).

Paradigm 1

For each \(\mathtt{LimitSet}\) block, use a secure accumulator \(\mathsf{ACC}\) to accumulate the set of admissible replacements. The respective message blocks are then replaced with the corresponding accumulator value, i.e., the accumulators are included in the same way as fixed message blocks. In contrast, the actually chosen message blocks for each \(\mathtt{LimitSet}\) block are included in the same way as variable message blocks (since they change on every sanitization). Finally, the signature is augmented by the witnesses corresponding to the actual message blocks, while the remaining witnesses are only known to the signer and the sanitizer.
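A sketch of the sign-side bookkeeping of Paradigm 1 (our illustration; `acc` follows the accumulator interface from Sect. 2.1, and block types are encoded as in Definition 5):

```python
def limitset_sign_side(acc, sk_acc, pk_acc, m, ADM, V):
    # For every lim block: accumulate its admissible set, treat the
    # accumulator like a fixed block, keep the chosen value variable,
    # and attach the witness for the chosen value to the signature.
    limits = dict(V)                       # {i: M_i}
    fixed_like, var_like, wits = [], [], {}
    for i, block in enumerate(m):
        if ADM[i] == "lim":
            acc_i, aux = acc.aeval(sk_acc, pk_acc, limits[i])
            fixed_like.append((i, acc_i))
            var_like.append((i, block))
            wits[i] = acc.awitcreate(sk_acc, pk_acc, acc_i, aux, block)
        elif ADM[i] == "fix":
            fixed_like.append((i, block))
        else:                              # "var"
            var_like.append((i, block))
    return fixed_like, var_like, wits      # wits augment the signature
```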

We introduce our generic construction (which follows Paradigm 1) in Scheme 1, where we use \((\mathbf{KeyGen}_\mathsf{sig},\) \(\mathbf{KeyGen}_\mathsf{san}, \mathbf{Sign},\) \(\mathbf{Sanit}, \mathbf{Verify}, \mathbf{Proof}, \mathbf{Judge})\) to denote the algorithms of the underlying sanitizable signature scheme. We define two operators \(\phi \) and \(\psi \) to manipulate sets \(S = \{(k_1, v_1), \dots , (k_n, v_n)\}\) of key-value pairs. Thereby, we assume the keys \(k_1, \dots , k_n\) to be unique. The operator \(\phi (\cdot , \cdot )\) takes a key k and a set S, obtains the tuple \((k_i, v_i)\) with \(k = k_i\) from S, and returns \(v_i\). If no such tuple exists, \(\bot \) is returned. Similarly, the operator \(\psi (\cdot , \cdot , \cdot )\) takes a key k, a value \(v_i'\) and a set S, and obtains the tuple \((k_i, v_i)\) with \(k = k_i\) from S. It returns \((S \setminus \{(k_i, v_i)\}) \cup \{(k_i, v_i')\}\), or \(\bot \) if no such tuple exists.
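In code, the two operators read as follows (with None standing in for \(\bot \)):

```python
def phi(k, S):
    """phi(k, S): return the value paired with key k in S, or None."""
    for k_i, v_i in S:
        if k_i == k:
            return v_i
    return None

def psi(k, v_new, S):
    """psi(k, v', S): replace the value under key k, or None if absent."""
    for k_i, v_i in S:
        if k_i == k:
            return (S - {(k_i, v_i)}) | {(k_i, v_new)}
    return None

S = {("a", 1), ("b", 2)}
assert phi("a", S) == 1
assert psi("b", 9, S) == {("a", 1), ("b", 9)}
```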

We will prove the security of Scheme 1 using similar arguments as in [9], but relying on the abstract model of [4, 18], instead of specific properties of the used sanitizable signature scheme.

[Scheme 1, the generic black-box construction of an \(\mathsf{ESSS}\), is rendered as figure j in the original.]

Theorem 2

When instantiating Scheme 1 with a sanitizable signature scheme that provides security properties \(\varSigma \) in the model of [4, 18] and a secure accumulator scheme, one obtains an \(\mathsf{ESSS}\) that provides security properties \(\varSigma \).

We prove Theorem 2 in the extended version of this paper. Furthermore, we emphasize that—while our model includes the extensions regarding ADM from [18]—the proof does not rely on these extensions. This means that our black-box construction is also applicable to schemes in the model of [4].

Now, we discuss some observations related to the instantiation of the LimitSet extension using accumulators. As discussed in the previous section, it seems to be hard to design generic extensions that also preserve unlinkability [6, 8]. Furthermore, the abstract model does not consider the signer as an adversary, which gives some freedom regarding the implementations of certain algorithms and the choice of the accumulator scheme. As mentioned in Sect. 2.1, the abstract model of accumulators assumes a trusted setup. It is, however, beneficial that the signer runs the \(\mathsf{AGen}\) algorithm to be able to perform more efficient updates using the trapdoor. As a side effect, this also means that the signer is later able to extend the limited sets by making use of the trapdoor in the fashion of [25]. If this feature is unwanted, a TTP can run the \(\mathsf{AGen}\) algorithm and publish \(\mathsf{pk}_\mathsf{acc}\) as a common reference string.

5.1 Obtaining Strong Privacy via a Black-Box Construction

Now we show how strongly private \(\mathsf{ESSS}\) can be constructed from private sanitizable signature schemes in a black-box fashion. Basically, this can be achieved by applying the conversion in Scheme 1 and instantiating LimitSet using an accumulator that provides the indistinguishability property.

Theorem 3

Let the \(\mathsf{ESSS}\) obtained via Scheme 1 be private and let (AGen, AEval, AWitCreate, AVerify) be an indistinguishable accumulator; then this \(\mathsf{ESSS}\) is strongly private.

Proof

We prove the theorem above by using a sequence of games. Thereby, we denote the event that the adversary wins Game i by \(S_i\).

  • Game 0: The original strong privacy game.

  • Game 1: As in the original game, but we modify the oracle \(\mathcal {O}^\mathsf{LoRSanit}\) to first compute \(\mathsf{V} \leftarrow \mathsf{V}_0 \cap \mathsf{V}_1\) and then set \(\mathsf{V}_0 \leftarrow \mathsf{V}\) and \(\mathsf{V}_1 \leftarrow \mathsf{V}\).

Transition Game 0 \(\rightarrow \) Game 1: A distinguisher between Game 0 and Game 1 is a distinguisher for the indistinguishability game of the accumulator.

In Game 1, the signatures are computed with respect to \(\mathsf{V}_0 \cap \mathsf{V}_1\) in \(\mathcal {O}^\mathsf{LoRSanit}\). This means that the LimitSet-related values are independent of the bit b (similar to when \(\mathsf{V}\) is chosen at random). Thus, from the adversary's viewpoint, Game 1 is equivalent to the conventional privacy game, meaning that \(\mathsf{Pr}\left[ S_1\right] \le \frac{1}{2} + \epsilon _\mathsf{priv}(\kappa )\). Furthermore, we know that the distinguishing probability between Game 0 and Game 1 equals the indistinguishability advantage against the accumulators, i.e., \(|\mathsf{Pr}\left[ S_0\right] - \mathsf{Pr}\left[ S_1\right] | = k \cdot \epsilon _\mathsf{ind}(\kappa )\), where k is the number of LimitSet blocks (see footnote 8). Consequently, the advantage of an adversary in the strong privacy game is negligible and bounded by \(\mathsf{Pr}\left[ S_0\right] \le \frac{1}{2} + \epsilon _\mathsf{priv}(\kappa ) + k \cdot \epsilon _\mathsf{ind}(\kappa )\).\(\quad \square \)

We also note that it might be an option to use \(\mathsf{cfw}\)-indistinguishable accumulators instead of indistinguishable accumulators, provided the accumulation domain is large enough that the chosen random value \(x_r\) cannot be efficiently guessed. This resembles the suggestion of [21], who informally mentioned that additionally accumulating a random value might prevent the adversary from guessing the set limitations.

6 Conclusion

In this paper we propose the notion of strong privacy for \(\mathsf{ESSS}\), which, in contrast to the privacy notion for \(\mathsf{ESSS}\) of [9], covers privacy for the \(\mathtt{LimitSet}\) extension in the original sense of sanitizable signatures. From a practical perspective, our black-box construction combines nicely with existing schemes in the model of [4, 18]. Thus, existing implementations of schemes in these models directly yield a basis to instantiate our proposed extensions with relatively low effort, while preserving the efficiency of the underlying schemes. Conversely, it remains an open issue to construct efficient (strongly) unlinkable \(\mathsf{ESSS}\), or to come up with a generic extension that constructs such schemes from existing unlinkable sanitizable signature schemes.