
1 Introduction

Sanitizable signature schemes enable the signer of a document to declare certain sections of the message as admissible for modification, so that another designated party (the sanitizer) can modify them and update the signature without affecting the authenticity and integrity of the immutable parts. The main motivation is to balance out the verifier’s wish to check authenticity of parts of the original document and the signer’s desire for privacy of the sanitized data. The idea of sanitizable signature schemes dates back to a work by Ateniese et al. [2].

In [2], the authors introduced several security properties for sanitizable signature schemes. Besides unforgeability against outsiders, a desirable property is immutability, which demands that even a malicious sanitizer can only alter admissible parts. Privacy asks that one cannot reconstruct the original document given only the sanitized version and signature, and its strengthening called unlinkability [7] says that one cannot determine the origin of a sanitized document among several known possibilities. Signer and sanitizer accountability ensure that in case of a dispute the parties can give a convincing proof of who created a signature, the signer or the sanitizer. A less common property is transparency, which hides who created a signature as long as neither of the parties helps to determine the originator; this stands in contrast to public accountability, where no additional help is required to determine who signed the document.

1.1 Invisible Sanitizable Signatures

Recently, Camenisch et al. [10] formalized the notion of invisibility of sanitizable signatures. This property, formerly called strong transparency in [2], should hide which modifications a sanitizer is allowed to perform. In previous constructions the description of admissible operations, denoted \(\mathrm {ADM}\), had usually been attached in clear to the signature. Gong et al. [25] were the first to argue that this information can be of value, and later Camenisch et al. showed that hiding it may be a desirable goal. They also revised the theoretical framework of sanitizable signatures in order to capture the invisibility property, and gave constructions achieving it based on a new type of chameleon hash functions with ephemeral trapdoors. Soon after, Beck et al. [3] further strengthened the notion of invisibility.

In its basic form, invisibility protects against leakage of \(\mathrm {ADM}\) if the sanitizer public key is only used in connection with a single signer. In applications this means that the sanitizer must create a fresh key pair for each user. Strong invisibility, on the other hand, allows the sanitizer to use the same key pair with multiple signers. Beck et al. use unique signatures, \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption, and collision-resistant chameleon hash functions to achieve strong invisibility.

Technically, the difference between the two invisibility notions lies in the capabilities of an adversary trying to establish which of two potential operation sets, \(\mathrm {ADM}_0\) or \(\mathrm {ADM}_1\), has been encoded as admissible into the signature. Given a challenge signature, the adversary may query a sanitizing oracle on it as long as the requested modification does not allow to distinguish the two cases trivially (this happens e.g. if the modification is admissible for one of the two sets but not for the other). For the basic invisibility notion the adversary can ask for sanitizations only in connection with the public key \(\mathsf {pk}_\mathsf {Sig}\) of the genuine signer. In the stronger notion, the adversary can also request sanitizations of messages signed with other, possibly maliciously chosen signer keys \(\mathsf {pk}_\mathsf {Sig}'\).

1.2 Our Contributions

In this work we show that invisible sanitizable signature schemes and public-key encryption schemes are equivalent. Our equivalence proof consists of four parts.

Invisibility Implies \(\mathsf {IND{-}}\mathsf {CPA}\)-Secure Encryption. Our first result is to show that an invisible sanitizable signature scheme yields an \(\mathsf {IND{-}}\mathsf {CPA}\)-secure bit-encryption scheme. An invisible scheme hides the actual admissible operations for a signature; we can use this property to securely embed a message bit b by using one of two fixed and distinct admissible operation descriptions (\(\mathrm {ADM}_0\) or \(\mathrm {ADM}_1\)) to build a signature \(\sigma \) under a fresh signer key pair. The ciphertext consists of the signature \(\sigma \) and the signer public key \(\mathsf {pk}_\mathsf {Sig}\). Invisibility now guarantees that no outsider is able to distinguish the two cases.

The trapdoor information for decryption is the sanitizer secret key; his public key acts as the public key of the encryption scheme. With his secret key, the sanitizer can run the sanitization process and check via a distinguishing modification which operation \(\mathrm {ADM}_b\) has been embedded: Only the admissible one (\(\mathrm {ADM}_b\)) will result in a valid new signature. For the other operation (\(\mathrm {ADM}_{1-b}\)), the modification should fail by the immutability property of the sanitizable scheme. Note that we obviously need some other security property besides invisibility, because it is easy to devise invisible signature schemes without any other security property, e.g. with constant signatures.

Strong Invisibility Implies \(\mathsf {IND{-}}\mathsf {CCA2}\)-Secure Encryption. While the construction of an \(\mathsf {IND{-}}\mathsf {CPA}\)-secure scheme via the embedding of the hidden \(\mathrm {ADM}\) may be expected, we argue next that the same construction yields an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption scheme if the underlying sanitizable signature scheme is strongly invisible. This result is less conventional, since it links the sanitization for different signer keys with the ability to securely decrypt different ciphertexts.

The proof idea is to note that ciphertexts in our encryption system are of the form \((\sigma ,\mathsf {pk}_\mathsf {Sig})\). Given a challenge ciphertext \((\sigma ,\mathsf {pk}_\mathsf {Sig})\), recall that for \(\mathsf {IND{-}}\mathsf {CCA2}\)-security we must allow for oracle decryptions of ciphertexts \((\sigma ',\mathsf {pk}_\mathsf {Sig}')\ne (\sigma ,\mathsf {pk}_\mathsf {Sig})\). Since decryption is performed via sanitization, and strong invisibility allows us to call the sanitizer for different keys \(\mathsf {pk}_\mathsf {Sig}'\), we can easily decrypt ciphertexts of the form \((\sigma ',\mathsf {pk}_\mathsf {Sig}')\) with \(\mathsf {pk}_\mathsf {Sig}'\ne \mathsf {pk}_\mathsf {Sig}\). To handle ciphertexts \((\sigma ',\mathsf {pk}_\mathsf {Sig})\) under the original signer key we rely on the strong unforgeability property of the signature scheme: it says that one cannot create fresh signatures \(\sigma '\) under \(\mathsf {pk}_\mathsf {Sig}\), and therefore an \(\mathsf {IND{-}}\mathsf {CCA2}\)-adversary cannot submit valid oracle queries of this form.

In a sense, this result warrants the deployment of an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption scheme in the strongly invisible construction of Beck et al. [3]: Any strongly invisible sanitizable signature scheme already implies \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption systems. Note that we construct an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure bit encryption scheme, but this is sufficient to derive an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure string encryption scheme [14, 26, 31, 32].

\(\mathsf {IND{-}}\mathsf {CPA}\)-Secure Encryption Implies Invisibility. Next we establish the converse implication, i.e. from \(\mathsf {IND{-}}\mathsf {CPA}\)-secure public-key encryption schemes to invisible sanitizable signatures. Note that the existence of the former primitive also implies the existence of one-way functions (the argument is identical to the one in [35, Lemma 1]), and thus of secure digital signature schemes [33, 35], so that we can use this building block in our construction as well. Besides invisibility, the derived sanitizable signature scheme has all the common properties, like unforgeability, immutability, privacy, and accountability.

The construction idea is to have the signer sign every message block of the message with a different, ephemeral key, and then to protect this tuple of signatures with an additional signature under his secret key. This “message” part of the signature ensures unforgeability, privacy, and accountability. To enable the sanitizer to modify the admissible blocks, the signer appends another “administrative” signature over the description \(\mathrm {ADM}\) and the tuple of secret keys used to sign the admissible blocks, both encrypted under the sanitizer public encryption key to allow for invisibility. If some admissible block has to be modified, the sanitizer can retrieve the corresponding ephemeral key via decryption, change the message block and then update the relevant signatures in the “message” part. Immutability (i.e., protection against inadmissible modifications from a malicious sanitizer) then follows from the unforgeability of the underlying digital signature scheme: It is ensured by the fact that the sanitizer only receives the signing keys for the blocks he is allowed to modify.

We stress here that our construction does not achieve some less common properties, in particular transparency and unlinkability, and that our security reduction is non-tight. On the other hand, we regard our work as being above all a feasibility result, so that tightness—even though desirable—should not be viewed as essential, and we believe that invisible, non-transparent sanitizable signatures can have interesting applications: One can envision scenarios where it should be impossible to learn which (if any) message blocks have the potential to be altered, but on the other hand it should be clear who signed the document (e.g., for legal and accountability reasons).

\(\mathsf {IND{-}}\mathsf {CCA2}\)-Secure Encryption Implies Strong Invisibility. The noteworthy property of the above construction is that \(\mathsf {IND{-}}\mathsf {CPA}\)-security suffices to achieve (ordinary) invisibility. With a slight technical twist, we interestingly achieve strong invisibility if we now have an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption scheme: Namely, we include the signer public key in the encryption of \(\mathrm {ADM}\) and the trapdoor information for the sanitizer. Hence, together with our converse construction of \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption from strong invisibility, we also characterize this form of invisibility through public-key encryption.

In light of the strongly invisible construction of Beck et al. [3], which besides an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption scheme also relies on signature schemes and collision-resistant chameleon hash functions, our solution shows that the former (together with a regular signature scheme) suffices. Of course, the solution by Beck et al. is significantly more efficient.

1.3 Related Work

As mentioned above, sanitizable signature schemes were introduced by Ateniese et al. in their foundational work [2]. The first, and to date widely adopted, security model describing this primitive is due to Brzuska et al. [5], where the authors formalized the unforgeability, immutability, privacy, transparency, and accountability properties of a sanitizable signature scheme with game-based security definitions. Later on, Brzuska et al. added the important unlinkability property [7, 9], as well as non-interactive public accountability [8, 9], to the picture of security notions; see Appendix C in the full version [21] for all the definitions.

Subsequently, the formal framework introduced in [5] came under scrutiny by Gong et al. [25], who pointed out that sanitizable signatures formalized as above were vulnerable to so-called rights-forge attacks. Their solution was to introduce stronger versions of unforgeability, immutability and accountability, which also consider the admissible blocks in the security experiments. Even stronger variants of unforgeability, privacy, transparency, and accountability were later provided by Krenn et al. [30], who decided to also track the signatures in the definitions (in much the same way as for regular signature schemes, when upgrading from “ordinary” to strong unforgeability). Finally, the invisibility property was formalized by Camenisch et al. [10], following ideas already discussed in [2], and recently further strengthened by Beck et al. [3].

The above literature deals with sanitizable signature schemes as they are intended here. On the other hand, we point out that there are many other primitives and extensions that are closely related to, but slightly different from sanitizable signature schemes as treated in this work. Among these there are redactable signatures [4, 16, 18, 28], sanitizable signatures where sanitizer modifications are limited to certain values [11, 19, 29, 34] or where the signer is allowed to add sanitizers after having signed the message [13, 36], sanitizable signatures supporting a multi-signer, multi-sanitizer paradigm [6, 9, 12], or allowing for sanitization of signed, encrypted data [15, 20]. More generally, we note that this whole body of literature falls under the broad category of computing on authenticated data [4, 23, 24]. We refer to the extensive overviews of Ahn et al. [1] and Demirel et al. [17] for further information.

We conclude the related work overview by mentioning that our work also continues a line of research started in [6], where the authors showed that it is possible to construct a sanitizable signature scheme achieving unforgeability, immutability, privacy, and accountability only assuming that arbitrary secure signature schemes exist, i.e. only assuming that one-way functions exist. In this regard, and in light of known separation results of public-key cryptography and one-wayness [27], our work shows that the same most likely does not hold for (strongly) invisible sanitizable signature schemes.

1.4 Organization

In Sect. 2 we outline the syntax of sanitizable signature schemes (and the corresponding specific notation), give an overview of the correctness and security notions, and discuss the invisibility property. In Sect. 3 we show how to construct a public-key bit-encryption scheme from an invisible sanitizable signature scheme, and we prove the corresponding security results, whereas Sect. 4 is devoted to the converse implication. Finally, we draw some conclusions in Sect. 5.

2 Definition of Sanitizable Signatures

2.1 Notation

The starting point of our theoretical discussion on sanitizable signatures is the security model introduced by Brzuska et al. in [5]. However, since invisibility will play a crucial role in our work, their framework has to be slightly adapted. Their approach often relies on the fact that the description \(\mathrm {ADM}\) of admissible parts is recoverable from signatures, in direct contrast to the invisibility property which aims to hide this information. Thus, before we can actually start with the definition of sanitizable signatures, we need to introduce some preliminary notation. In doing so we mainly follow the work of Camenisch et al. [10].

Messages \(m\in \mathcal {M}\) are assumed to consist of a finite number of blocks, each block being an element from a set \(\mathcal {B}\) (usually \(\mathcal {B}\subseteq \{0,1\}^*\)). The message space \(\mathcal {M}\) is thus a subset of \(\mathcal {B}^*\). We use the notation \(m\mathopen {}\left[ i\right] \mathclose {}\) to refer to the i-th block and write \(m=\mathopen {}\left( m\mathopen {}\left[ 1\right] \mathclose {},\dots ,m\mathopen {}\left[ \ell \right] \mathclose {}\right) \mathclose {}\) to stress that the message m consists of \(\ell \) blocks.

Admissible blocks in a message \(m=\mathopen {}\left( m\mathopen {}\left[ 1\right] \mathclose {},\dots ,m\mathopen {}\left[ \ell \right] \mathclose {}\right) \mathclose {}\in \mathcal {M}\) are identified by means of the parameter \(\mathrm {ADM}=\mathopen {}\left( A,l\right) \mathclose {}\in \mathcal {P}\mathopen {}\left( \mathbb {N}\right) \mathclose {}\times \mathbb {N}\) (also called sanitizing rights), where \(l\in \mathbb {N}\) denotes the total number of blocks a message must have, while \(A:=\mathopen {}\left\{ a_1,\dots ,a_j\right\} \mathclose {}\) is the set containing the indices of the blocks the sanitizer is allowed to modify. Of course, here we need \(1\le a_1,\ldots ,a_j\le l\), a condition that we will always assume to be satisfied. We then say that \(\mathrm {ADM}\) matches m if \(\ell =l\), in which case we write \(\mathrm {ADM}\mathopen {}\left( m\right) \mathclose {}=\top \) (otherwise \(\mathrm {ADM}\mathopen {}\left( m\right) \mathclose {}=\bot \)). If \(\mathrm {ADM}_0=\mathopen {}\left( A_0,l\right) \mathclose {}\) and \(\mathrm {ADM}_1=\mathopen {}\left( A_1,l\right) \mathclose {}\) are two sanitizing rights matching m, we define \(\mathrm {ADM}_0\cap \mathrm {ADM}_1:= \mathopen {}\left( A_0\cap A_1,l\right) \mathclose {}\). Similarly, to identify admissible block indices, we write \(a\in \mathrm {ADM}=(A,l)\) if \(1\le a\le l\) and \(a\in A\).

If \(m=\mathopen {}\left( m\mathopen {}\left[ 1\right] \mathclose {},\dots ,m\mathopen {}\left[ \ell \right] \mathclose {}\right) \mathclose {}\in \mathcal {M}\) is a message, the actual modifications to certain blocks made by the sanitizer (i.e., the sanitizing instructions) are identified by means of the parameter \(\mathrm {MOD}=\mathopen {}\left( M,l\right) \mathclose {}\in \mathcal {P}\mathopen {}\left( \mathbb {N}\times \mathcal {B}\right) \mathclose {}\times \mathbb {N}\), where \(l\in \mathbb {N}\) denotes the total number of blocks in a message and \(M:= \{\mathopen {}\left( i_1,\bar{m}_1\right) \mathclose {},\dots ,\mathopen {}\left( i_k,\bar{m}_k\right) \mathclose {}\}\) denotes the set of changes made by the sanitizer. Here \(\mathopen {}\left( i,\bar{m}\right) \mathclose {}\in M\) is intended to mean that the sanitizer will replace block \(m\mathopen {}\left[ i\right] \mathclose {}\) with \(\bar{m}\). Again, here we need \(1\le i_1,\ldots ,i_k\le l\), which we will assume throughout. We then say that \(\mathrm {MOD}\) matches m if \(\ell =l\), in which case we write \(\mathrm {MOD}\mathopen {}\left( m\right) \mathclose {}\) for the message \(m'\) obtained by modifying m according to \(\mathrm {MOD}\), i.e. \(m'=\mathrm {MOD}\mathopen {}\left( m\right) \mathclose {}\) if and only if \(m'=\mathopen {}\left( m'\mathopen {}\left[ 1\right] \mathclose {},\dots ,m'\mathopen {}\left[ \ell \right] \mathclose {}\right) \mathclose {}\in \mathcal {M}\) and, for every \(1\le i\le \ell \), \(m'\mathopen {}\left[ i\right] \mathclose {}=\bar{m}_j\) if \(i=i_j\) for some \(1\le j\le k\), and \(m'\mathopen {}\left[ i\right] \mathclose {}=m\mathopen {}\left[ i\right] \mathclose {}\) otherwise. We write \(\mathrm {MOD}\mathopen {}\left( m\right) \mathclose {}=\bot \) if \(\mathrm {MOD}\) does not match m.

Finally, recall that the sanitizer is supposed to modify only message blocks declared as admissible by the signer. In this regard, the following notation will be useful: If \(\mathrm {ADM}=\mathopen {}\left( A,l_\mathrm {ADM}\right) \mathclose {}\) and \(\mathrm {MOD}=\mathopen {}\left( M,l_\mathrm {MOD}\right) \mathclose {}\) are as above, we say that \(\mathrm {MOD}\) matches (or is valid w.r.t.) \(\mathrm {ADM}\) if \(l_\mathrm {ADM}=l_\mathrm {MOD}\) and \(\widetilde{M}\subseteq A\), where \(\widetilde{M}:= \{i_1,\dots ,i_k\}\) is the set of indices of the blocks which the sanitizer intends to modify (as specified by M). In this case we write \(\mathrm {MOD}\mathopen {}\left( \mathrm {ADM}\right) \mathclose {}=\top \), otherwise \(\mathrm {MOD}\mathopen {}\left( \mathrm {ADM}\right) \mathclose {}=\bot \).
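To make the notation concrete, the following minimal Python sketch models \(\mathrm {ADM}\), \(\mathrm {MOD}\) and the matching predicates exactly as defined above; the concrete representation (frozen dataclasses over 1-based indices) is of course only illustrative.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

Block = str  # the block set B, here modelled as strings

@dataclass(frozen=True)
class ADM:
    """Sanitizing rights (A, l): admissible block indices and required length."""
    A: frozenset  # 1-based indices of the blocks the sanitizer may modify
    l: int        # total number of blocks a matching message must have

@dataclass(frozen=True)
class MOD:
    """Sanitizing instructions (M, l): pairs (i, new block) and required length."""
    M: frozenset  # pairs (i, new_block), meaning "replace m[i] by new_block"
    l: int

def adm_matches(adm: ADM, m: Sequence[Block]) -> bool:
    """ADM(m) = T iff the message consists of exactly l blocks."""
    return adm.l == len(m)

def intersect_adm(adm0: ADM, adm1: ADM) -> ADM:
    """ADM_0 ∩ ADM_1 for two sanitizing rights with the same length component."""
    return ADM(adm0.A & adm1.A, adm0.l)

def mod_matches_adm(mod: MOD, adm: ADM) -> bool:
    """MOD(ADM) = T iff lengths agree and every modified index is admissible."""
    return mod.l == adm.l and {i for (i, _) in mod.M} <= set(adm.A)

def apply_mod(mod: MOD, m: Sequence[Block]) -> Optional[Tuple[Block, ...]]:
    """Return MOD(m), or None (standing for ⊥) if MOD does not match m."""
    if mod.l != len(m):
        return None
    replacement = dict(mod.M)
    return tuple(replacement.get(i, m[i - 1]) for i in range(1, len(m) + 1))
```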

2.2 Definition of Sanitizable Signature Schemes

With the notation introduced above we are now ready to define sanitizable signature schemes. The definition is based on the one given by Brzuska et al. in [5], but takes into account that the sanitizing rights are no longer publicly recoverable from a valid message-signature pair. We remark here that, nonetheless, the sanitizer is always able to learn which message blocks he can modify by trying to sanitize each of them individually and checking whether the resulting signature is valid, an operation linear in the number of blocks of the message. This is the reason why we do not include \(\mathrm {ADM}\) as an additional input to the \(\mathsf {Sanit}\) algorithm: Either it is implicitly contained in the signature, or it must be communicated out-of-band.

Since our definition is similar to the one in [10], we give here only a schematic overview of the algorithms comprising a sanitizable signature scheme and their syntax. The complete definition can be found in the full version [21].

Definition 1

A sanitizable signature scheme \(\mathsf {SSS}\) is a tuple of eight probabilistic polynomial-time algorithms \(\mathsf {SSS}:= ({\mathsf {PGen}},\mathsf {KGen}_\mathsf {Sig},\mathsf {KGen}_\mathsf {San},\mathsf {Sign},\mathsf {Sanit},\mathsf {Verify},\mathsf {Proof},\mathsf {Judge})\), whose syntax is as follows:

  • \({\mathsf {PGen}}\), to generate public parameters;

  • \(\mathsf {KGen}_\mathsf {Sig}\), to generate signing keys;

  • \(\mathsf {KGen}_\mathsf {San}\), to generate sanitization keys;

  • \(\mathsf {Sign}\), for signatures;

  • \(\mathsf {Sanit}\), for sanitized signatures;

  • \(\mathsf {Verify}\), for verification;

  • \(\mathsf {Proof}\), to generate proofs;

  • \(\mathsf {Judge}\), to determine who signed the document.
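For orientation, the following minimal Python sketch spells out one possible interface for these eight algorithms, following the informal framework of [5, 10] used throughout this paper. The argument lists and return types are illustrative assumptions only; the precise syntax is given in the full version [21].

```python
from typing import Any, Optional, Protocol, Tuple

class SSS(Protocol):
    """Schematic interface for the eight algorithms of Definition 1."""

    def PGen(self, security_parameter: int) -> Any: ...                 # public parameters pp
    def KGenSig(self, pp: Any) -> Tuple[Any, Any]: ...                  # signer keys (pk_Sig, sk_Sig)
    def KGenSan(self, pp: Any) -> Tuple[Any, Any]: ...                  # sanitizer keys (pk_San, sk_San)
    def Sign(self, m: Any, sk_sig: Any, pk_san: Any, adm: Any) -> Optional[Any]: ...
    def Sanit(self, m: Any, mod: Any, sigma: Any, pk_sig: Any, sk_san: Any) -> Optional[Tuple[Any, Any]]: ...
    def Verify(self, m: Any, sigma: Any, pk_sig: Any, pk_san: Any) -> bool: ...
    def Proof(self, sk_sig: Any, m: Any, sigma: Any, pk_san: Any) -> Any: ...       # proof pi
    def Judge(self, m: Any, sigma: Any, pk_sig: Any, pk_san: Any, proof: Any) -> str: ...  # "Sig" or "San"
```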

2.3 Correctness and Security Properties of Sanitizable Signature Schemes

We now turn to the definition of correctness and the statement of security properties of a sanitizable signature scheme \(\mathsf {SSS}\). As for correctness, we follow Brzuska et al. [5] and subsequent work and require that the following three properties hold. We give only an informal description here and refer to Appendix B in the full version [21] for complete definitions, as adapted to our framework.

  • Signing Correctness: Every time an honest signer signs a message \(m\in \mathcal {M}\) with sanitizing rights matching m, he produces a valid signature \(\sigma \ne \bot \) such that \(\mathopen {}\left( m,\sigma \right) \mathclose {}\) verifies under the corresponding public keys;

  • Sanitizing Correctness: Every time the intended sanitizer honestly sanitizes a valid message-signature pair \(\mathopen {}\left( m,\sigma \right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) with sanitizing instructions \(\mathrm {MOD}\) matching the sanitizing rights given to him by the signer, he produces a valid signature \(\sigma '\ne \bot \) such that \(\mathopen {}\left( \mathrm {MOD}\mathopen {}\left( m\right) \mathclose {},\sigma '\right) \mathclose {}\) verifies under the corresponding public keys;

  • Proof Correctness: Every time an honest signer generates a proof regarding a valid message-signature pair, \(\mathsf {Judge}\) identifies the correct accountable party.

Next we discuss the relevant security properties of a sanitizable signature scheme \(\mathsf {SSS}\). Most of these properties were introduced in their basic form by Brzuska et al. in [5] and later in [7, 8]. We will be mainly concerned with their “strong” counterparts as formalized by Krenn et al. in [30] and later adopted by Camenisch et al. [10] and Beck et al. [3]. The definitions we adopt take into account that the sanitizing rights \(\mathrm {ADM}\) (which are no longer assumed to be publicly recoverable from a valid message-signature pair) are an information which needs protection, as work by Gong et al. [25] has shown. In particular, by requiring a sanitizable signature scheme to satisfy the “strong” versions of the unforgeability, signer- and sanitizer-accountability properties, we mostly avoid so-called rights forge attacks as discussed in [25] (for immutability the matter is more delicate—see Appendix C in the full version [21] for further discussions).

We again give only a brief and intuitive description of the security properties here and refer the interested reader to Appendix C in the full version [21] for complete definitions and the corresponding security experiments. Only the notion of invisibility, central to our work, will be discussed here in detail.

  • Unforgeability: No adversary should be able to produce a valid message-signature pair never seen before;

  • Immutability: The sanitizer should be able to modify only message blocks previously declared as admissible by the signer;

  • Privacy: Given a valid, sanitized message-signature pair, no adversary should be able to recover any information about the original content of the sanitized blocks;

  • Transparency: Given a valid message-signature pair, no adversary should be able to determine whether it was the signer or the sanitizer who produced the signature;

  • Signer-Accountability: A malicious signer should not be able to produce a valid message-signature pair \(\mathopen {}\left( m,\sigma \right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) and a proof which induces \(\mathsf {Judge}\) into erroneously blaming the sanitizer for \(\mathopen {}\left( m,\sigma \right) \mathclose {}\);

  • Sanitizer-Accountability: A malicious sanitizer should not be able to produce a valid message-signature pair \(\mathopen {}\left( m',\sigma '\right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) such that legitimate proofs generated by the signer induce \(\mathsf {Judge}\) into blaming the signer for \(\mathopen {}\left( m',\sigma '\right) \mathclose {}\);

  • Unlinkability: Given a valid message-signature pair \(\mathopen {}\left( m',\sigma '\right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) that has been sanitized, no adversary should be able to decide from which known pair \(\mathopen {}\left( m,\sigma \right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) it originated;

  • Non-Interactive Public Accountability: The party accountable for a valid message-signature pair can be determined publicly, without the need of any further information. In particular, the \(\mathsf {Proof}\) algorithm is trivial.

2.4 (Strong) Invisibility

Loosely speaking, a sanitizable signature scheme is invisible if, given a valid message-signature pair \(\mathopen {}\left( m,\sigma \right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\), no adversary is able to decide if any specific message block is admissible (i.e., can be modified by the sanitizer) or immutable. This property was first introduced by Ateniese et al. in their foundational work [2] under the name “strong transparency” (an expression that has since fallen into disuse, and not to be confused with the notion of transparency defined in the literature). However, they did not provide any formal definition or construction achieving it. It was later abandoned by Brzuska et al. [5] on the grounds that it appeared to be too strong. Indeed, since they worked under the assumption that \(\mathrm {ADM}\) is always publicly recoverable from a valid signature \(\sigma \ne \bot \) (in obvious conflict with the invisibility notion), it was in fact unachievable. Later on, the invisibility property was considered by Camenisch et al. [10], who defined it formally and gave the first provably secure construction of an invisible sanitizable signature scheme. A stronger version of invisibility was later defined by Beck et al. in [3], where the sanitizer may use his public key with multiple signers.

In the invisibility security experiment, a signer and a sanitizer key pair are generated and a bit is chosen uniformly at random and kept secret. An adversary \(\mathcal {A}\) is given access to an oracle \(\mathcal {O}^\mathsf {LoR}\) which, on input a message and two sanitizing rights \(\mathrm {ADM}_0\), \(\mathrm {ADM}_1\), produces a signature \(\sigma \) (under the signer secret key and the sanitizer public key) making \(\mathrm {ADM}_b\) admissible. In addition, \(\mathcal {A}\) has adaptive access to restricted signing, sanitizing, and proof oracles.

We remark that, in the above experiment, a restricted signing oracle (with fixed sanitizer public key \(\mathsf {pk}_\mathsf {San}\)) can be simulated by querying \(\mathcal {O}^\mathsf {LoR}\) and putting \(\mathrm {ADM}_0=\mathrm {ADM}_1\). Furthermore, for sanitization requests of any message-signature pair \(\mathopen {}\left( m,\sigma \right) \mathclose {}\in \mathcal {M}\times \mathcal {S}\) with \(\mathrm {ADM}_0\ne \mathrm {ADM}_1\), \(\mathcal {A}\) must be limited to modifications matching \(\mathrm {ADM}_0\cap \mathrm {ADM}_1\) in order to avoid trivial attacks exposing b. This is why all queries to and answers from \(\mathcal {O}^\mathsf {LoR}\), together with the allowed sanitizing rights \(\mathrm {ADM}_0\cap \mathrm {ADM}_1\), are recorded in a “whitelist” W: Whenever \(\mathcal {A}\) queries \(\mathcal {O}^\mathsf {Sanit}\), the oracle goes through the list W of previously signed messages, to see which blocks the adversary is indeed allowed to modify. If the query is accepted, the whitelist has to be updated to also include the new (sanitized) message-signature pair, with the same sanitizing rights as the original pair (this has to be done because a sanitized message could be sanitized again). In the basic invisibility property the answers are only computed for the given key \(\mathsf {pk}_\mathsf {Sig}\).

The adversary’s goal is to guess b, i.e., to decide which set of blocks the oracle \(\mathcal {O}^\mathsf {LoR}\) has made admissible. The scheme \(\mathsf {SSS}\) is invisible if no efficient adversary as above succeeds in doing so with probability significantly greater than \(1/2\).
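The following minimal Python sketch summarizes the (basic) invisibility experiment just described, including the whitelist bookkeeping. It reuses the schematic \(\mathsf {SSS}\) interface and the \(\mathrm {ADM}\)/\(\mathrm {MOD}\) helpers sketched in Sects. 2.1 and 2.2, the proof oracle is omitted for brevity, and all names are illustrative; the formal experiment is given in the full version [21].

```python
import secrets

def invisibility_experiment(sss, adversary, security_parameter: int) -> bool:
    """Schematic (basic) invisibility experiment with whitelist bookkeeping."""
    pp = sss.PGen(security_parameter)
    pk_sig, sk_sig = sss.KGenSig(pp)
    pk_san, sk_san = sss.KGenSan(pp)
    b = secrets.randbelow(2)       # challenge bit, kept secret from the adversary
    whitelist = []                 # entries (m, sigma, ADM_0 ∩ ADM_1)

    def lor_oracle(m, adm0, adm1):
        # Sign m making ADM_b admissible, record the allowed rights in W.
        sigma = sss.Sign(m, sk_sig, pk_san, adm1 if b == 1 else adm0)
        whitelist.append((m, sigma, intersect_adm(adm0, adm1)))
        return sigma

    def sanit_oracle(m, sigma, mod):
        # Only whitelisted pairs may be sanitized, and only with modifications
        # matching ADM_0 ∩ ADM_1, to rule out trivial attacks exposing b.
        for (m_w, sigma_w, adm_w) in whitelist:
            if (m_w, sigma_w) == (m, sigma) and mod_matches_adm(mod, adm_w):
                result = sss.Sanit(m, mod, sigma, pk_sig, sk_san)
                if result is not None:
                    # A sanitized pair may be sanitized again with the same rights.
                    whitelist.append((result[0], result[1], adm_w))
                return result
        return None

    guess = adversary(pp, pk_sig, pk_san, lor_oracle, sanit_oracle)
    return guess == b  # invisibility: this holds with probability close to 1/2 only
```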

We observe that the definition of invisibility already has the flavor of the “strong” variant of the definitions given in [10, 30], in that we always keep track of the signatures in the whitelist W. On the other hand, the main drawback of this definition is that it is not possible to query the sanitization oracle for keys different from the challenge ones. As remarked by Beck et al. [3], this may have undesirable consequences: \(\mathcal {A}\) could pose as another signer and, as soon as he gets access to a sanitization oracle, could potentially learn the bit b.

To address these concerns (and to give a definition of invisibility that also protects against dishonest signers), one can allow queries to the sanitization oracle with public keys chosen by the adversary \(\mathcal {A}\). This approach leads to the definition of strong invisibility. The main difference between the invisibility and the strong invisibility experiments is that the adversary is allowed oracle queries to \(\tilde{\mathcal {O}}^\mathsf {LoR}\) and \(\tilde{\mathcal {O}}^\mathsf {Sanit}\) with adversarially chosen public keys. A sanitizable signature scheme secure in this stronger sense does not suffer from the flaw mentioned above. As a side effect, the signing oracle derived from \(\tilde{\mathcal {O}}^\mathsf {LoR}\) is no longer restricted. The formal definition of (strong) invisibility is given in the full version [21].

3 Invisible Sanitizable Signatures Imply Public-Key Encryption Schemes

In this section we show how to construct a public-key bit-encryption scheme from an invisible sanitizable signature scheme.

3.1 Construction

Suppose that Alice wants to send a secret bit \(b\in \{0,1\}\) to Bob, without an adversary \(\mathcal {A}\) being able to learn it. To do so, Bob publicly chooses a sanitizable signature scheme \(\mathsf {SSS}\) and a security parameter \(\lambda \in \mathbb {N}\), and generates a tuple of public parameters \(\mathsf {pp}\leftarrow {\mathsf {PGen}}\mathopen {}\left( 1^\lambda \right) \mathclose {}\). Observe that the block set \(\mathcal {B}\) defined by \(\mathsf {pp}\) clearly must contain at least two elements; we will assume that \(\{0,1\}\subseteq \mathcal {B}\), but for other block sets the adjustment is straightforward. Moreover, we assume that the two-block messages (0, 0), (1, 0), (0, 1) belong to the message space \(\mathcal {M}\), but again our construction can be easily modified should this not be the case.

Bob then generates a sanitizer key pair \(\mathopen {}\left( \mathsf {pk}_\mathsf {San},\mathsf {sk}_\mathsf {San}\right) \mathclose {}\leftarrow \mathsf {KGen}_\mathsf {San}\mathopen {}\left( \mathsf {pp}\right) \mathclose {}\), and chooses a message \(m\in \mathcal {M}\) consisting of two blocks, e.g. \(m=\mathopen {}\left( 0,0\right) \mathclose {}\). He sends \((\mathsf {pp},m,\mathsf {pk}_\mathsf {San})\) to Alice over an unprotected channel, while \(\mathsf {sk}_\mathsf {San}\) is kept secret.

Upon receiving \((\mathsf {pp},m,\mathsf {pk}_\mathsf {San})\), Alice runs \(\mathopen {}\left( \mathsf {pk}_\mathsf {Sig},\mathsf {sk}_\mathsf {Sig}\right) \mathclose {}\leftarrow \mathsf {KGen}_\mathsf {Sig}\mathopen {}\left( \mathsf {pp}\right) \mathclose {}\) to generate a signer key pair. Now, depending on whether she wants to encrypt \(b=0\) or \(b=1\), she signs the message m declaring as admissible the first or the second block, respectively. She then sends the signature \(\sigma \) and her public key \(\mathsf {pk}_\mathsf {Sig}\) to Bob, while \(\mathsf {sk}_\mathsf {Sig}\) is kept secret.

Upon receiving \((\sigma ,\mathsf {pk}_\mathsf {Sig})\), Bob tries to separately modify the first and the second message block by replacing it with ‘1’. He thus sets \(\mathrm {MOD}_0\leftarrow \mathopen {}\left( \mathopen {}\left\{ \mathopen {}\left( 1,1\right) \mathclose {}\right\} \mathclose {},2\right) \mathclose {}\) and \(\mathrm {MOD}_1\leftarrow \mathopen {}\left( \mathopen {}\left\{ \mathopen {}\left( 2,1\right) \mathclose {}\right\} \mathclose {},2\right) \mathclose {}\), and then runs \(\mathsf {Sanit}\) twice on \(\mathopen {}\left( m,\sigma \right) \mathclose {}\), once with \(\mathrm {MOD}_0\) and once with \(\mathrm {MOD}_1\), using \(\mathsf {sk}_\mathsf {San}\) and \(\mathsf {pk}_\mathsf {Sig}\), thereby obtaining the sanitized signatures \(\sigma _0'\) and \(\sigma _1'\).

Now, assuming that \(\mathsf {SSS}\) is sanitizing correct and immutable, exactly one of the two signatures computed by Bob will be valid. If Alice has encrypted \(b=0\), then \(\sigma _0'\) will be valid with overwhelming probability (because of the sanitizing correctness property), while \(\sigma _1'\) will be either invalid or equal to \(\bot \) with very high probability (because \(\mathsf {SSS}\) is immutable). On the other hand, if Alice has chosen \(b=1\), then \(\sigma _1'\) will be valid and \(\sigma _0'\) not by the same argument. In the unlikely event that both signatures are valid or both are invalid, Bob cannot decrypt the message sent by Alice.

We thus conclude that Bob is able to correctly decrypt the bit encrypted by Alice with very high probability by sanitizing m twice and checking the signatures (or error messages). Moreover, if we also assume \(\mathsf {SSS}\) to be invisible, then any adversary \(\mathcal {A}\) will be able to learn b only with negligible probability. In fact, from an outsider’s perspective learning b is equivalent to establishing which message block is admissible, which is highly unlikely by the invisibility assumption.

We now turn to a more rigorous definition of our public-key bit-encryption scheme, as well as to the statement of the correctness and security properties.

Construction 1

Let \(\mathsf {SSS}:= ({\mathsf {PGen}},\mathsf {KGen}_\mathsf {Sig},\mathsf {KGen}_\mathsf {San},\mathsf {Sign},\mathsf {Sanit},\mathsf {Verify},\mathsf {Proof},\mathsf {Judge})\) be a sanitizable signature scheme. We define a public-key bit-encryption scheme \(\varPi \) as in Fig. 1.

Fig. 1. Public-key bit-encryption scheme from an invisible sanitizable signature scheme
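To complement Fig. 1, the following minimal Python sketch transcribes the informal description of Construction 1 given above. It reuses the schematic \(\mathsf {SSS}\) interface and the \(\mathrm {ADM}\)/\(\mathrm {MOD}\) helpers from Sect. 2; all names and argument orders are illustrative, and the authoritative definition is the one in Fig. 1.

```python
def keygen(sss, security_parameter: int):
    """Bob: public key (pp, m, pk_San), secret key sk_San; m is a fixed two-block message."""
    pp = sss.PGen(security_parameter)
    pk_san, sk_san = sss.KGenSan(pp)
    m = ("0", "0")
    return (pp, m, pk_san), sk_san

def encrypt(sss, public_key, b: int):
    """Alice: sign m under a fresh signer key pair, making block b+1 admissible."""
    pp, m, pk_san = public_key
    pk_sig, sk_sig = sss.KGenSig(pp)
    adm = ADM(frozenset({b + 1}), 2)   # first block admissible if b = 0, second if b = 1
    sigma = sss.Sign(m, sk_sig, pk_san, adm)
    return (sigma, pk_sig)

def decrypt(sss, public_key, sk_san, ciphertext):
    """Bob: try to sanitize each of the two blocks and see which modification verifies."""
    pp, m, pk_san = public_key
    sigma, pk_sig = ciphertext
    valid = []
    for i in (1, 2):
        mod = MOD(frozenset({(i, "1")}), 2)  # replace block i by '1'
        out = sss.Sanit(m, mod, sigma, pk_sig, sk_san)
        valid.append(out is not None and sss.Verify(out[0], out[1], pk_sig, pk_san))
    if valid == [True, False]:
        return 0
    if valid == [False, True]:
        return 1
    return None  # decryption error: both or neither sanitization succeeded
```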

3.2 \(\mathsf {IND{-}}\mathsf {CPA}\)-Security

We now formally state our security results about the public-key bit encryption scheme in Construction 1.

Theorem 2

Let \(\mathsf {SSS}:= ({\mathsf {PGen}},\mathsf {KGen}_\mathsf {Sig},\mathsf {KGen}_\mathsf {San},\mathsf {Sign},\mathsf {Sanit},\mathsf {Verify},\mathsf {Proof},\mathsf {Judge})\) be a sanitizable signature scheme, and let \(\varPi := (\mathsf {PGen},\mathsf {KGen},\mathsf {Enc},\mathsf {Dec})\) be the public-key bit-encryption scheme defined in Construction 1. If \(\mathsf {SSS}\) is sanitizing correct, immutable and invisible, then \(\varPi \) is correct and \(\mathsf {IND{-}}\mathsf {CPA}\)-secure.

The proof gives a tight reduction in terms of the advantages: For any adversary \(\mathcal {A}\) playing the \(\mathsf {IND{-}}\mathsf {CPA}\)-game we construct an adversary \(\mathcal {B}\) against the invisibility game with roughly the same running time as \(\mathcal {A}\), such that

$$ \mathbf {Adv}^{\mathsf {IND{-}}\mathsf {CPA}}_{\mathcal {A},\varPi }(\lambda ) = \mathbf {Adv}^{\mathsf {Inv}}_{\mathcal {B},\mathsf {SSS}}(\lambda ). $$

Note that we need the immutability property only to bound the correctness error by \(2\cdot \mathbf {Adv}^{\mathsf {Imm}}_{\mathcal {C},\mathsf {SSS}}(\lambda )\) for some efficient adversary \(\mathcal {C}\) against the immutability game. The proof of Theorem 2 can be found in the full version [21].

3.3 \(\mathsf {IND{-}}\mathsf {CCA2}\)-Security

We next argue that the scheme above achieves \(\mathsf {IND{-}}\mathsf {CCA2}\)-security if \(\mathsf {SSS}\) is assumed to be strongly invisible. Recall that the difference to regular invisibility is that now the adversary against strong invisibility can also make left-or-right signature requests for \((m,\mathsf {pk}_\mathsf {San}',\mathrm {ADM}_0,\mathrm {ADM}_1)\) with different sanitizer public keys \(\mathsf {pk}_\mathsf {San}'\ne \mathsf {pk}_\mathsf {San}\), and sanitization requests for \((m,\sigma ,\mathsf {pk}_\mathsf {Sig}',\mathrm {MOD})\) with different signer public keys \(\mathsf {pk}_\mathsf {Sig}'\ne \mathsf {pk}_\mathsf {Sig}\). Interestingly, for our construction and proof we only rely on the latter property.

For the security proof we also need strong unforgeability of the sanitizable signature scheme. The reason is that ciphertexts are of the form \((\sigma ,\mathsf {pk}_\mathsf {Sig})\), and the \(\mathsf {IND{-}}\mathsf {CCA2}\)-adversary may ask for decryptions of the form \((\sigma ',\mathsf {pk}_\mathsf {Sig})\) where it alters the signature component for the same message. Without strong unforgeability, such queries would easily allow the adversary to break the security of the encryption scheme.
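To illustrate the case distinction underlying the reduction (the full proof is in [21]), the following schematic Python sketch shows how a reduction to strong invisibility could answer decryption queries; the oracle and helper signatures are illustrative placeholders, not the notation of the actual proof.

```python
def answer_decryption_query(sigma, pk_sig, challenge, m, sanit_oracle, verify):
    """How the reduction could answer a decryption query (sigma, pk_sig).
    `challenge` is the challenge ciphertext, `m` the fixed two-block message,
    `sanit_oracle` the sanitization oracle of the strong-invisibility game,
    and `verify` the SSS verification with the sanitizer key fixed."""
    sigma_star, pk_sig_star = challenge
    if (sigma, pk_sig) == (sigma_star, pk_sig_star):
        return None  # querying the challenge ciphertext itself is disallowed
    if pk_sig == pk_sig_star:
        # A valid fresh signature under the challenge signer key would be a
        # strong forgery, so by strong unforgeability the query can be rejected.
        return None
    # Different signer key: strong invisibility permits sanitization queries
    # under pk_sig, so decryption is delegated to the sanitization oracle.
    valid = []
    for i in (1, 2):
        mod = MOD(frozenset({(i, "1")}), 2)
        out = sanit_oracle(m, sigma, pk_sig, mod)
        valid.append(out is not None and verify(out[0], out[1], pk_sig))
    if valid == [True, False]:
        return 0
    if valid == [False, True]:
        return 1
    return None  # decryption error
```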

Theorem 3

Let \(\mathsf {SSS}:= ({\mathsf {PGen}},\mathsf {KGen}_\mathsf {Sig},\mathsf {KGen}_\mathsf {San},\mathsf {Sign},\mathsf {Sanit},\mathsf {Verify},\mathsf {Proof},\mathsf {Judge})\) be a sanitizable signature scheme, and let \(\varPi := (\mathsf {PGen},\mathsf {KGen},\mathsf {Enc},\mathsf {Dec})\) be the public-key bit-encryption scheme defined in Construction 1. If \(\mathsf {SSS}\) is sanitizing correct, strongly unforgeable, immutable and strongly invisible, then \(\varPi \) is correct and \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure.

The proof also gives a tight reduction in terms of the advantages: For any adversary \(\mathcal {A}\) playing the \(\mathsf {IND{-}}\mathsf {CCA2}\)-game we construct adversaries \(\mathcal {B}\) and \(\mathcal {C}\) with roughly the same running time as \(\mathcal {A}\), such that

$$ \mathbf {Adv}^{\mathsf {IND{-}}\mathsf {CCA2}}_{\mathcal {A},\varPi }(\lambda )\le \mathbf {Adv}^{\mathsf {SInv}}_{\mathcal {B},\mathsf {SSS}}(\lambda ) + 2\cdot \mathbf {Adv}^{\mathsf {SUnf}}_{\mathcal {C},\mathsf {SSS}}(\lambda ). $$

In fact, for \(\mathsf {IND{-}}\mathsf {CCA1}\)-security regular unforgeability is sufficient. Once more, we need immutability only to bound the correctness error. The proof of Theorem 3 can be found in the full version [21].

4 Public-Key Encryption Implies Invisible Sanitizable Signatures

In this section we present our construction of an invisible sanitizable signature scheme, starting from a secure public-key encryption scheme.

4.1 Construction

Our construction based on public-key encryption follows the established encode-and-sign paradigm and exploits the idea of using chameleon hash functions and signing the hash values with a regular signature scheme \(\varSigma \) (see, e.g., [2, 5]). The sanitizer can then find collisions for the hashes with the help of his trapdoor key, allowing him to modify the message. Here, instead of chameleon hashes we use the signature scheme \(\varSigma \) itself.

In our scheme, signatures consist of two parts: the “message” part ensures the basic unforgeability and accountability properties, and can be created by either of the two parties. In contrast, the “administrative” part contains the information needed by the sanitizer to perform modifications, and can be created only by the signer. Parts of the administrative information are encrypted with an encryption scheme \(\varPi \) under the sanitizer’s public key, to ensure invisibility.

To begin with, the signer generates a key pair \((\mathsf {pk}_\varSigma ,\mathsf {sk}_\varSigma )\) for \(\varSigma \), while the sanitizer creates keys \((\mathsf {pk}'_\varSigma ,\mathsf {sk}'_\varSigma )\) and \((\mathsf {pk}_\varPi ,\mathsf {sk}_\varPi )\) for \(\varSigma \) and \(\varPi \), respectively. To sign a message \(m=\mathopen {}\left( m\mathopen {}\left[ 1\right] \mathclose {},\dots ,m\mathopen {}\left[ \ell \right] \mathclose {}\right) \mathclose {}\), the signer generates a new key pair \((\mathsf {pk}^i_\varSigma ,\mathsf {sk}^i_\varSigma )\) \((1\le i\le \ell )\) for each block of m, signs every block with the corresponding key, and creates a tuple of signatures \(S:= (\sigma ^1,\dots ,\sigma ^\ell )\). He then generates a signature \(\sigma _\mathrm {MSG}\) of the message \((0,m,S,\mathsf {pk}_\mathsf {Sig},\mathsf {pk}_\mathsf {San})\) under \(\mathsf {sk}_\varSigma \). Here, m and S are signed so that they are protected from modification by outsiders, whereas \(\mathsf {pk}_\mathsf {Sig}\), \(\mathsf {pk}_\mathsf {San}\) and the initial bit ‘0’ are included for technical reasons (namely, domain separation). The “message” part of the final signature \(\sigma \) then consists of \((S,\sigma _\mathrm {MSG})\).

The first part of the signature must now be complemented with the information required to sanitize the admissible parts of the message, and to verify the signature. To this end, the signer generates the tuple \(K_\mathrm {ADM}=(\mathsf {sk}^{i_1}_\varSigma ,\mathsf {sk}^{i_2}_\varSigma ,\dots )\) containing the secret keys of the admissible blocks \(i_j\in \mathrm {ADM}\) (properly padded to ensure a length-invariant encoding), and encloses it for the sanitizing party via encryption under \(\mathsf {pk}_\varPi \). In addition, we also hide the parameter \(\mathrm {ADM}\) (to ensure invisibility) and the signer public key (in anticipation of the strongly invisible version of our result) in this encryption. In summary, the signer creates an encryption C of \((\mathsf {pk}_\mathsf {Sig},K_\mathrm {ADM},\mathrm {ADM})\) under \(\mathsf {pk}_\varPi \) and then, in order to prevent changes in these administrative data, creates a (regular) signature \(\sigma _\mathrm {ADM}\) of the message \((1,\mathsf {pk}_\mathsf {San},V,C)\). Here, \(V:= (\mathsf {pk}^1_\varSigma ,\dots ,\mathsf {pk}^\ell _\varSigma )\) contains the verification keys for the single blocks, and again the initial bit ‘1’ is included for domain separation reasons. The “administrative” part of the signature is then \((V,C,\sigma _\mathrm {ADM})\), and the final signature is \(\sigma := (S,\sigma _\mathrm {MSG},V,C,\sigma _\mathrm {ADM})\).
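The signing procedure just described can be summarized in the following minimal Python sketch (cf. Fig. 2). Here \(\varSigma \) and \(\varPi \) are the regular signature and encryption schemes, the encode helper stands for a canonical injective encoding, and all names and argument orders are illustrative assumptions rather than the exact algorithms of Fig. 2.

```python
import pickle

def encode(x):
    """Placeholder for a canonical, injective encoding of tuples into strings."""
    return pickle.dumps(x)

def sss_sign(Sigma, Pi, pp, m, adm, sk_sigma, pk_sig, pk_san, pk_pi):
    """Schematic signing algorithm of Construction 4 (cf. Fig. 2)."""
    ell = len(m)
    # One ephemeral Sigma key pair per block; each block is signed on its own.
    block_keys = [Sigma.KGen(pp) for _ in range(ell)]
    S = tuple(Sigma.Sign(sk_i, m[i]) for i, (pk_i, sk_i) in enumerate(block_keys))
    V = tuple(pk_i for (pk_i, _) in block_keys)
    # "Message" part: binds m, S and both public keys; the leading 0 provides
    # domain separation from the administrative part.
    sigma_msg = Sigma.Sign(sk_sigma, encode((0, m, S, pk_sig, pk_san)))
    # "Administrative" part: the ephemeral secret keys of the admissible blocks,
    # ADM and the signer public key are encrypted under the sanitizer's key
    # (padding to a length-invariant encoding is omitted here).
    K_adm = tuple(sk_i for j, (pk_i, sk_i) in enumerate(block_keys, start=1) if j in adm.A)
    C = Pi.Enc(pk_pi, encode((pk_sig, K_adm, adm)))
    sigma_adm = Sigma.Sign(sk_sigma, encode((1, pk_san, V, C)))
    return (S, sigma_msg, V, C, sigma_adm)
```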

If the sanitizer receives a signature \(\sigma \) for a message m, he first checks the validity of the signatures S, \(\sigma _\mathrm {MSG}\) and \(\sigma _\mathrm {ADM}\), and recovers \(\mathrm {ADM}\) and the corresponding signing keys \(K_\mathrm {ADM}\) by decrypting C. Then, given sanitizing instructions \(\mathrm {MOD}\) valid w.r.t. \(\mathrm {ADM}\), he computes \(m'=\mathrm {MOD}(m)\) and updates the “message” part of \(\sigma \), leaving the “administrative” part unchanged. He obtains \(S'\) by substituting the relevant entries in S with new signatures of the modified blocks under the corresponding keys from \(K_\mathrm {ADM}\), and updates \(\sigma '_\mathrm {MSG}\) by re-signing \((0,m',S',\mathsf {pk}_\mathsf {Sig},\mathsf {pk}_\mathsf {San})\) under \(\mathsf {sk}'_\varSigma \). Finally, the sanitized signature for \(m'\) is given by \(\sigma '=(S',\sigma '_\mathrm {MSG},V,C,\sigma _\mathrm {ADM})\).
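Correspondingly, a minimal sketch of the sanitization procedure (cf. Fig. 3): the verification of S, \(\sigma _\mathrm {MSG}\) and \(\sigma _\mathrm {ADM}\) is elided, the check of the recovered signer key reflects the twist discussed in Sects. 1.2 and 4.3, and decode is the inverse of the encode placeholder above.

```python
import pickle

def decode(data):
    """Inverse of the encode placeholder above."""
    return pickle.loads(data)

def sss_sanitize(Sigma, Pi, m, mod, sigma, pk_sig, pk_san, sk_sigma_san, sk_pi):
    """Schematic sanitization algorithm of Construction 4 (cf. Fig. 3)."""
    S, sigma_msg, V, C, sigma_adm = sigma
    # Recover the administrative information (signature checks elided).
    pk, K_adm, adm = decode(Pi.Dec(sk_pi, C))
    if pk != pk_sig or not mod_matches_adm(mod, adm):
        return None  # wrong signer key (cf. Sect. 4.3) or inadmissible modification
    m_new = apply_mod(mod, m)
    # Re-sign exactly the modified blocks with the ephemeral keys from K_adm;
    # the keys are listed in increasing order of the admissible indices.
    keys = dict(zip(sorted(adm.A), K_adm))
    S_new = list(S)
    for (i, new_block) in mod.M:
        S_new[i - 1] = Sigma.Sign(keys[i], new_block)
    # Update the "message" part under the sanitizer's Sigma signing key.
    sigma_msg_new = Sigma.Sign(sk_sigma_san, encode((0, m_new, tuple(S_new), pk_sig, pk_san)))
    return m_new, (tuple(S_new), sigma_msg_new, V, C, sigma_adm)
```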

Immutability of the scheme is achieved by the fact that the sanitizer does not know the secret keys for the blocks he is not supposed to modify, and therefore cannot obtain suitable replacements for the signatures in S. Observe that the signature \(\sigma _\mathrm {MSG}\) immediately ensures public accountability, since it serves as a proof of who produced the overall signature. This also implies that our scheme does not achieve transparency. For technical reasons it does not support unlinkability either.
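Since \(\sigma _\mathrm {MSG}\) identifies the accountable party, the judge can be sketched in a few lines; the separate \(\varSigma \)-verification keys passed below are illustrative stand-ins for the corresponding components of \(\mathsf {pk}_\mathsf {Sig}\) and \(\mathsf {pk}_\mathsf {San}\), and encode is the placeholder introduced above.

```python
def sss_judge(Sigma, m, sigma, pk_sig, pk_san, pk_sigma_sig, pk_sigma_san):
    """Schematic judge for Construction 4: sigma_msg publicly identifies the
    accountable party, so no additional proof is needed."""
    S, sigma_msg, V, C, sigma_adm = sigma
    message_part = encode((0, m, S, pk_sig, pk_san))
    if Sigma.Verify(pk_sigma_sig, message_part, sigma_msg):
        return "Sig"  # the signer produced the signature
    if Sigma.Verify(pk_sigma_san, message_part, sigma_msg):
        return "San"  # the sanitizer produced the signature
    return None
```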

Fig. 2. Invisible sanitizable signature scheme from a public-key encryption scheme: parameter generation, signer and sanitizer key generation, and signing algorithms.

Remark 1

The above discussion presumes that some mild assumptions on \(\varSigma \) and \(\varPi \) are satisfied, which we will henceforth assume to be in place. In particular, all signature keys must be of fixed length L (this can be achieved via padding of the keys), and the message blocks, as well as the tuples of the form \((0,m,S,\mathsf {pk}_\mathsf {Sig},\mathsf {pk}_\mathsf {San})\) and \((1,\mathsf {pk}_\mathsf {San},V,C)\), must lie in the message space of \(\varSigma \) (this is no restriction, because the signatures constructed in [33, 35] support messages of arbitrary polynomial length). Also, \(\mathrm {ADM}\) must be encoded in a length-invariant manner, and tuples of the form \((\mathsf {pk}_\mathsf {Sig},K_\mathrm {ADM},\mathrm {ADM})\) must lie in the message space of \(\varPi \) (which can be achieved through hybrid encryption).

We now turn to a more rigorous definition of our sanitizable signature scheme, as well as to the statement of the correctness and security results.

Construction 4

Let \(\varSigma := (\mathsf {PGen},\mathsf {KGen},\mathsf {Sign},\mathsf {Verify})\) be a signature scheme and \(\varPi := (\mathsf {PGen},\mathsf {KGen},\mathsf {Enc},\mathsf {Dec})\) a public-key encryption scheme. We define a sanitizable signature scheme \(\mathsf {SSS}\) as in Figs. 2 and 3.

Fig. 3. Invisible sanitizable signature scheme from a public-key encryption scheme: verification, sanitization, judge and proof algorithms.

4.2 Security

The formal security statement for our construction is given in Theorem 5. Its proof can be found in the full version [21].

Theorem 5

If the signature scheme \(\varSigma \) is correct and unforgeable, and the encryption scheme \(\varPi \) is correct, then the sanitizable signature scheme \(\mathsf {SSS}\) in Construction 4 is correct. If \(\varSigma \) is unforgeable and \(\varPi \) is \(\mathsf {IND{-}}\mathsf {CPA}\)-secure, then \(\mathsf {SSS}\) is unforgeable, immutable, private, publicly accountable, and invisible.

4.3 Achieving Strong Invisibility

In the previous sections we have shown that invisibility is equivalent to \(\mathsf {IND{-}}\mathsf {CPA}\)-secure encryption, and that strong invisibility implies \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption. Here we show that the latter implication also holds in the other direction: If we use an \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure encryption scheme in our construction, then we get a strongly invisible sanitizable signature scheme.

Theorem 6

If the signature scheme \(\varSigma \) is correct and unforgeable, and the encryption scheme \(\varPi \) is correct, then the sanitizable signature scheme \(\mathsf {SSS}\) in Construction 4 is correct. If \(\varSigma \) is unforgeable and \(\varPi \) is \(\mathsf {IND{-}}\mathsf {CCA2}\)-secure, then \(\mathsf {SSS}\) is unforgeable, immutable, private, publicly accountable, and strongly invisible.

The proof of Theorem 6 can be found in the full version [21].

5 Conclusions

Our results show that building invisible sanitizable signature schemes from one-way functions alone is presumably hard, since deriving public-key encryption from one-wayness in a black-box way is infeasible [27]. This is in contrast to sanitizable schemes without the invisibility and transparency properties. Namely, Brzuska et al. [6] gave a simple construction of a non-invisible, non-transparent scheme based on regular signature schemes only.

An interesting open question concerns the minimal assumptions required to achieve transparency for sanitizable signatures, independently of the question regarding invisibility. It is possible to achieve all the common security properties, except for transparency (and except for invisibility, of course), using one-way functions alone [6, 9]. Current constructions achieving transparency are based on assumptions seemingly stronger than one-way functions, such as group signature schemes [7], zero-knowledge proofs [22], or (chameleon) hash functions [3, 10]. Finally, for a sanitizable signature scheme to be both transparent and invisible, public-key encryption is at least necessary, as discussed here.