
1 Introduction

Deniable public-key encryption (DPKE) is a primitive that allows a sender to successfully lie about which plaintext message was originally encrypted. In particular, suppose that Alice encrypts a plaintext m under some public key, using randomness r, to give ciphertext c which she sends to Bob. At some point in the future – perhaps Bob falls under suspicion – Alice is coerced to reveal the message she encrypted, together with the randomness she used. DPKE allows Alice to claim that she sent \(m^*\), by providing \(r^*\) such that \(\textsf{enc}(m^*, r^*)=\textsf{enc}(m, r)\). Beyond its immediate use case, deniable encryption finds applications in electronic voting, where deniability allows voters to cast their ballots without coercion and prevents vote-buying, as well as in secure multiparty computation.

The adversarial model for deniable encryption assumes strong adversaries that can coerce individuals to reveal messages they encrypted; it is thus reasonable to consider other advanced capabilities, such as the ability to subvert algorithms. Powerful adversaries can insert unreliability into cryptography via external (‘real-world’) infrastructure: whether by influencing standards bodies to adopt ‘backdoored’ parameters, inserting exploitable errors into software implementations, or compromising supply chains to interfere with hardware. The Snowden revelations showed that this is indeed the case; see the survey by Schneier et al. [13] which provides a broad overview of cryptographic subversion, with case studies of known subversion attempts.

Prior work considering subversion has usually had the aim of exfiltrating secret keys (in the context of symmetric encryption and digital signatures). Berndt and Liśkiewicz [5] show that a generic ASA against an encryption scheme can only embed a limited number of bits per ciphertext. More concretely, they show that no universal and consistent ASA is able to embed more than \(\log (\kappa )\) bits of information into a single ciphertext in the random oracle model [5, Theorem 1.4], where \(\kappa \) is the key length of the encryption scheme. In the setting of symmetric-key encryption, this is sufficient to successfully leak the secret key over multiple ciphertexts [2, 4]. However, for asymmetric primitives, subverting ciphertexts to leak the encryption key makes little sense as it is public, and leaking plaintext messages is not possible due to the limited bandwidth. Thus for generic ASAs against PKE, the best possible adversarial goal is to exfiltrate sufficient information to compromise confidentiality – knowledge of one bit of the underlying plaintext is sufficient for an adversary to break confidentiality in the sense of IND-CPA or IND$. But as Bellare et al. [4] argue, this is not an attractive goal for a mass surveillance adversary, who would rather recover plaintext messages.

Contributions. In this article we sketch out an argument to show that ASAs against DPKE schemes present an attractive opportunity for an adversary. We refer the reader to the extended version [3] for full details and an expanded argument. In this article, we begin by recalling notions of ASAs, including adversarial goals (undetectability and information exfiltration), and give an example of a generic ASA technique (rejection sampling). We then recall DPKE notions, including the formal definition, before applying ASA definitions to DPKE. This allows us to present our main contribution, namely a description of how an ASA against DPKE could be successfully realised to undermine deniability. In brief: an adversary who can subvert a DPKE scheme can transmit a commitment to the underlying plaintext using a subliminal channel. Later, the adversary can check the commitment against the message that the sender claims was sent.

Our work is the first to consider subverting deniable encryption, and we establish formal models of the adversarial goals as well as security notions for such an attack. In the extended version, we also consider how to mitigate ASAs against deniable encryption.

2 Notions of Subversion Attacks

We consider subversions of cryptographic schemes implementing encrypted communication between two parties. Abstractly, we consider a scheme \(\mathrm {\Pi }=(\mathrm {\Pi }.\textsf{gen},\{\mathrm {\Pi }.\textsf{S}^{(i)}\}_{0\le i< n},\mathrm {\Pi }.\textsf{R})\) consisting of three components: a key generation algorithm, a collection of \(n\in \mathbb {N}_{>0}\) sender algorithms, and a receiver algorithm \(\mathrm {\Pi }.\textsf{R}\) representing decryption. We let \(\mathrm {\Pi }.\textsf{S}^{(0)}\) represent encryption and write \(\mathrm {\Pi }.\textsf{S}:=\mathrm {\Pi }.\textsf{S}^{(0)}\). Our abstract treatment allows us to capture both PKE schemes and symmetric encryption (setting \(k_\textsf{S}=k_\textsf{R}\)). We give a generic syntax to the scheme \(\mathrm {\Pi }\) as follows: Key generation \(\mathrm {\Pi }.\textsf{gen}\) outputs a key pair \((k_\textsf{S},k_\textsf{R})\in \mathcal {K}_\textsf{S}\times \mathcal {K}_\textsf{R}\). Each sender algorithm \(\mathrm {\Pi }.\textsf{S}^{(i)}\), for \({0\le i<n}\), has associated randomness space \(\mathcal {R}^{(i)}\) together with input and output spaces \(\mathcal {X}^{(i)},\mathcal {Y}^{(i)}\) (respectively) and takes as input a sender key \(k_\textsf{S}\in \mathcal {K}_\textsf{S}\) and \(x\in \mathcal {X}^{(i)}\), outputting \(y\in \mathcal {Y}^{(i)}\); we write \(\mathcal {X}:=\mathcal {X}^{(0)},\mathcal {Y}:=\mathcal {Y}^{(0)}\). The receiver algorithm takes as input a receiver key \(k_\textsf{R}\in \mathcal {K}_\textsf{R}\) and \(y\in \mathcal {Y}\), outputting \(x\in \mathcal {X'}\); the special symbol \(\bot \) is used to indicate failure. We note that \(\mathcal {X}\subsetneq \mathcal {X'}\); in particular, \(\bot \in \mathcal {X'}\setminus \mathcal {X}\).
Lastly, we foreground the randomness used during encryption in our notation by writing \(y\leftarrow \mathrm {\Pi }.\textsf{S}(k_\textsf{S},x;r)\) for randomness \(r\) drawn from a randomness space \(\mathcal {R}\), splitting the input space accordingly as \(\mathcal {X}\cong \tilde{\mathcal {X}}\times \mathcal {R}\); dropping the last input is equivalent to sampling \({r\leftarrow _{\scriptscriptstyle \$}\mathcal {R}}\). This allows us to discuss particular values of r that arise during encryption.

Undetectable Subversion. In a nutshell, a subversion is undetectable if distinguishers with black-box access to either the original scheme or to its subverted variant cannot tell the two apart. A subversion should exhibit a dedicated functionality for the subverting party, but simultaneously be undetectable for all others. This apparent contradiction is resolved by parameterising the subverted algorithm with a secret subversion key, knowledge of which enables the extra functionality. We denote the subversion key space with \(\mathcal {I}_\textsf{S}\).

Formally: a subversion of the sender algorithm \(\mathrm {\Pi }.\textsf{S}\) of a cryptographic scheme consists of a finite index space \(\mathcal {I}_{\textsf{S}}\) and a family \(\mathcal {S}=\{\mathrm {\Pi }.\textsf{S}_i\}_{i\in \mathcal {I}_\textsf{S}}\) of algorithms such that for all \(i\in \mathcal {I}_{\textsf{S}}\) the algorithm \(\mathrm {\Pi }.\textsf{S}_i\) can syntactically replace the algorithm \(\mathrm {\Pi }.\textsf{S}\). As a security property we also require that the observable behaviour of \(\mathrm {\Pi }.\textsf{S}\) and \(\mathrm {\Pi }.\textsf{S}_i\) be effectively identical (for uniformly chosen \(i\in \mathcal {I}_\textsf{S}\)). This is formalised via the games \(\textrm{UDS}^0, \textrm{UDS}^1\) in Fig. 1(left). For any adversary \(\mathcal {A}\) we define the advantage \( \textsf{Adv}^{\textrm{uds}}_{\mathrm {\Pi }}(\mathcal {A}):=\left|\Pr [\textrm{UDS}^1(\mathcal {A})]-\Pr [\textrm{UDS}^0(\mathcal {A})]\right|\) and say that family \(\mathcal {S}\) undetectably subverts algorithm \(\mathrm {\Pi }.\textsf{S}\) if \(\textsf{Adv}^{\textrm{uds}}_{\mathrm {\Pi }}(\mathcal {A})\) is negligibly small for all realistic \(\mathcal {A}\).
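Operationally, the UDS games give the distinguisher oracle access to either the genuine sender or a subverted sender with uniformly chosen index. The following Python sketch (all names and the trivial stand-in algorithms are ours, not part of the formal model) estimates a distinguisher's UDS advantage empirically by playing both games:

```python
import os, secrets

def uds_advantage(real_alg, subverted_family, distinguisher, trials=2000):
    # Empirical estimate of |Pr[UDS^1(A)] - Pr[UDS^0(A)]|: in UDS^1 the
    # distinguisher queries a subverted sender S_i with uniform index i,
    # in UDS^0 the genuine sender.
    hits = [0, 0]
    for b in (0, 1):
        for _ in range(trials):
            if b == 1:
                i = secrets.token_bytes(16)  # uniform subversion index
                oracle = lambda x, i=i: subverted_family(i, x)
            else:
                oracle = real_alg
            hits[b] += distinguisher(oracle)  # distinguisher outputs 0 or 1
    return abs(hits[1] - hits[0]) / trials

# Trivial stand-ins: both the "real" and "subverted" senders output 16
# uniform bytes, so any detector's advantage is statistically near zero.
real = lambda x: os.urandom(16)
subverted = lambda i, x: os.urandom(16)
guess = lambda oracle: oracle(b"m")[0] & 1  # a (useless) detector
```

With these stand-ins the estimate stays close to zero; a genuine detection strategy against a flawed subversion would push it towards 1.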

Subliminal Information Exfiltration. Abstractly, the aim of an adversary is to exfiltrate some subliminal information. In the context of prior work considering symmetric encryption, this information typically represents the secret key. We formalise this goal as the \(\textsf{MR}\) game in Fig. 1(centre), which assumes a passive attack in which the adversary eavesdrops on communication, observing the transmitted ciphertexts. We allow the adversary some influence over sender inputs, with the aim of closely modelling real-world settings. This influence on the sender inputs \(x\) is restricted by assuming a stateful ‘message sampler’ algorithm \(\textrm{MS}\) that produces the inputs to \(\mathrm {\Pi }.\textsf{S}\) used throughout the game. For any message sampler \(\textrm{MS}\) and adversary \(\mathcal {A}\) we define the advantage \( \textsf{Adv}^{\textsf{mr}}_{\mathrm {\Pi },\textrm{MS}}(\mathcal {A}):=\Pr [\textsf{MR}(\mathcal {A})]. \) We say that subversion family \(\mathcal {S}\) is key recovering for passive attackers if for all practical \(\textrm{MS}\) there exists a realistic adversary \(\mathcal {A}\) such that \(\textsf{Adv}^{\textsf{mr}}_{\mathrm {\Pi },\textrm{MS}}(\mathcal {A})\) reaches a considerable value (e.g., 0.1).

Fig. 1. Left: Game \(\textrm{UDS}\) modelling sender subversion undetectability for a scheme \(\mathrm {\Pi }\). Centre: Game \(\textsf{MR}\) modelling key recoverability for passive adversaries. Right: rejection sampling subversion \(\mathrm {\Pi }.\textsf{S}_i\) of encryption algorithm \(\mathrm {\Pi }.\textsf{S}\) and corresponding message recovering adversary \(\mathcal {A}\).

Generic Method: Rejection Sampling. As an example, we describe a generic method to embed a subliminal message \(\mu \) with \(\left|\mu \right|=\ell _\mu \) into ciphertexts of an encryption scheme \(\mathrm {\Pi }.\textsf{S}\). Essentially, when computing a ciphertext, the subverted algorithm uses rejection sampling to choose randomness that results in a ciphertext encoding the subliminal message. The subverted encryption algorithm \(\mathrm {\Pi }.\textsf{S}_i\) of a scheme \(\mathrm {\Pi }\) is given in Fig. 1(right) together with the corresponding message recovery adversary \(\mathcal {A}\). Subversion \(\mathrm {\Pi }.\textsf{S}_i\) is parameterised by a large index space \(\mathcal {I}_\textsf{S}\), a constant \(\ell _\mu \) and a PRF \(F_i\). For the PRF we require that it be a family of functions \(F_i:\mathcal {Y}\rightarrow \{0,1\}^{\ell _\mu }\) (that is: a pseudo-random mapping from the ciphertext space to the set of strings of length \(\ell _\mu \)).

We note that the subverted encryption algorithm \(\mathrm {\Pi }.\textsf{S}_i\) will resample the randomness \(2^{\ell _\mu }\) times on average, since each trial succeeds with probability \(2^{-\ell _\mu }\). Longer messages therefore result in exponentially slower running times; in practice, this limits the attack to short messages (a few bits at most).
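The rejection-sampling subversion and the corresponding recovery adversary can be sketched as follows. This is a toy Python model: the "encryption" routine and all names are stand-ins of our own, not a real scheme, and the PRF \(F_i\) is instantiated with truncated HMAC-SHA256.

```python
import hashlib, hmac, os

ELL = 4  # length ell_mu of the subliminal message, in bits

def prf(index_key: bytes, ciphertext: bytes) -> int:
    # F_i: pseudo-random map from the ciphertext space to ELL-bit
    # strings, instantiated here with truncated HMAC-SHA256.
    d = hmac.new(index_key, ciphertext, hashlib.sha256).digest()
    return d[0] >> (8 - ELL)

def toy_enc(key: bytes, msg: bytes, r: bytes) -> bytes:
    # Stand-in randomised "encryption" (NOT a real cipher): all that
    # matters here is that fresh coins r give a fresh-looking ciphertext.
    pad = hashlib.sha256(key + r).digest()
    return r + bytes(a ^ b for a, b in zip(msg, pad))

def subverted_enc(key: bytes, msg: bytes, index_key: bytes, mu: int) -> bytes:
    # Subverted sender S_i: rejection-sample the coins until the
    # resulting ciphertext c satisfies F_i(c) == mu.
    while True:
        r = os.urandom(16)
        c = toy_enc(key, msg, r)
        if prf(index_key, c) == mu:
            return c

def extract(index_key: bytes, c: bytes) -> int:
    # The subverting adversary recovers mu from the ciphertext alone.
    return prf(index_key, c)
```

Embedding \(\ell _\mu =4\) bits costs around \(2^4=16\) encryption attempts per ciphertext, matching the exponential cost noted above.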

3 Deniable Public Key Encryption

DPKE allows a sender to lie about the messages that were encrypted. In particular, suppose that a user encrypts message m to obtain c, which is sent to the recipient. DPKE allows the sender to choose a different message \(m^*\) and reveal fake randomness \(r^*\) which explains c as the encryption of \(m^*\). Notice that this necessarily implies that the scheme cannot be perfectly correct, as \(\textsf{dec}(\textsf{enc}(m^*, r^*)) = m \ne m^*\). This counter-intuitive observation is resolved by noticing that for a given message m there are ‘sparse trigger’ values \({r_i}\) such that encrypting m with such an \(r_i\) results in a ciphertext that does not decrypt to m. Deniable public-key encryption schemes rely on the fact that finding such \(r_i\) should be easy with some trapdoor knowledge, and hard otherwise.

In this article we focus on non-interactive sender deniable public-key encryption, as introduced by Canetti et al. [6], who showed that a sender-deniable scheme can be used to construct receiver deniable (and thus bi-deniable) schemes. To date, no practical deniable scheme has been proposed. Either deniability is not practically achievable (a typical example is Canetti et al.’s scheme [6], whose ciphertexts grow inversely proportionally to the deniability probability), or else the construction requires strong assumptions such as iO or functional encryption [7, 10, 12]. Recent work by Agrawal et al. [1] is promising in this regard, as their construction provides compact ciphertexts and is based on the security of Learning with Errors. Nevertheless, it requires a running time that is inversely proportional to the detection probability.

DPKE Definition. A DPKE scheme \(\textsf{DE}= (\textsf{DE}.\textsf{gen},\textsf{DE}.\textsf{enc},\textsf{DE}.\textsf{dec},\textsf{DE}.\textsf{Fake})\) consists of a tuple of algorithms together with key spaces \(\mathcal {K}_\textsf{S},\mathcal {K}_\textsf{R}\), randomness space \(\mathcal {R}\), a message space \(\mathcal {M}\) and a ciphertext space \(\mathcal {C}\).

  • The key-generation algorithm \(\textsf{DE}.\textsf{gen}\) returns a pair \(( pk , sk )\in \mathcal {K}_\textsf{S}\times \mathcal {K}_\textsf{R}\) consisting of a public key and a private key.

  • The encryption algorithm \(\textsf{DE}.\textsf{enc}\) takes a public key \( pk \), randomness \(r\in \mathcal {R}\) and a message \(m\in \mathcal {M}\) to produce a ciphertext \(c\in \mathcal {C}\).

  • The decryption algorithm \(\textsf{DE}.\textsf{dec}\) takes a private key \( sk \) and a ciphertext \(c\in \mathcal {C}\), and outputs either a message \(m\in \mathcal {M}\) or the rejection symbol \(\bot \notin \mathcal {M}\).

  • The faking algorithm \(\textsf{DE}.\textsf{Fake}\) takes public key \( pk \), a message-randomness pair \((m, r)\) and fake message \(m^*\), and outputs faking randomness \({r^*\in \mathcal {R}}\).
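The four-algorithm syntax can be summarised as a minimal interface. The following Python sketch fixes the types only (the names are ours); it is not a construction, and any instantiation would additionally need to satisfy correctness, IND-CPA security and deniability.

```python
from abc import ABC, abstractmethod
from typing import Optional, Tuple

class DPKE(ABC):
    """Syntax of a sender-deniable PKE scheme DE = (gen, enc, dec, Fake)."""

    @abstractmethod
    def gen(self) -> Tuple[bytes, bytes]:
        """Return a key pair (pk, sk)."""

    @abstractmethod
    def enc(self, pk: bytes, m: bytes, r: bytes) -> bytes:
        """Encrypt message m under pk with explicit randomness r."""

    @abstractmethod
    def dec(self, sk: bytes, c: bytes) -> Optional[bytes]:
        """Decrypt c; None plays the role of the rejection symbol."""

    @abstractmethod
    def fake(self, pk: bytes, m: bytes, r: bytes, m_star: bytes) -> bytes:
        """Return r* explaining enc(pk, m, r) as an encryption of m*."""
```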

A scheme \(\textsf{DE}\) is correct and secure if the key generation, encryption and decryption algorithms, considered as a PKE scheme \((\textsf{DE}.\textsf{gen},\textsf{DE}.\textsf{enc},\textsf{DE}.\textsf{dec})\), satisfy the standard correctness and IND-CPA security notions of public-key encryption. We formalise the deniability of the scheme via the game \(\textsf{INDEXP}\), the details of which are given in the extended paper and described briefly here. Essentially, the \(\textsf{INDEXP}\) game is an indistinguishability game in which a distinguisher must choose between two cases: \(\textsf{INDEXP}^0\) represents the adversary’s view of an honest encryption of \(m^*\); \(\textsf{INDEXP}^1\) represents the adversary’s view when the sender lies about the underlying plaintext. The corresponding advantage is, for any distinguisher \(\mathcal {A}\), given by

$$ \textsf{Adv}^{\textsf{indexp}}_{\textsf{DE}}(\mathcal {A}):=\left|\Pr [\textsf{INDEXP}^0(\mathcal {A})]-\Pr [\textsf{INDEXP}^1(\mathcal {A})]\right|. $$

Formal Definition of Subverted DPKE. We note that a DPKE scheme satisfies the generic syntax introduced above in Sect. 2, with key generation algorithm \(\mathrm {\Pi }.\textsf{gen}=\textsf{DE}.\textsf{gen}\), sender algorithms \((\mathrm {\Pi }.\textsf{S}^{(0)},\mathrm {\Pi }.\textsf{S}^{(1)})=(\textsf{DE}.\textsf{enc},\textsf{DE}.\textsf{Fake})\) and receiver algorithm \(\mathrm {\Pi }.\textsf{R}=\textsf{DE}.\textsf{dec}\). We may thus apply the generic notions of subversion and undetectability introduced in Sect. 2. In the extended version, we furthermore consider the game \(\textsf{subINDEXP}\) modelling the adversary’s ability to compromise the deniability property of a subverted scheme.

4 Subverting DPKE

Now that we have introduced the notions of ASAs and DPKE, we are ready to discuss ASAs against DPKE. As we set out in the introduction, the idea is for the subverted DPKE scheme to commit to the actual message encrypted; this undermines the ability of the sender to later claim that they sent a different message. The most obvious approach is to subvert the scheme so that the randomness commits to the message. This way, when Alice is coerced by the adversary to reveal her message and randomness, the adversary can test the revealed pair against the commitment. When Alice claims that she sent \(m^*\), by providing \(r^*\) such that \(\textsf{enc}(m^*, r^*)=\textsf{enc}(m, r)\), she would need \(r^*\) to commit to \(m^*\); forging such an \(r^*\) should be hard as long as the commitment is provided by a cryptographically secure digital signature or even a MAC (with the authentication key hidden from Alice). This attack route is feasible and applies generically to all DPKE schemes, as the security definition for DPKE requires Alice to produce explanatory randomness when coerced.
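To illustrate the randomness-commitment idea, the following Python sketch (our own instantiation choice, with the surrounding encryption elided) derives the sender's coins as a MAC over the real plaintext and shows the coercer's check:

```python
import hashlib, hmac

def subverted_randomness(subversion_key: bytes, m: bytes) -> bytes:
    # Derive the "random" coins deterministically as a MAC over the real
    # plaintext; without the subversion key, the output is
    # computationally indistinguishable from uniform coins.
    return hmac.new(subversion_key, m, hashlib.sha256).digest()

def coercion_check(subversion_key: bytes,
                   claimed_m: bytes, claimed_r: bytes) -> bool:
    # The coercer recomputes the commitment: a claim (m*, r*) passes
    # only if r* is the MAC of m*.
    expected = hmac.new(subversion_key, claimed_m, hashlib.sha256).digest()
    return hmac.compare_digest(expected, claimed_r)
```

Alice, not knowing the authentication key, cannot produce an \(r^*\) passing the check for any \(m^*\ne m\) without breaking the MAC's unforgeability.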

A second technique is to use subversion techniques (such as the rejection sampling approach given as an example in Sect. 2) to embed a subliminal channel in ciphertexts, such that the channel transmits a commitment to the message. The generic rejection sampling technique is unable to provide enough bandwidth to transmit sufficiently long signatures to prevent Alice from forging the commitment; however, non-generic techniques may be possible depending on the particular scheme and instantiation. Furthermore, we note that it is a feature of most proposed deniable encryption schemes that a large amount of randomness is consumed in the course of encryption, and that this randomness is sampled in chunks. This means that if the algorithms are considered in a non-black-box fashion, then rejection sampling could potentially be applied to each chunk of randomness, resulting in a sufficiently large subliminal channel.
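Under this non-black-box assumption, the per-chunk idea can be sketched as follows. This is a toy Python model (all names are hypothetical, and "the part of the ciphertext influenced by one randomness chunk" is simulated by a hash): with k chunks carrying \(\ell \) bits each, the channel transmits \(k\ell \) bits per ciphertext, 32 bits below.

```python
import hashlib, hmac, os

ELL = 4          # subliminal bits per randomness chunk
N_CHUNKS = 8     # 8 chunks * 4 bits = a 32-bit channel per ciphertext

def tag(index_key: bytes, block: bytes) -> int:
    # PRF mapping an observable ciphertext block to ELL bits.
    return hmac.new(index_key, block, hashlib.sha256).digest()[0] >> (8 - ELL)

def chunk_block(key: bytes, msg: bytes, j: int, r: bytes) -> bytes:
    # Toy stand-in for the part of the ciphertext influenced by the
    # j-th randomness chunk; NOT a real encryption scheme.
    return hashlib.sha256(key + msg + bytes([j]) + r).digest()[:8]

def embed(key: bytes, msg: bytes, index_key: bytes, payload: int) -> list:
    # Rejection-sample each randomness chunk so that its ciphertext
    # block leaks the corresponding ELL bits of the payload.
    blocks = []
    for j in range(N_CHUNKS):
        target = (payload >> (ELL * j)) & (2**ELL - 1)
        while True:
            r = os.urandom(16)
            b = chunk_block(key, msg, j, r)
            if tag(index_key, b) == target:
                blocks.append(b)
                break
    return blocks

def recover(index_key: bytes, blocks: list) -> int:
    # The adversary reads ELL bits from each block and reassembles them.
    payload = 0
    for j, b in enumerate(blocks):
        payload |= tag(index_key, b) << (ELL * j)
    return payload
```

Each chunk costs \(2^{\ell }\) trials on average, so the per-chunk cost stays small even though the total channel is large enough for a short MAC.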

Lastly, there is another subversion approach that at first glance seems appealing, but which turns out to be unworkable; namely, to target the faking algorithm. A subverted faking algorithm \(\textsf{DE}.\textsf{Fake}_i( pk ,m,r,m^*)\) could output subverted \(r^*\) which alerts the adversary to the fact that \(m^*, r^*\) are fake; for example, if \(r^*\) commits to the real message m. However, this fake randomness \(r^*\) still needs to be convincing from the point of view of the deniability of the scheme – the scheme’s security properties should be maintained by the subversion, otherwise a detector playing the \(\textrm{UDS}\) game will be able to tell that the algorithm is subverted. In particular, \(r^*\) should satisfy \(\textsf{DE}.\textsf{enc}( pk ,m^*,r^*)=c\). However, for a DPKE scheme there is no reason why this should hold for an arbitrary value of \(r^*\). This approach does not seem workable without adding considerable structure to the subverted scheme, which would make it easily detectable.

5 Conclusions

Deniable communication is a subtle concept and it is unclear what it should mean ‘in the real world’. Intuitively, the notion is clear: deniability should allow Alice to plausibly evade incrimination when communicating. However, the adversarial model and the evaluation of real-world protocols claiming deniability are not agreed upon (should Alice be able to claim that she did not participate in a particular communication?). Celi and Symeonidis [8] give an overview of the current state of play and a discussion of open problems. Deniable encryption is one particular primitive whose definition is widely agreed upon in the literature and for which the applications are clear (including in e-voting, multi-party computation and protection against coercion). The threat model for deniable encryption usually considers an adversary who is willing to coerce users; in this work we extend the model to consider adversaries who also undermine deniability by using subversion attacks. This seems a reasonable additional assumption to make of an adversary who is willing to engage in coercion. We hope that our work helps to elucidate some of the issues involved in designing deniable schemes and refine the threat model for deniable encryption.